The Unraveling, Part 1

For some seventy years, AT&T, parent company of the Bell System, was all but unrivaled in domestic American telecommunications. For most of that time, General Telephone, later known as GT&E and then simply GTE, was AT&T’s only rival of any significance. Yet it accounted for a mere two million telephone lines at mid-century, less than five percent of the total market. AT&T’s period of dominance – from the gentleman’s agreement with the government in 1913 until that same government dismembered it in 1982 – roughly marked the beginning and end of a peculiar political era in the United States: a time when the citizenry was capable of believing in the benevolence and effectiveness of a large, bureaucratic system.

It is difficult to impugn AT&T on its outward performance during this time. Between 1955 and 1980, AT&T added nearly a billion new miles of voice telephone circuit, most of it in microwave radio. Its cost per circuit mile dropped by an order of magnitude over the same period. These cost reductions were passed on to the consumer, who saw continuously decreasing real prices (i.e., adjusted for inflation) on their telephone bills. Whether measured in percentage of households with telephone service (90% by the 1970s), signal-to-noise ratio, or reliability, the United States could consistently boast the best telephone service in the world. At no time did AT&T show any sign of resting on the laurels of its existing technical infrastructure. Its research arm, Bell Labs, made fundamental contributions to the development of computers, solid state electronics, lasers, fiber optics, satellite communications, and more. Only by comparison with a rate of change exceptional in all of human experience – i.e. that of the computer industry – could it be made to seem a foot-dragging laggard.1

Nonetheless, by the 1970s, the idea that AT&T was holding back innovation gained sufficient political momentum to bring about its (temporary) dissolution. The unraveling of the system of cooperation between AT&T and the U.S. federal government came about slowly, and took decades to play out. It began when the Federal Communications Commission (FCC) decided to make some little tweaks to the system – they only wanted to tug away one little loose thread over here, then another over there. But their attempts to tidy up around the edges loosened more and more of the fabric. By the mid-1970s they looked on in confusion at the mess they had made. Then the Justice Department and the federal courts stepped in with a pair of scissors and had done with it. The single most important agent driving these changes, outside of the government itself, was a little upstart called Microwave Communications, Incorporated. But before we get to all that, let us see how the federal government and AT&T interacted in the happier times of the 1950s.

Status Quo

As we saw last time, there were two distinct areas of law that checked the power of industrial giants like AT&T in the twentieth century. On the one hand, there was regulatory law. In AT&T’s case regulatory oversight took the form of the Federal Communications Commission, created by the 1934 Communications Act. On the other hand stood antitrust law, whose enforcement was the purview of the Justice Department. These two branches of law were of a very different character.
If the FCC was a lathe, convening regularly to make small decisions that gradually shaped AT&T’s behavior, antitrust law was a fire axe: usually stowed away in a cabinet, but not at all subtle in its effects when deployed.2 In the 1950s AT&T received threats from both directions, but both were settled fairly amicably, with little imposition on AT&T’s core business. Neither the FCC nor the Justice Department challenged the assumption that AT&T would remain the dominant supplier of telephone equipment and services in the United States.

Hush-a-Phone

Let’s first consider AT&T’s relationship with the FCC, by way of an unusual little case regarding foreign attachments. Since the 1920s, a tiny company in Manhattan called the Hush-a-Phone Corporation had made its living by selling a cup that attached to the speaking end of the telephone. By speaking directly into the device, a telephone user could avoid being easily overheard by those nearby, and also block out some of the background noise in his or her environment (in a busy sales office, for example). In the 1940s, however, AT&T began to crack down on such foreign attachments – that is to say, any equipment connected to the Bell System that was not provided by the Bell System.

An early model Hush-a-Phone attached to a candlestick phone

According to AT&T, the humble Hush-a-Phone was such a foreign attachment, and thus any subscriber using such a device on their phone was subject to disconnection for being in violation of their terms of service. As far as is known, such a threat was never carried out, but the possibility probably did cost Hush-a-Phone some business, especially from retailers who refused to stock it. Harry Tuttle, inventor of the Hush-a-Phone and “president”3 of the business, chose to challenge this policy, and filed a complaint with the FCC in December 1948.

The FCC had the power to promulgate new rules, like a legislature, but also to resolve disputes, like a court. It was in this latter capacity that the commission acted in 1950, when Tuttle came before it. Tuttle came not alone, but armed with expert witnesses from Cambridge, Massachusetts, prepared to testify that the Hush-a-Phone’s acoustic qualities were superior to the alternative – a cupped hand.4 The Hush-a-Phone position rested on these facts: that the Hush-a-Phone silencer was better than the only available alternative, that as a mere physical attachment it could not in any way harm the telephone network, and that private users should have the right to make their own decisions about equipment they found beneficial. From a contemporary point of view, these arguments seem indisputable, and one may find AT&T’s position absurd on its face – what right could they have to forbid private individuals from attaching something to a phone in their own home or office? Should Apple have the right to forbid you from putting your iPhone in a case?

AT&T’s main agenda, however, was not to attack the Hush-a-Phone specifically, but to defend the general principle of the ban on foreign attachments. There were several cogent economic and public interest arguments in favor of that principle. To begin with, the disposition of an individual AT&T telephone set was not a private matter, insofar as it could connect to millions of other subscribers’ sets, and anything that impaired quality on the call could potentially affect any of those other users. Also, one must remember that at this time telephone companies such as AT&T owned the entirety of the physical telephone network.
Their ownership extended from the central switching stations down the wires and on into the telephones themselves, which were leased to customers. Thus from a simple private property point of view, it seemed sensible that the phone company should be able to control what was done with its equipment. AT&T had invested many millions of dollars over many decades developing the most complex machine known to man. Why should every two-bit entrepreneur with a wild idea assume the right to leech off of this accomplishment? Finally, it bears considering that AT&T also offered a variety of first-party attachments, from signaling lights to shoulder rests, all of which were leased (typically by businesses), and all of which put money into the coffers of AT&T that helped subsidize the low-cost, bare-bones local service to ordinary subscribers. Transferring that revenue into the pockets of private entrepreneurs would impair this system of redistribution.

Whatever merit you may find in these arguments, they convinced the commission – the FCC unanimously confirmed that AT&T’s right to end-to-end control of their network extended as far as a simple cup attached to the rim of a telephone handset. In 1956, however, a federal appeals court overturned the FCC decision. The Hush-a-Phone may have degraded voice quality, the judges ruled, but only for subscribers who chose to use it, and AT&T had no grounds for overruling this private decision. Moreover, AT&T had neither the ability nor the intent to prevent users from muffling their voices in other ways. “To say that a telephone subscriber may produce the result in question by cupping his hand and speaking into it,” the judges wrote, “but may not do so by using a device which leaves his hand free to write or do whatever else he wishes, is neither just nor reasonable.” Though the judges seemed disgusted by AT&T’s audacity in this case, their ruling was narrow – they did not throw out the ban on foreign attachments altogether, but merely affirmed the specific right of telephone subscribers to use a Hush-a-Phone, should they so desire.5 AT&T amended its tariffs to indicate that foreign attachments that were electrically or inductively connected to the phone system remained forbidden. Nonetheless, it served as a first warning that other parts of the federal government might not treat AT&T as gently as their regulators at the FCC.

Consent Decree

Meanwhile, in the very same year as the appeals court decision on Hush-a-Phone, the U.S. Justice Department closed an antitrust investigation against AT&T. The origins of that suit went all the way back to the origins of the FCC itself. There were two salient facts of the matter: 1) Western Electric, by itself an industrial giant, controlled 90% of the telephone equipment market and served as sole supplier of all such equipment for the Bell System, from the telephone stations leased to end users to the coaxial cables and microwave towers used to transmit calls cross-country. And 2), the entire regulatory apparatus that restrained the AT&T monopoly relied on capping its profits as a percentage of its capital investments.

Here was the rub. One with a suspicious mind might easily imagine a conspiracy within Bell to take advantage of these two facts. Western Electric could inflate its prices to the rest of the Bell System (for example, charging $5 for a length of wire when the fair price would be $4), artificially increasing the dollar amount of its capital investments, and thus the absolute profits of the company. Say, for example, that the Indiana regulatory commission set the maximum return on capital for Indiana Bell at 7%. Assume that Western Electric charged it $10,000,000 for new equipment in 1934. The company would be allowed to earn $700,000 in additional profits – but if the fair price for that equipment were only $8,000,000, it should have been allowed only $560,000.
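To make the arithmetic of rate-of-return regulation concrete, here is a minimal sketch; the 7% cap and the dollar figures are the illustrative numbers used above, and the function name is my own invention, not anything from the regulatory record.

```python
def allowed_profit(rate_base: int, allowed_return_pct: int) -> int:
    """Profit permitted under rate-of-return regulation: a percentage of invested capital."""
    return rate_base * allowed_return_pct // 100

cap = 7                       # 7% maximum return set by the state commission
inflated_base = 10_000_000    # equipment as priced by Western Electric
fair_base = 8_000_000         # hypothetical "fair" price for the same equipment

print(allowed_profit(inflated_base, cap))  # 700000
print(allowed_profit(fair_base, cap))      # 560000
# Every dollar of padding in the rate base raises the permitted profit by seven cents.
```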
Congress, concerned that shenanigans of this very sort might be in progress, made a study of the relationship between Western Electric and the operating companies part of the FCC’s initial mandate. The study took five years to complete and ran to 700 pages – thoroughly documenting the history of the Bell System, its corporate, technological, and financial structure, and all of its many operations, foreign and domestic. On the matter of the original question, the authors found it basically impossible to determine whether Western Electric’s prices were fair or not – no comparable entity existed to compare them to. Nonetheless, they recommended that competition be forced into the telephone equipment market, in order to ensure fair practices and stimulate greater efficiency.

The seven FCC commissioners in 1937. Handsome devils.

By the time the report came to fruition, however, in 1939, war loomed on the horizon. No one wanted to interfere with the communications backbone of the nation at such a time. A decade later, however, the Truman Justice Department revived the suspicions about the relationship between Western Electric and the rest of the Bell System. Rather than a bulky and somewhat noncommittal report, those suspicions now took the rather more pithy form of an antitrust suit. It requested that the court not only require AT&T to divest Western Electric, but also carve up the latter into three constituent companies, willing a competitive market for telephone equipment into existence through judicial decree.

AT&T had at least two reasons to be nervous. First, the Truman administration had proven itself extremely aggressive in the enforcement of antitrust law. In 1949 alone, in addition to the AT&T action, the Justice Department and the Federal Trade Commission brought suit against Eastman Kodak, grocery giant A&P, Bausch and Lomb, the American Can Company, the Yellow Cab Company, and many others. Second was the precedent of United States v. Pullman Company. Much like AT&T, Pullman had a service arm, which operated railroad sleeping cars, and a manufacturing arm that built them. Also much like AT&T, the ubiquity of Pullman’s service, and the fact that they would only serve Pullman-built cars, meant that no one could compete on the manufacturing side. As with AT&T, despite the suspicious relationship, no evidence arose of abusive pricing by Pullman, or even of unhappy customers. Nonetheless, in 1943, a federal court ruled that the Pullman Company was in violation of antitrust law and should have its service and manufacturing businesses separated.

Ultimately, however, AT&T escaped dismemberment, and indeed never faced a day in court. Instead, after years of limbo, it agreed to a consent decree with the new Eisenhower administration in 1956 to resolve the suit. The change in attitude of the government was partly due to a change in administration. The Republicans were significantly less hostile to big business than the New Deal Democrats.
But a change in economic conditions also deserves some of the credit – the continued growth of the economy from strength to strength after the war undermined the argument, popular with the New Dealers, that the prevalence of big business inevitably caused recessions by inhibiting competition and putting a floor under prices. Finally, the growing scope and scale of the Cold War with the Soviet Union also played a role. AT&T served the War and Navy Departments ably during the Second World War and continued to do the same for their successor, the Department of Defense. In particular, in the very year of the antitrust suit, Western Electric had begun operating the Sandia nuclear weapons lab in Albuquerque, New Mexico. Without that lab, the United States could not develop and build new nuclear weapons, and without nuclear weapons it could not maintain a credible threat to the Soviet forces in Eastern Europe. The Defense Department therefore had no desire to see a weakened AT&T, and lobbied the rest of the administration on behalf of its contractor.

The terms of the decree required AT&T to restrict its operations to the regulated telecommunications business. The Justice Department allowed a few exceptions, most notably government contract work – it wouldn’t do to outlaw Sandia Labs, after all. The government also required AT&T to license and provide technical advice on all its present and future patents at reasonable rates to any domestic entities. Given the panoply of innovations that continued to be forged at Bell Labs, these easy licensing terms would help propel the development of American high-tech companies for decades to come. Both of these requirements would have a major effect on how computer networks took shape in the United States, but they did not affect AT&T’s role as the de facto monopoly provider of domestic telecommunications services. The fire axe had been restored to its cabinet, for now. But very soon, a new threat came from the unexpected quarter of the FCC. The lathe, always so smooth and gradual in its operation, suddenly began to bite deeper.

The First Thread

AT&T had long offered private line service, which allowed a customer (typically a large company or government department) to lease one or more telephone lines for its exclusive use. Many organizations with intensive internal communication needs – such as the television networks, large oil companies, railroads, and the U.S. Department of Defense – found this more convenient, economical, and secure than relying on the public telephone network.

Bell engineers setting up a private radiotelephone line for a power company in 1953

The advent of microwave relay towers in the 1950s, however, reduced the entry costs for long-distance service to the point where it became attractive to many of these organizations to simply build their own private network, rather than lease one from AT&T. The political philosophy of the FCC at this point, well-established in many rulings, held that rival telecommunications services should only be allowed where the incumbent could not or would not provide an equivalent service. To do otherwise would be to encourage wasteful inefficiencies, and to disrupt the carefully balanced system of regulation and rate-averaging that held AT&T’s monopoly in check while maximizing the services it offered to the public. The established precedent thus left no room for a general opening of private microwave service.
As long as AT&T was willing and able to offer a private line service, other unregulated carriers had no right to enter that business. And so an alliance of interested parties set out to challenge that precedent. They were almost all large corporations with the wherewithal to build and operate their own private networks. Among the most prominent in this coalition was the petroleum extraction industry, represented by the American Petroleum Institute (API). With pipelines snaking across whole continents, wells spread across vast, remote oilfields, and exploratory ships and drill sites scattered across the globe, the oil companies wanted communication systems that would precisely serve their own needs. Companies like Sinclair and Humble Oil envisioned using microwave networks to monitor the status of pipelines, control derrick motors remotely, and communicate with offshore rigs, and they didn’t want to wait for AT&T’s say-so. But the petroleum industry was not alone. Virtually every form of large business, from railroads and trucking companies to retailers and automakers, petitioned the FCC in favor of private microwave systems.

In the face of this pressure, the FCC opened hearings in November of 1956 to decide whether a new frequency band (above 890 megahertz) should be opened to such networks. Given that virtually the only parties to speak against private microwave were the common carriers themselves, the conclusion seemed nearly foregone. Even the Justice Department, feeling that AT&T had somehow bamboozled it in the recent consent decree, put in a word in favor of private microwave. This became a habit – for the next twenty years, Justice would consistently put its nose in the FCC’s business, opposing AT&T and favoring new market entrants in proceeding after proceeding.

AT&T’s strongest counter-argument, one to which it would return again and again, was that the new entrants would damage the delicate balance of the regulatory system by cream-skimming. That is to say, large businesses could and would choose to build out their networks along low-cost, high-traffic routes (which were most profitable to AT&T), and then lease AT&T private lines where it was more expensive to build. The costs would ultimately be borne by ordinary telephone subscribers, whose low rates were subsidized by the highly profitable long-distance services which large companies would no longer pay for.

Nonetheless, the FCC ruled in 1959, in the so-called “Above 890” decision, that any entrant could build its own private long-distance network. This marked a break in federal regulatory policy. It called into question the fundamental assumption that AT&T should act as an engine of redistribution, charging higher rates to deep-pocketed customers in order to offer inexpensive telephone service to local users in small towns, rural areas, and poor neighborhoods. But the FCC still believed it could have its cake and eat it too. It told itself that the change was minor. It affected only a tiny percentage of AT&T traffic, and did not touch the core of the public service philosophy which had guided telephone regulation for decades. The FCC had, after all, only tugged at one loose thread. And indeed, by itself, the Above 890 decision was of modest consequence. But it set in motion a chain of events that caused a complete revolution in the structure of American telecommunications.

Further Reading

Fred W. Henck and Bernard Strassburg, A Slippery Slope (1988)

Alan Stone, Wrong Number (1989)

Peter Temin with Louis Galambos, The Fall of the Bell System (1987)

Tim Wu, The Master Switch (2010)

A Bicycle for the Mind – Prologue

“When man created the bicycle, he created a tool that amplified an inherent ability. That’s why I like to compare the personal computer to a bicycle. …it’s a tool that can amplify a certain part of our inherent intelligence. There’s a special relationship that develops between one person and one computer that ultimately improves productivity on a personal level.”

— Steve Jobs[1]

In December of 1974, hundreds of thousands of copies of the magazine Popular Electronics rolled off the presses and out to newsstands and mailboxes across the United States. The front cover announced the arrival of the “Altair 8800,” and the editorial just inside explained that this new computer kit could be acquired at a price of less than $400, putting a real computer in reach of ordinary people for the first time. The editor declared that “the home computer age is here—finally.”[2] Promotional hyperbole, perhaps, but many of the magazine’s readers agreed that the Altair marked the arrival of a moment prophesied, anticipated, and long-awaited. They devoured the issue and sent in their orders by the thousands.

But the Altair was more than just a successful hobby product. That issue of Popular Electronics convinced some readers not only to buy a computer, but to form organizations, whether for-profit or non-profit, that would collectively grow and multiply over the coming years into a massive cultural and commercial phenomenon. Some of those readers achieved lasting fame and fortune: In Cambridge, Massachusetts, the Altair cover issue galvanized a pair of ambitious, computer-obsessed friends into starting a business to write programs for the new machine; they called their new venture Micro-Soft. In Palo Alto, California, it stimulated the formation of a new computer club that drew the attention of a local circuit-building whiz named Steve Wozniak. But the announcement of the Altair planted other seeds that are now mostly forgotten. In Peterborough, New Hampshire, it inspired the creation of a new magazine aimed at computer hobbyists, called BYTE. In Denver, it inspired a computer kit maker called the Digital Group to start building a rival machine that would be even better.

The arrival of the Altair catalyzed a reaction that precipitated no less than five distinct, but intertwined, social structures. Three were purely commercial: a hardware industry to make personal computers, a software industry to create applications for them, and retail outlets to sell both. The other two mixed commercial and altruistic motivations: a network of clubs and periodicals to share news and ideas within the hobby community, and a cultural movement to promote the higher meaning of the personal computer as a force for individual empowerment. All of these developments seemed, to a casual observer, to appear from nowhere, ex nihilo. But the reagents that fed into this sudden explosion had been forming for years, waiting only for the right trigger to bring them together.

The first reagent was a pre-existing electronics hobby culture. In the 1970s, hundreds of thousands of people, mostly men, enjoyed dabbling in circuit-building and kit-bashing with electronic components. In the United States, they were served by two flagship publications, the aforementioned Popular Electronics and Radio-Electronics.
They provided do-it-yourself instructions (an issue of Popular Electronics from 1970, for example, guided readers on how to build a pair of bookcase stereo speakers, a wah-wah pedal, and an aquarium heater), product reviews, classified ads where readers could offer products or services to the community, and more. Retail stores and mail-order services like Radio Shack and Lafayette Radio Electronics provided the hobbyists with the components and tools they needed for their projects, and a fuzzy penumbra of local clubs and newsletters extended out from these larger institutions. This culture provided the medium for the personal computer’s initial, explosive growth.

But why were the hobbyists so excited by the idea of a “home computer” in the first place? That energy came from the second reagent: a new way of communicating with computers which had created a generation of computer enthusiasts. Anyone involved in data processing in the 1950s and 60s would have experienced computers in the form of batch-processing centers. The user presented a stack of paper cards representing data and instructions to the computer operators, who put the user’s job in a queue for execution. Depending on how busy the system was, the user might have to wait hours to collect their results. But a new mode of interactive computing, created at defense research labs and elite campuses in the late 1950s and early 1960s, had become widely available in colleges, science and engineering firms, and even some high schools by the mid-1970s. When using a computer interactively, a user sitting at a terminal typed inputs on a keyboard and got an immediate response from the computer, either via a kind of automated typewriter called a teletype, or, less commonly, on a visual display. Users got access to this experience in one of two forms: minicomputers were smaller, less expensive machines than the traditional mainframes, low-cost enough that they could be dedicated to a small office or department of a larger organization, and sometimes monopolized by one person at a time. Time-sharing systems provided interactivity by splitting a computer’s processing time among multiple simultaneous users, each seated at their own terminal (sometimes connected to a remote computer via the telephone network). The computer could cycle its attention through each terminal quickly enough to give each user the illusion of having the whole computer at their command.[3] The experience of having the machine under your direct command was entirely addictive, at least for a certain type of user, and thousands of hobbyists who had used computers in this way at work or school salivated at the notion of having it on demand in their own home.

The microprocessor served as the third reagent in the brew from which the personal computer emerged. In the years just prior to the Altair’s debut, the declining price of integrated circuits and a growing demand for cheap computation had led Intel to create a single chip that could perform all the basic arithmetic and logic functions of a computer. Up to that point, if a business wanted to add electronics to their product—a calculator, a piece of automated industrial equipment, a rocket, or what have you—they would design a circuit, assembled from some mix of custom and off-the-shelf chips, that would provide the capabilities needed for that particular application.
But by the early 1970s, the cost of adding a transistor to a chip had gotten so low that it made sense in many cases to buy and program a general-purpose computing chip—a microprocessor—that did more than you really needed, but that could be mass-produced to serve the needs of many different customers at low cost. This had the accidental side-effect of bringing the price of a general-purpose computer down to a point affordable to those electronics hobbyists who had been craving the interactive computing experience.

The final reagent was the explosive growth of American middle-class wealth in the decades after the Second World War. The American economy in the 1970s, despite the setbacks of “stagflation,” was an unprecedented engine of wealth and consumption, and Americans acquired new gadgets and gizmos faster than anyone else in the world. Though Americans constituted less than six percent of the world’s population, in 1973 they purchased roughly one-third of all the cars produced in the world and one-half of all the color televisions (14.6 million and 9.3 million, respectively).[4] At a time when a Big Mac at McDonald’s would run you sixty-five cents and an average new car in the U.S. cost less than $5,000, the first run of Altairs listed at a price of $395, and a machine kitted out with accessories could easily cost $1,000 or more.[5] The United States was by far the most promising place on earth to find thousands of people willing and able to throw that kind of money at an expensive toy. For, despite a lot of rhetorical claims about their potential to boost productivity, home computers had almost no practical value in the 1970s. Hobbyists bought their computers in order to play with them: tinkering with the hardware itself to see how it could be expanded, writing software to see what they could make the hardware do, or playing in a more literal sense with computer games, shared for free within the hobby community or, later, purchased in dedicated hobby shops. It took years for the personal computer to evolve into a capable business machine, and years more to become an unquestioned part of everyday middle-class life.

I came along at a later stage of that evolution, part of a second generation of hobbyists who grew up already familiar with home computers. I still remember a clear, warm day when my father pulled up alongside me and my friends on a then-quiet stretch of road as we rode our bicycles back from the candy store a few miles from my house. He rolled down the passenger side window of his Chevy Nova compact and showed me the treasure trove he had just plundered from the electronics store, a plastic satchel containing three computer games sleeved in colorful cardboard: MicroProse’s F-19 Stealth Fighter and Sierra On-Line’s King’s Quest III and King’s Quest IV. Given the balmy weather and the release dates of those titles, it must have been the late summer or early fall of 1988. I was nine years old.

That roadside revelation changed my life. My father helped me install the games onto the Compaq Portable 286 computer that he no longer needed at work, and I became a PC gamer, which forced me to come to grips with the specialized technical knowledge that this entailed in those years: autoexec.bat files, extended and expanded memory, EGA and VGA graphics, IRQ settings, MIDI channels, and more.
I learned that we didn’t have to accept the hardware of the computer as a given: it could be opened up, fiddled with, and improved, with additional memory chips and new sound and video cards. To be seriously interested in computer games at that time was, ipso facto, to become a computer hobbyist.

The eager boy is grown, the tech-savvy father is bent with age, the quiet road courses with traffic, and MicroProse and Sierra still exist only as hollowed-out brand names, empty signifiers. Likewise, the personal computer as the Altair generation created it and as my generation found it has also changed out of all recognition. In the first decade of the twenty-first century, the personal computer mutated into three different kinds of device: into an always-on terminal to the Internet (and especially the World Wide Web), into a pocket communicator and attention-thief, and into a warehouse-scale computer.

But even before that, and indeed even before I discovered the joys and frustrations of Sierra adventure games, the nature of the personal computer was already in flux. The hobbyists of the 1970s cherished a dream of free computing in two senses. First, computing made easily accessible: they believed anyone should be able to get their hands on computing power, cheaply and easily. Second, computing unshackled from organizational control, with hardware and software alike under the total and individual control of the user, who would also be the owner. Steve Jobs famously compared the personal computer to a “bicycle for our minds,” and a bicycle carried these same senses of freedom.[6] It made personal transportation easy, inexpensive, and fun, and it was also a machine that could be modified to the owner’s needs and desires without anyone else’s say-so. For the computer hobbyists of the 1970s, who loved computers for their own sake as much as for what they could actually do, these two forms of freedom went hand in hand. The personal computer rewarded these dedicated apprentices with a feeling of almost mystical power – the ability to cast electronic spells.

But in the 1980s, their dreams clashed with the realities of the computer’s evolution into a machine for serious business and then into a consumer appliance. Big businesses wanted control, reliability, and predictability from their capital investments in fleets of computers, not user independence and liberation. Consumers had no patience for the demands of wizardry; they wanted ease of use and a guided experience. They felt no sense of loss at having computers whose software or hardware was harder to understand and modify, because they had never intended to do so. The assumption of the hobbyists that personal computer owners would have complete mastery over their machines could not survive these changes. Some embraced these changes as a natural side-effect of the expansion of the audience for the personal computer; others felt them as a betrayal of the personal computer’s entire purpose.

In this series, which I’m calling “A Bicycle for the Mind,” my intention is to follow the arc of these transformations: to trace where the personal computer came from and where it went. It is a story of how a hobby machine became a business machine and a consumer device, and how all three then disappeared into our pockets and our data centers. But it is also a story of how, through it all, the personal computer retained traces of its strange beginnings, as an expensive toy for nerds who believed that computer power could set you free.

Only Connect

The first telephones [Previous Part] were point-to-point devices, connecting a single pair of stations. As early as 1877, however, Alexander Graham Bell envisioned a grand, interconnected system. Bell wrote in a prospectus for potential investors that, just as municipal gas and water systems connected homes and offices throughout major cities to central distribution centers,1

…it is conceivable that cables of telephone wires could be laid underground, or suspended overhead, communicating by branch wires with private dwellings, country houses, shops, manufactories, etc., etc., uniting them through the main cable with a central office where the wires could be connected as desired, establishing direct communication between any two places in the city… Not only so, but I believe, in the future, wires will unite the head offices of the Telephone Company in different cities, and a man in one part of the country may communicate by word of mouth with another in a distant place.

But neither he, nor any of his contemporaries, possessed the technical means to actually achieve this vision. It would take decades and the exercise of a great deal of ingenuity and labor to turn the telephone into the most vast and intricate machine yet known to mankind, spanning continents and, eventually, oceans, to allow any telephone station in the world to connect to any other. This transformation was enabled, among other things, by the development of the exchange – a central office with equipment to route a call from the line of the caller to that of the callee. The automation of these exchanges brought about a vast increase in the complexity of relay circuits, with important implications for the computer.

The First Exchanges

In the early days of the telephone, no one knew exactly what it was for. There was precedent enough for the long-distance delivery of written messages to know that the telegraph would have useful commercial and military applications. But no prior art illuminated the purpose of the long-distance transmission of sound. Was it a business instrument like the telegraph? A medium for social intercourse? A means for entertainment and edification, such as the broadcasting of music or political speeches?

Gardiner Hubbard, one of Alexander Bell’s primary financial backers, did find one useful analogy. Enterprising telegraphers had built, over the previous decades, a number of local district telegraph companies. Wealthy individuals or small businesses rented a dedicated telegraph line to the company’s central dispatch office. By sending a telegram they could request a cab, have a messenger boy sent out to a client or friend, or summon the police. Hubbard believed the telephone could replace the telegraph in these businesses. It was much easier to use, and the ability to hold a conversation would provide speedier service and reduce misunderstandings. And so he encouraged the formation of exactly this sort of company, offering to rent Bell telephone equipment to district telephone companies, whether newly formed or converting from the telegraph.

A manager at one of these early district telephone companies might have noticed that he needed twenty phones to talk to twenty different customers. And in some cases one customer may have wanted to send a message to another: for example, a doctor submitting a prescription to a pharmacist. Why not simply let them talk directly to one another?2 Bell himself was another potential source for this idea.
He spent much of 1877 circumnavigating the lecture circuit to promote the telephone. George Coy attended such a lecture in New Haven, Connecticut, in which Bell expounded on his grand vision for a central telephone office. Suitably inspired, Coy formed the District Telephone Company of New Haven, acquired a license from the Bell Company, and found his first subscribers. By January 1878 he had interconnected twenty-one of them via the first public telephone switchboard, a makeshift contraption of second-hand wire and teapot lid handles.3

New Haven switchboard, as rendered by Popular Science Monthly in 1907. The operator would rotate the brass levers to 1) connect the operator to a caller, 2) ring the callee, and 3) connect the caller and callee.

Within the year, similar bespoke devices for interconnecting local telephone subscribers were spreading across the country. The world’s mental model for how the telephone would be used began to crystallize around these nexuses of local conversation – among merchants and suppliers, businessmen and clients, doctors and pharmacists. Even among friends and acquaintances, for those wealthy enough to afford such a luxury. Alternative visions of the telephone (e.g., as a broadcast medium) fell by the wayside. Within a few years, these telephone offices also converged on a template for switchboard equipment that would remain stable for decades to come: an array of sockets that could be inter-connected by plugged cables wielded by the operator. They converged, too, on the ideal sex of that operator. At first the telephone companies, many of which evolved from telegraph companies, hired from the existing labor pool of young, male telegraph clerks and messenger boys. But customers complained of their rudeness while managers despaired at their rowdy behavior. It was not long before they were entirely replaced by polite, respectable young women.

The course of the further evolution of these central switching offices would be determined by a contest for dominance over the landscape of telephony: between the incumbent Goliath of the Bell system and the emerging challengers known as the independents.

Bell and the Independents

The holder of Bell’s 1876 patent 174,465 on “Improvement in Telegraphy”, the American Bell Telephone Company, benefited immensely from the broad scope granted to that patent by the American courts. By ruling that it covered not just the specific instruments that Bell described, but the very principle of transmitting sound via an undulating current, the judicial system granted American Bell an effective monopoly on the telephone in the United States until 1893, the end of the patent’s seventeen-year term. The company’s leaders used this time wisely. Most notable among them were President William Forbes and Theodore Vail. Forbes was a Boston brahmin and a leading member of the coterie of investors who took control of the company when the capital of Bell’s early partners ran dry. Vail, great-nephew of Samuel Morse’s partner Alfred Vail, presided over Bell’s most important operating company, Metropolitan Telephone in New York City, and acted as general manager for American Bell. Vail had proved his managerial mettle as superintendent of the Railway Mail Service, which sorted mail in railway cars en route to its destination and was considered among the most impressive logistical feats of the age. Forbes and Vail focused especially on building up Bell’s presence in major cities, and on interconnecting those cities with long-distance lines.
Given that the most valuable asset of a telephone company was its existing subscriber base, they expected that the unparalleled access to existing customers in Bell’s network would give them an unassailable competitive advantage in acquiring new customers after the expiration of their patents. Bell generally expanded into new cities not as American Bell per se, but by licensing its patent portfolio to a local operating company, and buying a controlling interest in that company as part of the deal. To further the expansion of “long lines” to interconnect these urban offices, they founded yet another company, American Telephone and Telegraph (AT&T), in 1885. Vail added the presidency of this new company to his already substantial duties. Perhaps the most crucial investment in Bell’s portfolio, though, was the 1881 purchase of a controlling interest in the Chicago electrical equipment maker Western Electric: originally co-founded by Bell rival Elisha Gray, later the primary equipment supplier for Western Union, now to become the manufacturing arm of the Bell system.

Only in the early 1890s, with Bell’s legal monopoly on the verge of expiration, did independent telephone companies begin to emerge from the hiding places into which Bell had driven them with the truncheon known as U.S. Patent 174,465. For the next twenty years or so the independents remained a serious competitive threat to Bell, and both sides expanded rapidly in a battle for territory and subscribers. In order to fuel their expansion, Bell, in a flourish of legal and financial legerdemain, inverted their organizational structure to transform AT&T from held company to holding company. American Bell was organized under Massachusetts law, which still hewed to the older notion of a corporation as a limited public charter – therefore American Bell had to petition the legislature of that state in order to take on new capital. AT&T, organized under the liberalized corporate laws of New York, had no such requirement. AT&T expanded its networks and founded or bought new companies to consolidate and defend its hold over the major urban centers, while weaving its ever-growing network of long lines across the country. The independents meanwhile claimed new territory as fast as possible, especially in the smaller towns where AT&T did not yet have a presence.

During this period of intense competition, the number of telephones in service grew at an astonishing pace. By 1900 there were roughly 1.4 million telephones in the United States, as against 800,000 in Europe and 100,000 in the rest of the world combined. This amounted to one phone for every sixty Americans. Outside the U.S., only Sweden and Switzerland came anywhere close to this density. Of the 1.4 million American phone lines, 800,000 were Bell subscribers, with the rest belonging to the independents. Just three years later those numbers had grown to 3.3 million and 1.3 million, respectively, with tens of thousands of exchanges.4

The state of American telephone exchanges ca. 1910

The growing size of those exchanges placed ever greater strain on the central switching offices. In response, the telephone industry developed new switching technology along two main branches: one, which Bell favored, remained operator-assisted. The other, taken up by the independents, used electro-mechanical devices to eliminate the operator altogether. For convenience, we will call this a split between manual and automatic switching. But it’s good to keep in mind that this terminology is a bit misleading.
Much like “automated” grocery checkout stations, the electro-mechanical switches, especially in their earliest incarnations, put new burdens on the customer. They automated away labor costs from the point of view of the telephone company, but from a systemic point of view, they merely offloaded labor from a paid operator to the (unpaid) customer.

An Operator Is Standing By

During this era of competition, Chicago was a primary center of innovation in the Bell System. Angus Hibbard, general manager of Chicago Telephone, pushed the boundaries of telephony in order to expand its offerings to a wider customer base, in ways that made AT&T headquarters uncomfortable. But because of the loose affiliation between AT&T and its operating companies, headquarters had no direct control over his actions, and could only watch and squirm. Up to this time, most Bell customers were merchants, business leaders, doctors, or lawyers, who paid a flat yearly amount for unlimited telephone use. Few others could afford the $125 annual fee, equivalent to several thousand dollars today. To expand its service to a wider clientele, Chicago Telephone introduced three new offerings in the 1890s, of decreasing cost and service level.

The first tier was metered service on a multiple-party line, which cost the user a fee per call in addition to a much lower base fee (because the line was shared among several subscribers). The operator recorded each customer’s use on paper: not until after World War I was the first automatic meter installed in Chicago. The second tier was the neighborhood exchange, which offered unlimited calling within a few blocks, but fewer operators per customer (and therefore longer average connection times) than full-service exchanges. Finally there was the “nickel-in-the-slot,” a pay phone installed in a customer’s home or office. A five-cent fee sufficed to call anywhere within the city, with a five-minute limit enforced by the operator. This was the first telephone service truly accessible to the middle class, and by 1906, 40,000 of the 120,000 phones in Chicago were nickel-in-the-slots.

In order to serve his rapidly growing subscriber base, Hibbard worked closely with Western Electric, whose main plant also lay in Chicago, and especially with Charles Scribner, its chief engineer. Though he has since faded into obscurity, Scribner, author of several hundred patents over the course of his career, was in his time a renowned inventor and engineer. Among his first achievements was the development of the standard switchboard for the Bell system, including a connector for the operator’s cord known as the “jack-knife” for its resemblance to a pocket-knife, later shortened to “jack”. Scribner, Hibbard, and their teams re-engineered the central telephone office to enable operators to serve calls with ever increasing efficiency. The busy signal and “howler” (indicating that a phone was off the hook) relieved operators of the need to inform callers about these error conditions. Tiny electric lamps to indicate active calls replaced shutters which had to be manually reset by the operator. The operator’s greeting of “Hello,” which invited conversation, was replaced by “Number, please,” which allowed only one possible response. Due to these and other changes, the average connection time for local calls in Chicago decreased from 45 seconds in 1887 to 6.2 seconds in 1900.5

A typical operator-assisted exchange, ca. 1910
While Chicago Telephone, Western Electric, and other arms of the Bell octopus worked to make operator connections as fast and efficient as possible, however, others were trying to dispense with the operator altogether.

Strowger

Devices for connecting telephones without human intervention were patented, exhibited, and put into service as early as 1879, by inventors in the United States, France, the United Kingdom, Sweden, Italy, Russia, and Hungary. In the U.S. alone, twenty-seven patents were filed for automatic telephone switchboards by 1889.6 Yet, as so often in our story, the credit for automatic switching has accrued disproportionately to a single name: Almon Strowger. This is not entirely unjust; those who preceded him built one-offs or private curiosities, had the misfortune to reside in small and slow-growing telephone markets, or simply never successfully exploited their ideas. Strowger’s machine was the first deployed at industrial scale. Though to call it Strowger’s machine is also an elision, for he never did build the thing himself.

Strowger, a fifty-year-old Kansas City schoolmaster turned undertaker, was an unlikely innovator in an era of increasing technical specialization. The tale of how he conceived of his switch has been variously told, and seems to belong more to the realm of myth than fact. All of the stories, however, revolve around Strowger’s frustration that the operator or operators at his local telephone exchange were diverting clients to a rival undertaker. It’s unclear, and at this point unknowable, whether Strowger was really the victim of such a conspiracy; it seems more likely that he was not so good an undertaker as he liked to believe. In any case, from this (possibly fevered) imagination emerged the idea for a “girl-less” telephone. His 1889 patent described how it would work, with a rigid mechanical arm to replace the gracile one of the telephone operator. Rather than a corded jack, it held a metal contact point that would sweep through an arc in order to select among up to 100 different subscriber lines (either in a single plane, or, in the “two-motion” design, ten stacked planes of ten lines each). The caller controlled the arm with two telegraph keys, one for the tens digit, the other for the ones digit. To connect to subscriber 57, the caller pressed the tens key five times to move the arm to the correct group of ten subscribers, then pressed the ones key seven times to move to the correct subscriber in that group, then pressed a final key to make the connection. On an operator-assisted phone, by contrast, the caller would simply lift the transmitter, wait for the operator to acknowledge, say “57,” and wait to be connected.

Operation of the two-motion Strowger switching mechanism
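To make the two-motion selection procedure concrete, here is a minimal simulation sketch; it is my own illustration of the counting logic only, not a description of Strowger's actual mechanism, and the convention of ten pulses standing in for zero is an assumption borrowed from the later dial.

```python
def strowger_select(tens_pulses: int, ones_pulses: int) -> int:
    """Two-motion selection: pulses on the tens key raise the wiper to one of ten
    levels; pulses on the ones key rotate it to one of the ten contacts on that
    level, giving access to at most 10 x 10 = 100 subscriber lines."""
    if not (1 <= tens_pulses <= 10) or not (1 <= ones_pulses <= 10):
        raise ValueError("each key is pressed between 1 and 10 times")
    # Assumption: ten pulses stands in for the digit 0, as on later rotary dials.
    return (tens_pulses % 10) * 10 + (ones_pulses % 10)

# The example above: subscriber 57 = five pulses on the tens key, seven on the ones key.
print(strowger_select(5, 7))  # 57
```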
In addition to being laborious, the system was equipment-intensive: it required five wires from the subscriber station to the central office and two local batteries (one for controlling the switch, and one for talking). Bell by this time was already moving to a central battery system, so its newest subscriber stations had no battery and a single pair of wires.7 Strowger reportedly built the first model of his switch from pins inserted into a stack of stiff, starched collars. In order to realize a practical instrument he required financial and technical assistance from several important partners, notably businessman Joseph Harris and engineer Alexander Keith.

Harris provided Strowger with funding and oversaw the creation of the Strowger Automatic Telephone Exchange Company to manufacture switches. He wisely chose to site the company not in Kansas City, but in his own home of Chicago, which, due to the presence of Western Electric, was the bustling hub of telephone engineering. Among the first engineers recruited was Keith, who crossed over from the world of electrical power generation and became technical director of Strowger Automatic. Keith, with the help of other skilled engineers, transformed Strowger’s rough concept into a precision instrument ready for mass production and use, and oversaw all of the major technical improvements to that instrument over the next twenty years.

Two of those improvements were of special importance. First was the replacement of multiple keys with a single dial that automatically generated the pulses needed to move the switch into position, as well as the connection signal. This greatly simplified the subscriber equipment, and became the default mechanism for controlling automatic switches until Bell’s introduction of Touch-Tone in the 1960s. The automatic telephone became synonymous with the dial telephone. Second was the development of two-tiered switching systems that allowed first 1,000, then later 10,000, customers to connect to one another by dialing three or four numbers. The first-level switch selected among ten or 100 switches in the second level, and that second-level switch then selected among 100 subscriber lines. This made it possible for automatic switching to compete in larger towns and cities with thousands of subscribers.

A 1903 Autelco dial telephone unit

Strowger Automatic installed its first commercial switch in La Porte, Indiana, in 1892, serving the eighty subscribers of the independent Cushman Telephone Company. The former Bell affiliate in town had conveniently been swept out of the way after losing a patent dispute with AT&T, giving Cushman and Strowger a perfect opportunity to step in and pick up its former customers. Five years later, Keith oversaw the first two-level installation at Augusta, Georgia, serving 900 lines. By that time Strowger himself had retired to Florida, where he died several years later. His name was dropped from the company, now the Automatic Electric Company, more commonly known as Autelco. Autelco was the dominant supplier of electro-mechanical switching equipment in the United States, and in much of Europe. By 1910, automatic switches served 200,000 American subscribers from 131 exchanges, almost all built by Autelco. Every one was owned by an independent telephone company.8

Those 200,000 were still only a small fraction, however, of the millions of American telephone subscribers. Even among the independents most still followed the lead of Bell, and Bell had yet to seriously consider replacing its operators.

Common Control

The Bell system’s opponents tried to attribute its commitment to operator-assisted switching to some kind of nefarious motive, but it is hard to find any of their insinuations convincing. There were several good reasons, and another that seemed sensible but appears specious in hindsight, for Bell to resist switching to automatic systems. The first problem for Bell was to develop its own switching system. AT&T had no desire to pay Autelco to fit out its switching centers. Luckily, in 1903, it had acquired the patent to a device developed by the Lorimer brothers of Brantford, Ontario.
This was the very town where Alexander Bell’s parents had settled after leaving Scotland, and where the notion of a telephone had first congealed in his imagination, during a visit in the summer of 1874. Unlike the Strowger switch, the Lorimer device used revertive pulses to move the arm of its selector – that is, the electric pulses originated at the switch, with each pulse triggering a relay in the subscriber equipment, causing it to count down the number set by the subscriber on a lever until it hit zero. In 1906, Western Electric tasked two separate teams with developing switches based on the core Lorimer patent, and the systems they created – panel and rotary switching – formed a second generation of automatic switching devices. Both replaced the Lorimers’ lever with a standard dial, moving the revertive pulse receiver into the interior of the central office.

More important for our purposes than the mechanics of Western Electric’s switching equipment – carefully recorded by historians of the telephone in loving, not to say excruciating, detail – were the relay circuits used to control them – sadly neglected by those same historians, who briefly acknowledge their existence before passing on to the true objects of their devotion. This is doubly unfortunate, because those relay control circuits had two important consequences for our story. In the long term, they inspired the realization that combinations of switches could be built to represent arbitrary arithmetic and logical operations. That realization will be the subject of our next installment. More immediately, they solved the last major engineering obstacle to the adoption of automatic switching: the ability to scale it to serve the large urban areas where Bell had many thousands of subscribers.

The means by which Alexander Keith scaled the Strowger switch to 10,000 lines could not be stretched much further. Continuing to multiply switching levels simply required too much equipment to dedicate to each call. Bell engineers called the alternative scaling mechanism that they devised a sender. It stored the number dialed by the caller into a register, then translated that number into arbitrary (usually non-decimal) codes to control the switching machinery. This allowed for much more flexible switching arrangements – for example, calls between exchanges could be routed via a central office (which corresponded to no digit in the dialed number), rather than having to directly connect each exchange in a city to every other one.

It seems that Edward C. Molina, a research engineer at the AT&T Traffic Division, first conceived of the sender. Molina had made his mark with novel studies that applied the mathematics of probability to the study of telephone traffic. These studies led him, around 1905, to the realization that if call routing could be decoupled from the decimal number dialed by the user, much more efficient use could be made of the lines. Molina had demonstrated mathematically that spreading calls over larger groups of lines allowed a switch to support a larger call volume while maintaining the same probability of a busy signal. Strowger selectors, however, were limited to 100 lines, selected by two dialed digits. Selectors with 1,000 lines, based on three digits, had proved impractical. The movements of a selector controlled by a sender, however, did not have to correspond to decimal digits entered by the caller. Such a selector could choose from 200, or even 500, lines, as in fact the rotary and panel systems, respectively, did.
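As a way of picturing the register-and-translate idea behind the sender, here is a minimal sketch; the routing table, its code values, and the prefix scheme are all invented for illustration, standing in loosely for the non-decimal signals a real sender passed to the switching machinery.

```python
# Hypothetical illustration of a sender: it registers the dialed digits, then looks
# up routing instructions that need not correspond to those digits at all.
ROUTING_TABLE = {
    # dialed exchange prefix -> sequence of selector settings (invented values)
    "24": ("trunk-group-7", 13, 402),   # e.g. route via a central tandem office
    "36": ("local-frame-2", 5, 88),     # e.g. route within the same building
}

def sender(dialed: str) -> tuple:
    """Register a dialed number, translate its prefix into switch-control codes,
    and pass the subscriber line number along the chosen route."""
    prefix, line = dialed[:2], dialed[2:]   # register the digits
    route = ROUTING_TABLE[prefix]           # translate: prefix -> routing codes
    return route + (line,)

print(sender("245553"))  # ('trunk-group-7', 13, 402, '5553')
```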
Molina proposed a register and translation device built from a mix of relays and ratcheted wheels, but by the time AT&T was actually ready to deploy panel and rotary systems, other engineers had concocted faster senders, made solely from relay circuits. Molina’s translation device, from U.S. Patent 1,083,456 (filed in 1906, granted 1914). It was a short step from the concept of the sender to the concept of common control. There was no need, the Western Electric teams realized, to have a sender for each subscriber line, or even for each active call. Instead, a small number of these control devices could be shared among all lines. When a call came in, a sender would engage briefly to record the dialed digits, talk to the switching equipment to route the call, then disengage to make itself ready for another call. With the panel switch9, sender, and common control, AT&T had in hand a flexible and scalable system that could address even the needs of the massive New York and Chicago telephone networks. Relays in a panel switch sender Despite its engineers having swept away all possible technical objections to operator-less telephony, however, executives at AT&T were still not sold. They were not convinced that users could reliably dial the six- or seven-digit numbers that would be required to enable automatic switching across large metropolitan areas. At the time, callers reached subscribers at other local exchanges by giving the operator two pieces of information: the exchange they wished to reach, and the (typically four-digit) number they wished to reach there. For example, a customer in Pasadena might reach a friend in Burbank by asking for “Burbank, 5553”. Bell leadership believed that replacing “Burbank” with an arbitrary two- or three-digit code would lead to a high frequency of mis-dialed numbers, frustrating users and degrading the company’s quality of service. In 1917, William Blauvelt, an employee of AT&T (role unknown), proposed a means to allay this concern. When manufacturing the subscriber station, Western Electric could print two or three letters next to each number of the dial. The telephone directory, meanwhile, would show the first few letters of each exchange, corresponding to its dial code, in bold, for example Burbank. Instead of having to remember an arbitrary numerical code for the destination exchange, the caller would then simply dial the letters directly: BUR-5553. A 1939 Bell telephone dial for Lakewood 2697, i.e. 52-2697. Even with no remaining objections to its adoption of automatic switching, however, AT&T still had no compelling technical or operational reason to change its very successful method of connecting calls. Its hand was forced only by the advent of the Great War. Massively increased demand for industrial production drove manufacturing wages ever upwards: in the U.S. they more than doubled between 1914 and 191910, pulling wages in other sectors along in their wake. Suddenly the key point of comparison between operator-assisted and automatic switching became not technical or operational, but financial. Given the growing cost of employing operators, AT&T decided by 1920 that it could not afford not to mechanize, and gave the order to begin installing automatic offices. The first such office, using a panel switch in Omaha, Nebraska, came online in 1921, followed by a New York City exchange in October 1922. By 1928 twenty percent of AT&T offices were automatic; by 1934, fifty percent; by 1960, ninety-seven percent.
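Blauvelt’s lettering scheme amounts to a one-for-one substitution: each letter printed on the dial stands for the digit it shares a hole with. A small sketch, using the historical Bell layout (no Q or Z, and no letters on 1 or 0), shows how an exchange name collapses into dialed digits; the two example numbers are the ones mentioned above.

```python
# Each dialed letter simply stands in for the digit it shares a position
# with on the dial. This uses the historical Bell layout: no Q or Z, and
# no letters assigned to 1 or 0.

DIAL_LETTERS = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PRS", "8": "TUV", "9": "WXY",
}
LETTER_TO_DIGIT = {letter: digit
                   for digit, letters in DIAL_LETTERS.items()
                   for letter in letters}

def dialed_digits(exchange_name, subscriber_number, letters=3):
    """Translate an exchange name plus number into the digits actually dialed."""
    prefix = "".join(LETTER_TO_DIGIT[c] for c in exchange_name.upper()[:letters])
    return f"{prefix}-{subscriber_number}"

print(dialed_digits("Burbank", "5553"))       # BUR-5553 -> 287-5553
print(dialed_digits("Lakewood", "2697", 2))   # LA-2697  -> 52-2697
```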
Bell decommissioned its last operator-assisted exchange, in Maine, in 1978. Operators still had a role to play for the foreseeable future, however, in connecting long-distance calls, where they did not begin to be replaced by machines until after World War II. It might be natural to assume, given the stories about technology and business that tend to have currency in our culture, that a lumbering AT&T had barely escaped destruction from nimble, innovative upstarts – the independents – by finally adopting the obviously superior technology that they had pioneered. But in fact AT&T had put paid to the threat of the independents a decade before it began seriously automating its switching centers. Bell Triumphant Two events in the decade before 1910 convinced most in the business community that no challenger would ever topple the Bell System. First was the failure of the United States Independent Telephone Company of Rochester, New York. United States Independent sought, for the first time, to build a competing long-distance network to AT&T’s. But they suffered financial ruin after failing to gain entry to the crucial New York City market. Second was the collapse of the independent Illinois Telephone and Telegraph’s effort to penetrate the Chicago market. Not only could no other company compete with AT&T’s long lines, it seemed that none could challenge Bell operating companies in the major urban markets either. Moreover, the 1907 rechartering by the city of Chicago of Bell’s operating company there (Hibbard’s Chicago Telephone) signaled that city governments would not try to foster competition in the telephone business. The new economic concept of natural monopoly had taken hold – the belief that for certain kinds of public services, convergence on a single provider was both beneficial and the natural result of market forces. The correct response to monopoly, under this theory, was public regulation, not forced competition.11 The 1913 “Kingsbury commitment” added the imprimatur of the federal government to Bell’s position. At first it seemed that the incoming progressive Wilson administration, deeply skeptical of massive corporate combinations, might break up the Bell system or otherwise curtail its dominance. So it certainly appeared when Wilson’s Attorney General, James McReynolds, immediately re-opened an anti-Bell lawsuit under the Sherman Anti-Trust Act that had been tabled by his predecessor. But AT&T and the government soon came to a brokered settlement, signed by company Vice President Nathan Kingsbury. AT&T agreed to divest Western Union (in which it had purchased a controlling interest several years earlier), to stop buying independent telephone companies, and to interconnect independents into its long-distance network at reasonable rates. At face value, it may seem that AT&T had suffered a significant check to its ambitions. But the ultimate effect of the Kingsbury Commitment was to confirm it as the national power in telephony. Cities and states had already signaled that they would not try to force an end to monopoly in telephony, now the federal government had done the same. Moreover, the fact that independents could now get access to AT&T’s long distance network ensured that it would be the only such network of note in the United States until the advent of microwave transmission half a century later. The independents became part of a vast machine with Bell at its center. 
In fact, the ban on acquisition of independent companies was lifted in 1921, because so many of those companies, eager to sell, petitioned the government to be allowed to do so. That being said, many independents did survive, and even thrive, notably General Telephone & Electronics (GTE), which acquired Autelco as its counterpart to Western Electric and held its own collection of local operating companies. But they all felt the gravitational pull of the Bell star around which they circled. Despite their now-comfortable situation, Bell’s leaders had no intention of staying idle. In order to promote the innovations in telephony that would ensure its continued dominance, AT&T President Walter Gifford formed the Bell Telephone Laboratories in 1925, with some 4,000 employees. Bell also soon developed a third-generation automatic switching system, the crossbar switch, controlled by the most complex relay circuits yet known. These two developments would lead two men, George Stibitz and Claude Shannon, to ponder the curious analogies between circuits of switches and systems of mathematical logic and computation. [Next Part] Sources Christopher Beauchamp, Invented by Law: Alexander Graham Bell and the Patent That Changed America (2015) John Brooks, Telephone: The First Hundred Years (1975) Robert J. Chapuis, One Hundred Years of Telephone Switching, vol. 1 (1982) M.D. Fagen, ed., A History of Engineering and Science in the Bell System: The Early Years (1875-1925) (1975) Anton A. Huurdeman, The Worldwide History of Telecommunications (2003) Richard R. John, Network Nation (2010) Oscar Myers, “Common Control Telephone Switching Systems,” The Bell System Technical Journal (November 1952)

The Era of Fragmentation, Part 2: Sowing the Wasteland

On May 9, 1961, Newton Minow, newly-appointed chairman of the FCC, gave the first speech of his tenure. He spoke before the National Association of Broadcasters, a trade industry group founded in the 1920s to forward the interests of commercial radio, an organization dominated in Minow’s time by the big three of ABC, CBS, and NBC. Minow knew broadcasters were apprehensive about what changes the new administration might bring, after the activist rhetoric of JFK’s “New Frontier” presidential campaign; and indeed, after a few words of praise, he proceeded to indict the medium which his audience had created. “When television is good,” Minow said, “nothing — not the theater, not the magazines or newspapers — nothing is better. But when television is bad, nothing is worse. I invite each of you to sit down in front of your television set when your station goes on the air and stay there, for a day, without a book, without a magazine, without a newspaper, without a profit and loss sheet or a rating book to distract you. Keep your eyes glued to that set until the station signs off. I can assure you that what you will observe is a vast wasteland.” Instead of a cavalcade of “mayhem, violence, sadism, murder,” and endless commercials “screaming, cajoling, and offending” the audience, Minow advocated programming that would “enlarge the horizons of the viewer, provide him with wholesome entertainment, afford helpful stimulation, and remind him of the responsibilities which the citizen has toward his society.” Minow did follow up on his rhetoric, but not by cracking down on existing programming. He instead threw his support behind efforts to open new avenues that would allow less commercially-driven voices to reach television audiences. The 1962 All-Channel Receiver Act, for example, which required TV manufacturers to include ultra-high frequency (UHF) receivers in their sets, opened dozens of additional channels for television broadcasting. But another medium for distributing television, with a potential capacity well beyond that of even UHF, was already unspooling, mile-by-mile, across the American landscape. Community Television Cable television began as community antenna television (CATV), a means of bringing broadcast television to those not reached by the major network stations, often towns nestled in mountainous terrain. In the late 1940s, entrepreneurs began setting up their own antennas on high ground to capture broadcast signals, then amplified them and re-transmitted them through a shielded cable to paying customers below. Cable operators soon found other ways to bring more television programming to audiences, such as importing stations from a city with broad programming options to one with few channels (from Los Angeles to San Diego, for example). Cable providers began to build out their own microwave networks to allow this kind of entertainment arbitrage, transcending their former confinement to single local markets. By 1971, about 19 million American viewers were served by cable television (out of a total population of a bit more than 200 million).1 But cable had the potential to do far more with its tethered customers than simply extend the reach of the “wasteland” emanating from the over-the-air broadcasters. Since the signals inside the cable were isolated from the outside world, cable was not bound by FCC spectrum allocations – it had, for all practical purposes at the time, potentially unlimited bandwidth.
In the 1960s, some cable systems had begun to exploit this capacity to offer programming centered around local events, often created by local high school or university students. The CA of community antenna, which brought television to local consumers, expanded into community access, which brought local producers to television. In the forefront of proclaiming the potential of cable was a superficially unlikely proponent of high technology, Ralph Lee Smith. Smith was a bohemian writer and folk musician in Manhattan’s Greenwich Village, whose website today focuses mainly on his expertise with the Appalachian dulcimer. Yet his first book, The Health Hucksters, published in 1960, revealed him to be a classic progressive, like Minow, in favor of firm government action to support the public interest. In 1970 he published an article entitled “The Wired Nation”, which he expanded into a book two years later. In his book, Smith laid out a sketch of a future “electronic highway,” based on a report by the Electronics Industry Association (EIA), whose members included the major computer manufacturers. This highway system, a national network of coaxial cable, the EIA and Smith imagined, would wire up every television in all the homes and offices of the nation. At the hubs of this vast network of activity would be computers, and users at their home terminals would be able to send signals upstream to those computers to control what was delivered to their televisions – everything from personal messages, to shopping catalogs, to books from a remote library. Wired Cities In the early 1970s, MITRE Corporation headed a research project that attempted to bring this vision to life. MITRE, a non-profit spun out of MIT’s Lincoln Lab to help manage the development of the SAGE air defense system, had built up staffing in the Washington, D.C. area over the course of the previous decade, settling into a campus in McLean, Virginia. They came to the “wired nation” by way of another visionary concept of the 1960s, computer-aided instruction (CAI). CAI seemed feasible due to the emergence of time-sharing systems, which would allow each student to work at their own terminal, with dozens or hundreds of such devices connected to a single central computer center. Many academics in the 1960s believed (or at least hoped) that CAI would transform the educational landscape, making possible customized, per-student curricula, and bringing top-quality instruction to inner-city and rural kids. The problem of the inner cities had become especially pressing at a time when riots raged in American cities nearly every summer, whether in Watts, Detroit, or Newark. Among the many researchers experimenting with CAI was Charles Victor Bunderson, a psychology professor who had built a CAI lab at the University of Texas in Austin. Bunderson was working on a computer-based curriculum for remedial math and English for junior college students under an NSF grant, but the project proved more than he could handle. So he enlisted the help of David Merrill, an education professor at Brigham Young University in Utah. Merrill had done his PhD at the University of Illinois, under Larry Stolurow, creator of SOCRATES, a computer-based teaching machine. Nearby on the Urbana campus he had encountered PLATO, another early CAI project. Together Bunderson and Merrill pitched both NSF and MITRE on a larger grant that would fund MITRE, Texas, and BYU together.
MITRE would lead the project and apply its system-building expertise to the underlying hardware and software of the time-sharing apparatus. Bunderson’s Austin lab would provide the overarching educational strategy (based on learner control – direction of the pace of learning by the student) and the course software. BYU would implement the instructional content. NSF bought the concept, and provided the princely sum of five million dollars to get TICCET – Time-Shared Interactive Computer-Controlled Educational Television – off the ground. MITRE’s design consisted of two Data General Nova minicomputers, supporting 128 Sony color televisions for the terminal output. The full student carrel also contained headphones and a keyboard, but a touch-tone telephone could also be supported as the input device. Using inexpensive minicomputers for processing and terminal equipment that was already widely available in homes would bring the overall cost of the system down and make it more feasible to deploy in schools. Under MITRE’s influence, CAI was infused with the spirit of the “wired nation.” TICCET became TICCIT, with “educational” becoming “information”. MITRE imagined a system that could deliver not just education, but social services and information of all kinds to the under-served urban core. They hired Ralph Lee Smith as a consultant, and made plans for a microwave link to connect TICCIT to the local cable system in the nearby planned community of Reston, Virginia. The demonstration system went live on the Reston Transmission Company’s cable links in July 1971. MITRE had sweeping plans in place to extend this concept to a Washington Cable System split into nine sectors across the District of Columbia, to be launched in time for the 1976 U.S. bicentennial. But in the event, the Reston system failed to live up to expectations. For all the rhetoric of bringing on-demand education and social services to the masses, the Reston TICCIT system offered nothing more than the ability to call up pre-set screens of information on the television (e.g., a bus schedule, or local sports scores) by dialing into the MITRE Data General computers. It was a glorified time-and-temperature line. By 1973, the Reston system went out of operation, and the Washington D.C. cable system was never to be. One major obstacle to expansion was the cost of the local memory needed to continually refresh the screen image with the data dispatched from the central computer. MITRE transferred the TICCIT technology to Hazeltine Corporation for commercial development in 1976, where it lived on for another decade as instructional software. Videotex The first major American experiment in two-way television is hard to characterize as anything but a failure. But in the same time period, the idea that the television was the ideal delivery mechanism for the new computerized services of the information age sank firmer roots in Europe. This second wave of two-way television, driven mainly by telecommunications giants, generally abandoned the upstart technology of cable in favor of the well-established telephone line as the means of communication between television and computer. Though cable had a massive advantage in bandwidth, telephone held the trump card of incumbency – relatively few people had cable access, especially outside the U.S. It began with Sam Fedida, an Egyptian-born engineer, who joined the British Post Office (BPO) in 1970.
At the time, the Post Office was also the telecommunications monopoly, and Fedida was assigned to design a “viewphone” system, comparable to the Picturephone service that AT&T had just launched in Pittsburgh and Chicago. However, the picturephone concept faced an enormous technical hurdle – a continuous video stream gulped down huge quantities of bandwidth, which was prohibitively expensive in the days before fiber optic cables. To be able to transmit just 250 lines of vertical resolution (half of a standard television of the time), AT&T had to levy a $150 a month base rate for thirty minutes of service, plus twenty-five cents per additional minute. Fedida, therefore, came up with an alternative idea that would be both more flexible and less costly – not viewphone, but Viewdata.2 The Post Office could connect users to computers at its switching centers, offering information services by sending screens of data down to home television sets and receiving input back through the customer’s phone line using standard touch-tone signals. Static screens of text and simple images, which might be refreshed every few seconds at most, would use far less bandwidth than video. And the system would make use of existing hardware that most people already had in their homes, rather than the custom screens required for Picturephone. Fedida and his successor as lead of the project, Alex Reid, convinced the BPO that Viewdata would bring new traffic, and thus more revenue, to the existing, fixed-cost telecommunications infrastructure, especially in off-peak evening hours. Thus began a trajectory away from philanthropic ideas about how interactive television could be made to benefit society, towards commercial calculations about how online services could bring in new revenue. After several years of development, the BPO opened Viewdata to users in selected cities in 1979, under the brand name of “Prestel”, a portmanteau of press (as in publishing) and telephone. GEC 4000 minicomputers in local telephone offices responded to users’ requests over the telephone line to fetch any of over 100,000 different ‘pages’ – screens of information stored in a database. The data came from government organizations, newspapers, magazines, and other businesses, and covered subjects from news and weather to accounting and yoga. Each local database received regular updates from a central computer in London. A simple Prestel screen The terminals rendered each page in 24 lines of 40 characters each in full color, with graphics composed from characters containing simple geometric shapes. The screens were organized in a tree-like structure for user navigation, but users could also call up a particular page directly by entering its unique numeric code. The user could send information as well as receive it, for example to submit a request for a seat reservation on an airplane. In the mid-1980s, Prestel Mailbox launched to the public, allowing users to directly message one another. Prestel’s blocky graphics, composed from rectangles partially or completely filled with color. Contrary to the engineers’ initial intent, at launch Prestel required a custom television set with a built-in modem and other electronic hardware, likely due to resistance from the television manufacturers, who demanded a cut of the action from BPO’s incursion into their territory.
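The page organization described above – a navigable tree that could also be short-circuited by keying a page number directly – is simple enough to sketch. The page numbers, menu choices, and contents below are invented for illustration; the real Prestel database and its numbering scheme were considerably more elaborate.

```python
# A toy model of the two ways a Prestel user could reach a page: stepping
# through single-digit menu choices, or keying a page's numeric code
# directly. All page numbers and contents here are invented.

PAGES = {
    "0":   {"text": "Main index",           "choices": {"1": "100", "2": "200"}},
    "100": {"text": "News headlines",       "choices": {"1": "101"}},
    "101": {"text": "Top story ...",        "choices": {}},
    "200": {"text": "Weather by region",    "choices": {"1": "201"}},
    "201": {"text": "London: showers, 12C", "choices": {}},
}

def navigate(current_page, keypress):
    """Follow a one-digit menu choice; stay put if the choice is invalid."""
    return PAGES[current_page]["choices"].get(keypress, current_page)

def jump(page_number):
    """Go straight to a page by its numeric code, if it exists."""
    return page_number if page_number in PAGES else None

page = "0"
page = navigate(page, "2")             # choose option 2 from the index
page = navigate(page, "1")             # then option 1 from the weather menu
print(PAGES[page]["text"])             # -> London: showers, 12C
print(PAGES[jump("101")]["text"])      # direct entry of page 101
```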
As one might guess, requiring the purchase of a new, very expensive television (£650 or more) to get started on the service was a huge impediment to gaining subscribers, and the strategy was soon abandoned in favor of cheaper set-top boxes. Nonetheless, the cost of using the system kept most potential users away – £5 a quarter for a subscription, plus the cost of the telephone call, plus a per-minute system fee during daytime hours, plus a further per-minute charge for some premium services. The system only had 60,000 subscribers by the mid-1980s, and was most popular in the travel and financial services industries, rather than for recreational use. It survived into the mid-1990s, but never broke the 100,000 subscriber mark. Despite its struggles, however, Prestel had many competitors and imitators, with others launching similar services based on screens of text and simple graphics delivered to home televisions over telephone lines. The category was known as “videotex” and systems of that type included Canada’s Telidon, West Germany’s Bildschirmtext, and Australia’s Prestel-based Viatel. Like the BPO’s system, almost all were launched by the state-controlled telecommunications authority. Despite lacking such an organization, videotex found its way to the United States, too, where it eventually spawned a major new competitor in the information services market. Videotex Reimagined The story of Prodigy begins in Canada. The Communications Research Centre (CRC), a government lab in Ottawa, had been working on encoding simple graphics into a stream of text throughout the 1970s, independently of BPO’s Viewdata work. They developed a system which allowed designers to add arbitrary colored polygons to their screens, using special character codes to specify position, direction, color, and so forth. Other special characters switched the system between text and graphical modes. This allowed for richer and more intuitive graphics than the Prestel system, which built images from small, simple shapes which could not break the grid of characters. AT&T, impressed with the flexibility of the Canadian system and freed by the FCC to compete in limited ways in the digital services market by the 1980 “Computer II” ruling, decided to try to bring videotex two-way television services to the American market. The joint CRC and AT&T standard was called NAPLPS (North American Presentation Level Protocol Syntax). AT&T developed a terminal called Sceptre, with a modem and hardware for decoding NAPLPS, and launched videotex experiments in several different regions of the country in the early 1980s, each with different partners: Viewtron with the Knight-Ridder newspaper conglomerate in Florida, Gateway with Times-Mirror (another newspaper conglomerate) in California, and VentureOne with CBS in New Jersey. The Viewtron and Gateway projects both lost money and closed down in 1986. But VentureOne, although CBS and AT&T closed it down in 1983 after less than a year of operation, laid the groundwork for a longer-lasting achievement. A Viewtron weather map, showing off the power of NAPLPS graphics. Due to a new court ruling which came down as part of the 1984 breakup of Ma Bell, AT&T (now a long-distance-only enterprise divested of its local operating companies) was once again forbidden from the computer services market. CBS therefore relaunched its videotex efforts with two new partners, IBM and Sears, in 1984. They called the company Trintex, invoking the trinity of companies involved in this new videotex project.
IBM, still the dominant computer manufacturer in the world by some margin, brought obvious value to the partnership. Sears would bring its retailing know-how online, and CBS its media expertise and content.3 If Viewdata began videotex’s trajectory towards commercialization, Trintex completed it. Trintex hired David Waks, a systems engineer who had been working with computers since his undergrad days at Cornell in the late 1950s, to architect their videotex system. Waks argued that the system shouldn’t really be videotex at all, or rather that Trintex should abandon the coupling between videotex and home television. The NAPLPS protocol was a fine enough way to efficiently deliver high-resolution graphics over low-bandwidth connections – the Sceptre terminal’s modem supported only 1,200 bits per second, which was pretty good for the time. Waks even improved upon it, coming up with a system for partial refreshes, so that changes could be made to one area without re-delivering the whole screen’s content. However, the assumption that a set-top box connected to a television was the lowest-friction way to deliver online services no longer made sense, given that millions of Americans now had machines perfectly capable of decoding and displaying NAPLPS content – home computers. Trintex, Waks argued, should follow the same model as GEnie and CompuServe, using microcomputer software for the client terminal, rather than a dedicated hardware box. It’s likely IBM, maker of the IBM PC, threw its support behind the idea as well. By the time the system launched in 1987, one of the three partners, CBS, had dropped out due to financial difficulties. Trintex did not have a pleasant ring to it anyway, so the company re-branded itself as “Prodigy.” Local calls to a regional computer connected users to a data network based on IBM’s system network architecture (SNA), which routed them to Prodigy’s data center near IBM’s headquarters in White Plains, New York. A clever caching system retained frequently used data in the regional computers so that it did not have to be fetched from New York, an adumbration of today’s content delivery networks (CDNs).4 Distinguished by its ease-of-use, vibrant graphics, and a monthly pricing structure with no hourly usage fees, Prodigy quickly gained ground on its main competitors, CompuServe and GEnie (and soon America Online). The flat-rate business model, however, depended heavily on fees collected from online shopping and advertisers. Prodigy’s leaders seem somehow to have overlooked that interpersonal communication, which consumed hours of computer and network time but produced no revenue, was consistently the most popular use for online services. Just as it had conformed to the technological structure of its predecessors, Prodigy was forced to follow their billing model as well, switching to hourly billing in the early 90s. A weather map on Prodigy Prodigy represented simultaneously the acme and the demise of videotex technology in the United States, having derived from videotex but abandoned the idea of the television or some dedicated consumer-friendly terminal as the delivery channel. Instead it used the same microcomputer-centric approach as its successful contemporaries. When TICCIT and Viewdata were conceived, a computer was an expensive piece of machinery that individuals could scarcely aspire to own.
Nearly everyone working on digital services at that time assumed they would have to be delivered from central, time-shared computers to inexpensive, “dumb” terminals, with the home television as the most obvious display device. But by the mid-1980s, the market penetration of microcomputers in the U.S. was such that a new world was coming into view. It became possible to imagine – indeed hard to deny – that nearly everyone would soon own a computer, and it would serve as their on-ramp to the “information superhighway.” Even as videotex manqué, Prodigy was the last of its kind in the U.S. By the time it launched, all the other major videotex experiments in the country had shut down. There was another videotex system, however, which I have not yet mentioned, the most widely used of them all – France’s Minitel. Its story, and the distinctive philosophy of its launch and operation, require their own chapter in this story to elucidate. As if transforming the “boob tube” into a useful tool for self-improvement and communication were not an ambitious enough goal, Minitel sought to bend the trajectory of an entire nation, from relative decline upwards to technological supremacy.  [Previous] [Next] Further Reading Brian Dear, The Friendly Orange Glow (2017) Jennifer Light, From Warfare to Welfare (2005) MITRE Corporation, MITRE: The First Twenty Years (1979)

ARPANET, Part 3: The Subnet

With ARPANET, Robert Taylor and Larry Roberts intended to connect many different research institutions, each hosting its own computer, for whose hardware and software it was wholly responsible. The hardware and software of the network itself, however, lay in a nebulous middle realm, belonging to no particular site. Over the course of the years 1967-1968, Roberts, head of the networking project for ARPA’s Information Processing Techniques Office (IPTO), had to determine who should build and operate the network, and where the boundary of responsibility should lie between the network and the host institutions. The Skeptics The problem of how to structure the network was at least as much political as technical. The principal investigators at the ARPA research sites did not, as a body, relish the idea of ARPANET. Some evinced a perfect disinterest in ever joining the network; few were enthusiastic. Each site would have to put in a large amount of effort in order to let others share its very expensive, very rare computer. Such sharing had manifest disadvantages (loss of a precious resource), while its potential advantages remained uncertain and obscure. The same skepticism about resource sharing had torpedoed the UCLA networking project several years earlier. However, in this case, ARPA had substantially more leverage, since it had directly paid for all those precious computing resources, and continued to hold the purse strings of the associated research programs. Though no direct threats were ever made, no “or else” issued, the situation was clear enough – one way or another ARPA would build its network, to connect what were, in practice, still its machines. Matters came to a head at a meeting of the principal investigators in Ann Arbor, Michigan, in the spring of 1967. Roberts laid out his plan for a network to connect the various host computers at each site. Each of the investigators, he said, would fit their local host with custom networking software, which it would use to dial up other hosts over the telephone network (this was before Roberts had learned about packet-switching). Dissent and angst ensued. Among the least receptive were the major sites that already had large IPTO-funded projects, MIT chief among them. Flush with funding for the Project MAC time-sharing system and artificial intelligence lab, MIT’s researchers saw little advantage to sharing their hard-earned resources with rinky-dink bit players out west. Regardless of their stature, moreover, every site had certain other reservations in common. They each also had their own unique hardware and software, and it was difficult to see how they could even establish a simple connection with one another, much less engage in real collaboration. Just writing and running the networking software for their local machine would also eat up a significant amount of time and computer power. It was ironic yet surprisingly fitting that the solution adopted by Roberts to these social and technical problems came from Wes Clark, a man who regarded both time-sharing and networking with distaste. Clark, the quixotic champion of personal computers for each individual, had no interest in sharing computer resources with anyone, and kept his own campus, Washington University in St. Louis, well away from ARPANET for years to come. So it is perhaps not surprising that he came up with a network design that would not add any significant new drain on each site’s computing resources, nor require those sites to spend a lot of effort on custom software.
Clark proposed setting up a mini-computer at each site which would handle all the actual networking functions. Each host would have to understand only how to connect to its local helpmate (later dubbed an Interface Message Processor, or IMP), which would then route the message onward so that it reached the corresponding IMP at the destination. In effect, he proposed that ARPA give an additional free computer to each site, which would absorb most of the resource costs of the network. At a time when computers were still scarce and very dear, the proposal was an audacious one. Yet with the recent advent of mini-computers that cost just tens of thousands of dollars rather than hundreds, it fell just this side of feasible.1 While alleviating some of the concerns of the principal investigators about a network tax on their computer power, the IMP approach also happened to solve another political problem for ARPA. Unlike any other ARPA project to date, the network was not confined to a single research institution where it could be overseen by a single investigator. Nor was ARPA itself equipped to directly build and manage a large-scale technical project. It would have to hire a third party to do the job. The presence of the IMPs would provide a clear delineation of responsibility between the externally-managed network and the locally-managed host computer. The contractor would control the IMPs and everything between them, while the host sites would each remain fully (and solely) responsible for the hardware and software on their own computer. The IMP Next, Roberts had to choose that contractor. The old-fashioned Licklider approach of soliciting a proposal directly from a favored researcher wouldn’t do in this case. The project would have to be put up for public bid like any other government contract. It took until July of 1968 for Roberts to prepare the final details of the request for bids. About a half year had elapsed since the final major technical piece of the puzzle fell into place, with the revelation of packet-switching at the Gatlinburg conference. Two of the largest computer manufacturers, Control Data Corporation (CDC) and International Business Machines (IBM), immediately bowed out, since they had no suitable low-cost minicomputer to serve as the IMP. Honeywell DDP-516 Among the major remaining contenders, most chose Honeywell’s new DDP-516 computer, though some plumped instead for the Digital PDP-8. The Honeywell was especially attractive because it featured an input/output interface explicitly designed to interact with real-time systems, for applications like controlling industrial machinery. Communications, of course, required similar real-time precision – if an incoming message were missed because the computer was busy doing other work, there was no second chance to capture it. By the end of the year, after strongly considering Raytheon, Roberts offered the job to the growing Cambridge firm of Bolt, Beranek and Newman. The family tree of interactive computing was, at this date, still extraordinarily ingrown, and in choosing BBN Roberts might reasonably have been accused of a kind of nepotism. J.C.R. Licklider had brought interactive computing to BBN before leaving to serve as the first director of IPTO, seed his intergalactic network, and mentor men like Roberts. Without Lick’s influence, ARPA and BBN would have been neither interested in nor capable of handling the ARPANET project.
Moreover, the core of the team assembled by BBN to build the IMP came directly or indirectly from Lincoln Labs: Frank Heart (the team’s leader), Dave Walden, Will Crowther, and Severo Ornstein. Lincoln, of course, is where Roberts himself did his graduate work, and where a chance collision with Wes Clark had first sparked Lick’s excitement about interactive computing. But cozy as the arrangement may have seemed, in truth the BBN team was as finely tuned for real-time performance as the Honeywell 516. At Lincoln, they worked on computers that interfaced with radar systems, another application where data would not wait for the computer to be ready. Heart, for example, had worked on the Whirlwind computer as a student as far back as 1950, joined the SAGE project, and spent a total of fifteen years at Lincoln Lab. Ornstein had worked on the SAGE cross-telling protocol, for handing off radar track records from one computer to another, and later on Wes Clark’s LINC, a computer designed to support scientists directly in the laboratory, with live data. Crowther, now best known as the author of Colossal Cave Adventure, spent ten years building real-time systems at Lincoln, including the Lincoln Experimental Terminal, a mobile satellite communications station with a small computer to point the antenna and process the incoming signals.2 The IMP team at BBN. Frank Heart is the older man at center. Ornstein is on the far right, next to Crowther. The IMPs were responsible for understanding and managing the routing and delivery of messages from host to host. The hosts could deliver up to about 8,000 bits at a time to their local IMP, along with a destination address. The IMP then sliced this into smaller packets which were routed independently to the destination IMP, across 50 kilobit-per-second lines leased from AT&T. The receiving IMP reassembled the pieces and delivered the complete message to its host. Each IMP kept a table that tracked which of its neighbors offered the fastest route to reach each possible destination. This was updated dynamically based on information received from those neighbors, including whether they appeared to be unavailable (in which case the delay in that direction was effectively infinite). To meet the speed and throughput requirements specified by Roberts for all of this processing, Heart’s team crafted little poems in code. The entire operating program for the IMP required only about 12,000 bytes; the portion that maintained the routing tables only 300.3 The team also took several precautions to address the fact that it would be infeasible to have maintenance staff on site with every IMP. First, they equipped each computer with remote monitoring and control facilities. In addition to an automatic restart function that would kick in after power failure, the IMPs were programmed to be able to restart their neighbors by sending them a fresh instance of their operating software. To help with debugging and analysis, an IMP could be instructed to start taking snapshots of its state at regular intervals. The IMPs would also honor a special ‘trace’ bit on each packet, which triggered additional, more detailed logs. With these capabilities, many kinds of problems could be addressed from the BBN office, which acted as a central command center from which the status of the whole network could be overseen. Second, they requisitioned from Honeywell the military-grade version of the 516 computer, equipped with a thick casing to protect it from vibration and other environmental hazards.
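The routing tables described above work on a principle that later came to be called distance-vector routing: keep, for every destination, the best known delay and the neighbor that offers it, and revise the table whenever a neighbor reports new estimates. The sketch below is only a schematic of that idea; the real IMP code was a few hundred bytes of hand-tuned assembly, and the site names and delay figures here are invented.

```python
# A schematic of the IMP-style table update, not BBN's actual routine.
# Each entry maps a destination to (estimated delay, neighbor to use).
import math

def update_routes(table, neighbor, link_delay, neighbor_estimates):
    """Merge one neighbor's reported delay estimates into our routing table.

    An unreachable destination is reported as math.inf, so traffic
    naturally stops flowing in that direction.
    """
    for dest, their_delay in neighbor_estimates.items():
        candidate = link_delay + their_delay
        best_delay, via = table.get(dest, (math.inf, None))
        # Accept the new figure if it is better, or if our current route
        # already runs through this neighbor (its latest report supersedes
        # whatever it told us before).
        if candidate < best_delay or via == neighbor:
            table[dest] = (candidate, neighbor)
    return table

# Our IMP currently reaches SRI and UTAH via its neighbor UCSB, 0.2 delay
# units away. UCSB now reports that UTAH is 0.5 units from it and that
# SRI has become unreachable.
routes = {"SRI": (0.4, "UCSB"), "UTAH": (1.0, "UCSB")}
update_routes(routes, "UCSB", 0.2, {"UTAH": 0.5, "SRI": math.inf})
print(routes)   # {'SRI': (inf, 'UCSB'), 'UTAH': (0.7, 'UCSB')}
```

The real IMPs exchanged these estimates with their neighbors at regular intervals; the sketch is meant only to show the table structure and the update rule.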
BBN intended the armor plating primarily as a “keep out” sign for curious graduate students, but nothing delineated the boundary between the hosts and the BBN-operated subnet as visibly as that shell. The first of these hardened cabinets, about the size of a refrigerator, arrived on site at the University of California, Los Angeles (UCLA) on August 30, 1969, just 8 months after BBN received the contract. The Hosts Roberts decided to start the network with four hosts – in addition to UCLA, there would be an IMP just up the coast at the University of California, Santa Barbara (UCSB), another at Stanford Research Institute (SRI) in northern California, and the last at the University of Utah. All were scrappy West Coast institutions looking to establish themselves in academic computing. The close family ties also continued, as two of the involved principal investigators, Len Kleinrock at UCLA and Ivan Sutherland at the University of Utah, were also Roberts’ old office mates from Lincoln Lab. Roberts also assigned two of the sites special functions within the network. Doug Engelbart of SRI had volunteered as far back as the 1967 principals meeting to set up a Network Information Center. Leveraging SRI’s sophisticated on-line information retrieval system, he would compile the telephone directory, so to speak, for ARPANET: collating information about all the resources available at the various host sites and making it available to everyone on the network. On the basis of Kleinrock’s expertise in analyzing network traffic, meanwhile, Roberts designated UCLA as the Network Measurement Center (NMC). For Kleinrock and UCLA, ARPANET was to serve not only as a practical tool but also as an observational experiment, from which data could be extracted and generalized to learn lessons that could be applied to improve the design of the network and its successors. But more important to the development of ARPANET than either of these formal institutional designations was a more informal and diffuse community of graduate students called the Network Working Group (NWG). The sub-net of IMPs allowed any host on the network to reliably deliver a message to any other; the task taken on by the Network Working Group was to devise a common language or set of languages that those hosts could use to communicate. They called these the “host protocols.” The word protocol, a borrowing from diplomatic language, was first applied to networks by Roberts and Tom Marill in 1965, to describe both the data format and the algorithmic steps that determine how two computers communicate with one another. The NWG, under the loose, de facto leadership of Steve Crocker of UCLA, began meeting regularly in the spring of 1969, about six months in advance of the delivery of the first IMP. Crocker was born and raised in the Los Angeles area, and attended Van Nuys High School, where he was a contemporary of two of his later NWG collaborators, Vint Cerf and Jon Postel4. In order to record the outcome of some of the group’s early discussions, Crocker developed one of the keystones of the ARPANET (and future Internet) culture, the “Request for comments” (RFC). His RFC 1, published April 7, 1969 and distributed to the future ARPANET sites by postal mail, synthesized the NWG’s early discussions about how to design the host protocol software. In RFC 3, Crocker went on to define the (very loose) process for all future RFCs: Notes are encouraged to be timely rather than polished.
Philosophical positions without examples or other specifics, specific suggestions or implementation techniques without introductory or background explication, and explicit questions without any attempted answers are all acceptable. The minimum length for a NWG note is one sentence. …we hope to promote the exchange and discussion of considerably less than authoritative ideas. Like a “Request for quotation” (RFQ), the standard way of requesting bids for a government contract, an RFC invited responses, but unlike the RFQ, the RFC also invited dialogue. Within the distributed NWG community anyone could submit an RFC, and they could use the opportunity to elaborate on, question, or criticize a previous entry. Of course, as in any community, some opinions counted more than others, and in the early days the opinion of Crocker and his core group of collaborators counted for a great deal. In fact by July 1971, Crocker had left UCLA (while still a graduate student) to take up a position as a Program Manager at IPTO. With crucial ARPA research grants in his hands, he wielded undoubted influence, intentionally or not. Jon Postel, Steve Crocker, and Vint Cerf – schoolmates and NWG collaborators – in later years. The NWG’s initial plan called for two protocols. Remote login (or Telnet) would allow one computer to act like a terminal attached to the operating system of another, extending the interactive reach of any ARPANET time-sharing system across thousands of miles to any user on the network. The file transfer protocol (FTP) would allow one computer to transfer a file, such as a useful program or data set, to or from the storage system of another. At Roberts’ urging, however, the NWG added a third basic protocol beneath those two, for establishing a basic link between two hosts. This common piece was known as the Network Control Program (NCP). The network now had three conceptual layers of abstraction – the packet subnet controlled by the IMPs at the bottom, the host-to-host connection provided by NCP in the middle, and application protocols (FTP and Telnet) at the top. The Failure? It took until August of 1971 for NCP to be fully defined and implemented across the network, which by then comprised fifteen sites. Telnet implementations followed shortly thereafter, with the first stable definition of FTP arriving a year behind, in the summer of 1972. If we consider the state of ARPANET in this time period, some three years after it was first brought on-line, it would have to be judged a failure when measured against the resource-sharing dream envisioned by Licklider and carried into practical action by his protégé, Robert Taylor. To begin with, it was hard to even find out what resources existed on the network which one could borrow. The Network Information Center used a model of voluntary contribution – each site was expected to provide up-to-date information about its own data and programs. Although it would have collectively benefited the community for everyone to do so, each individual site had little incentive to advertise its resources and make them accessible, much less provide up-to-date documentation or consultation. Thus the NIC largely failed to serve as an effective network directory. Probably its most important function in those early years was to provide electronic hosting for the growing corpus of RFCs. Even if Alice at UCLA knew about a useful resource at MIT, however, an even more serious obstacle intervened. Telnet would get Alice to the log-in screen at MIT but no further.
For Alice to actually access any program on the MIT host, she would have to make an off-line agreement with MIT to get an account on their computer, usually requiring her to fill out paperwork at both institutions and arrange for funding to pay MIT for the computer resources used. Finally, incompatibilities between hardware and system software at each site meant that there was often little value to file transfer, since you couldn’t execute programs from remote sites on your own computer. Ironically, the most notable early successes in resource sharing were not in the domain of interactive time-sharing that ARPANET was built to support, but in large-scale, old-school, non-interactive data-processing. UCLA added their underutilized IBM 360/91 batch-processing machine to the network and provided consultation by telephone to support remote users, and thus managed to significantly supplement the income of the computer center. The ARPA-funded ILLIAC IV supercomputer at the University of Illinois and the Datacomputer at the Computer Corporation of America in Cambridge also found some remote clients on ARPANET.5 None of these applications, however, came close to fully utilizing the network. In the fall of 1971, with fifteen host computers online, the network in total carried about 45 million bits of traffic per site per day, an average of 520 bits-per-second on a network of AT&T leased lines with a capacity of 50,000 bits-per-second each.6 Moreover, much of this was test traffic generated by the Network Measurement Center at UCLA. The enthusiasm of a few early adopters aside (such as Steve Carr, who made daily use of the PDP-10 at the University of Utah from Palo Alto7), not much was happening on ARPANET.8 But ARPANET was soon saved from any possible accusations of stagnation by yet a third application protocol, a little something called email. [previous] [next] Further Reading Janet Abbate, Inventing the Internet (1999) Katie Hafner and Matthew Lyon, Where Wizards Stay Up Late: The Origins of the Internet (1996)  

Britain’s Steam Empire

The British empire of the nineteenth century dominated the world’s oceans and much of its landmass: Canada, southern and northeastern Africa, the Indian subcontinent, and Australia. At its world-straddling Victorian peak, this political and economic machine ran on the power of coal and steam; the same can be said of all the other major powers of the time, from also-ran empires such as France and the Netherlands, to the rising states of Germany and the United States. Two technologies bound the far-flung British empire together, steamships and the telegraph; and the latter, which might seem to represent a new, independent technical paradigm based on electricity, depended on the former. Only steamships, which could adjust course and speed at will regardless of prevailing winds, could effectively lay underwater cable.[1] A 1901 map of the cable network of the Eastern Telegraph Company (which later became Cable & Wireless) shows the pervasive commercial and imperial power of Victorian London. Not just an instrument of imperial power, the steamer also created new imperial appetites: the British empire and others would seize new territories just for the sake of provisioning their steamships and protecting the routes they plied. Within this world system under British hegemony, access to coal became a central economic and strategic factor. As the economist Stanley Jevons wrote in his 1865 treatise on The Coal Question: Day by day it becomes more obvious that the Coal we happily possess in excellent quality and abundance is the Mainspring of Modern Material Civilization. …Coal, in truth, stands not beside but entirely above all other commodities. It is the material energy of the country — the universal aid — the factor in everything we do. With coal almost any feat is possible or easy; without it we are thrown back into the laborious poverty of early times.[2] Steamboats and the Projection of Power As the states of Atlantic Europe—Portugal and Spain, then later the Netherlands, England, and France—began to explore and conquer along the coasts of Africa and Asia in the sixteenth and seventeenth centuries, their cannon-armed ships proved one of their major advantages. Though the states of India and Indonesia had access to their own gunpowder weaponry, they did not have the ship-building technology to build stable firing platforms for large cannon broadsides. The mobile fortresses that the Europeans brought with them allowed them to dominate the sea lanes and coasts, wresting control of the Indian Ocean trade from the local powers.[3] What they could not do, however, was project power inland from the sea. The galleons and later heavily armed ships of the Europeans could not sail upriver. In this era, Europeans could rarely dominate inland states. When it did happen, as in India, it typically required years or decades of warfare and politicking, with the aid of local alliances. The steamboat, however, opened the rivers of Africa and Asia to lightning attacks or shows of force: directly by armed gunboats themselves, or indirectly through armies moving upriver supplied by steam-powered craft. We already know, of course, how Laird used steamboats in his expedition up the Niger in 1832. Although his intent was purely commercial, not belligerent, he had demonstrated that the interior of Africa could be navigated with steam. When combined with quinine to protect European settlers from malaria, the steamboat would help open a new wave of imperial claims on African territory.
But even before Laird’s expedition, the British empire had begun to experiment with the capabilities of riverine steamboats. British imperial policy in Asia still operated under the corporate auspices of the East India Company (EIC), not under the British government, and in 1824 the EIC went to war with Burma over control of territories between the Burmese Empire and British India, in what is now Bangladesh. It so happened that the company had several steamers on hand, built in the dockyards of Calcutta (now Kolkata), and the local commanders put them to work in war service (much as Andrew Jackson had done with Shreve’s Enterprise in 1814).[4] Most impressive was Diana, which penetrated 400 miles up the Irrawaddy to the Burmese imperial capital at Amarapura: “she towed sailing ships into position, transported troops, reconnoitered advance positions, and bombarded Burmese fortifications with her swivel guns and Congreve rockets.”[5] She also captured the Burmese warships, which could not outrun her and whose small cannons on fixed mounts could not effectively put fire on her either. A depiction of an attack on Burmese fortifications by the British fleet. The steamship Diana is at right. In the Burmese war, however, steamships had served as the supporting cast. In the First Opium War, the steamship Nemesis took a star turn. The East India Company traditionally made its money by bringing the goods of the East—mainly tea, spices, and cotton cloth—back west to Europe. In the nineteenth century, however, the directors had found an even more profitable way to extract money from their holdings in the subcontinent: by growing poppies and trading the extracted drug even further east, to the opium dens of China. The Qing state, understandably, grew to resent this trade that immiserated its citizens, and so in 1839 the emperor promulgated a ban on the drug. The iron-hulled Nemesis was built and dispatched to China by the EIC with the express purpose of carrying war up China’s rivers. She mounted a powerful main battery of twin swivel-mount 32-pounders and numerous smaller weapons, and with a shallow draft was able to navigate not just up the Pearl River, but into the shallow waterways around Canton (Guangzhou), destroying fortifications and ships and wreaking general havoc. Later Nemesis and several other steamers, towing other battleships, brought British naval power 150 miles up the Yangtze to its junction with the Grand Canal. The threat to this vital economic lifeline brought the Chinese government to terms.[6] Nemesis and several British boats destroying a fleet of Chinese junks in 1841. Steamboats continued to serve in imperial wars throughout the nineteenth century. A steam-powered naval force dispatched from Hong Kong helped to break the Indian Rebellion of 1857. Steamers supplied Herbert Kitchener’s 1898 expedition up the Nile to the Sudan, with the dual purpose of avenging the death of Charles “Chinese” Gordon fourteen years earlier and of preventing the French from securing a foothold on the Nile. His steamboat force consisted of a mix of naval gunboats and a civilian ship requisitioned from the ubiquitous Cook & Son tourism and logistics firm.[7] Kitchener could only dispatch such an expedition because of the British power base in Cairo (from which it ruled Egypt through a puppet khedive), and that power base existed for one primary reason: to protect the Suez Canal.
The Geography of Steam: Suez In 1798, Napoleon’s army of conquest, revolution, and Enlightenment arrived in Egypt with the aim of controlling the Eastern half of the Mediterranean and cutting off Britain’s overland link to India. There they uncovered the remnants of a canal linking the Nile Delta to the Red Sea. Constructed in antiquity and restored several times after, it had fallen into disuse sometime in the medieval period. It’s impossible to know for certain, but when operable, this canal had probably served as a regional waterway connecting the Egyptian heartland around the Nile with the lands around the head of the Red Sea. By the eighteenth century, in an age of global commerce and global empires, however, a nautical connection between the Mediterranean and Red Sea had more far-reaching implications.[8] A reconstruction of the possible location of the ancient Nile-Suez canal. [Picture by Annie Brocolie / CC BY-SA 2.5] Napoleon intended to restore the canal, but before any work could commence, France’s forces in Egypt withdrew in the face of a sustained Anglo-Ottoman assault. Though British commercial and imperial interests presented a far stronger case for a canal than any benefits France might have hoped to get from it, the British government fretted about upsetting the balance of power in the Middle East and disrupting its textile industry’s access to Egyptian cotton. They contented themselves instead with a cumbrous overland route to link the Red Sea and the Mediterranean. Meanwhile, a series of French engineers and diplomats, culminating in Ferdinand de Lesseps, pressed for the concession required to build a sea-to-sea Suez Canal, and construction under French engineers finally began in 1861. The route formally opened in November 1869 in a grand celebration that attracted most of the crowned heads of continental Europe.[9] It was just as well that the project was delayed: it allowed for the substitution, in 1865, of steam dredges for conscripted labor at the work site. Of the hundred million cubic yards of earth excavated for the canal, four-fifths were dug out with iron and steam rather than muscle, generating 10,000 horsepower at the cost of £20,000 of coal per month.[10] Without mechanical aid, the project would have dragged on well into the 1870s, if it were completed at all. Moreover, Napoleon’s precocious belief in the project notwithstanding, the canal’s ultimate fiscal health depended on the existence of ocean-going steamships as well. By sail, depending on the direction of travel and the season, the powerful trade winds on the southern route could make it the faster option, or at least the more efficient one given the tolls on the canal.[11] But for a steamship, the benefits of cutting off thousands of miles from the journey were three-fold: it didn’t just save time, it also saved fuel, which in turn freed more space for cargo. Given the tradeoffs, as historian Max Fletcher wrote, “[a]lmost without exception, the Suez Canal was an all-steamer route.”[12] The modern Suez Canal, with the Mediterranean Sea on the left and the Red Sea on the right. [Picture by Pierre Markuse / CC BY 2.0] Ironically, the British, too conservative in their instincts to back the canal project, would nonetheless derive far more obvious benefit from it than the French government or investors, who struggled to make their money back in the early years of the canal. The new canal became the lifeline to the empire in India and beyond. 
This new channel for the transit of people and goods was soon complemented by an even more rapid channel for the transmission of intelligence. The first great achievement of the global telegraph age was the transatlantic cable laid in 1866 by Brunel’s Great Eastern, whose cavernous bulk allowed it to lay the entire line from Ireland to Newfoundland in a single piece.[13] This particular connection served mainly commercial interests, but the Great Eastern went on to participate in the laying of a cable from Suez to Aden and on to Bombay in 1870, providing relatively instantaneous electric communication (modulo a few intermediate hops) from London to its most precious imperial possession.[14] The importance of the Suez for quick communications with India in turn led to further aggressive British expansion in 1882: the bombardment of Alexandria and the de facto conquest of an Egypt still nominally loyal to the Sultan in Istanbul. This was not the only such instance. Steam power opened up new ways for empires to exert their might, but also pulled them to new places sought out only because steam power itself had made them important. The Geography of Steam: Coaling Stations In that vein, coaling stations—coastal and island stations for restocking ships with fuel—became an essential component of global empire. In 1839, the British seized the port of Aden (on the gulf of the same name) from the Sultan of Lahej for exactly that purpose, to serve as a coaling station for the steamers operating between the Red Sea and India.[15] Other, pre-existing waystations waxed or waned in importance along with the shift from the geography of sail to that of steam. St. Helena in the Atlantic, governed by the East India Company since the 1650s, could only be of use to ships returning from Asia in the age of sail, due to the prevailing trade winds that pushed outbound ships towards South America. The advent of steam made an expansion of St. Helena’s role possible, but then the opening of Suez diverted traffic away from the South Atlantic altogether. The opening of the Panama Canal similarly eclipsed the Falkland Islands’ position as the gateway to the Pacific.[16] In the case of shore-bound stations such as Aden, the need to protect the station itself sometimes led to new imperial commitments in its hinterlands, pulling empire onward in the service of steam. Aden’s importance only multiplied with the opening of the Suez Canal, which now made it part of the seven-thousand-mile relay system between Great Britain and India. Aggressive moves by the Ottoman Empire seemed to imperil this lifeline, and so the existence of the station became the justification for Britain to create a protectorate (a collection of vassal states, in effect) over 100,000 square miles of the Arabian Peninsula.[17] Britain created the 100,000-square-mile Aden protectorate to safeguard its steamship route to India. Coaling stations acquired local coal where it was available (from North America, South Africa, Bengal, Borneo, or Australia); where it was not, it had to be brought in, ironically, by sailing ships. But although one lump of coal may seem as good as another, it was not, in fact, a single fungible commodity. Each seam varied in the ratio and types of chemical impurities it contained, which affected how the coal burned. Above all, the Royal Navy was hungry for the highest quality coal. 
By the 1850s, the British Admiralty determined that a hard coal from the deeper layers of certain coal measures in South Wales exceeded all others in the qualities required for naval operations: a maximum of energy and a minimum of the residues that would dirty engines and of the black smoke that would give away the position of its ships over the horizon. In 1871 the Navy launched its first all-steam oceangoing warship, HMS Devastation, which needed, at full bore, 150 tons of this top-notch coal per day, without which it would become “the veriest hulk in the navy.” The coal mines lining a series of north-south valleys along the Bristol Channel, which had previously supplied the local iron industry, thus became part of a global supply chain. The Admiralty demanded access to imported Welsh coal across the globe, in every port where the Navy refueled, even where local supplies could be found.[18] The dark green area indicates the coal seams of South Wales, where the best steam coal in the world could be found. The British supply network far exceeded that of any other nation in its breadth and reliability, which gave their navy a global operational capacity that no other fleet could match. When the Russians sent their Baltic fleet to attack Japan in 1905, the British refused it coaling service and pressured the French to do likewise, leaving the ships reliant on sub-par German supplies. The fleet suffered repeated delays and quality shortfalls in its coal before meeting its grim fate in Tsushima Strait. Aleksey Novikov-Priboi, a sailor on one of the Russian ships, later wrote that “coal had developed into an idol, to which we sacrificed strength, health, and comfort. We thought only in terms of coal, which had become a sort of black veil hiding all else, as if the business of the squadron had not been to fight, but simply to get to Japan.”[19] Even the rising naval power of the United States, stoked by the dreams of Alfred Mahan, could scarcely operate outside its home waters without British sufferance. The proud Great White Fleet of the United States that circumnavigated the globe to show the flag found itself repeatedly humbled by the failures of its supply network, reliant on British colliers or left begging for low-quality local supplies.[20] But if British steam power on the oceans still outshone that of the U.S. even beyond the turn of the twentieth century, on land it was another matter, as we shall see next time.

America’s Steam Empire

[Apologies for the long delay on this one; a combination of writer’s block and a house move slowed me down this summer. Hopefully the next installment will follow more rapidly!] Railroads and Continental Power The Victorian Era saw the age of steam at its flood tide. Steam-powered ships could decide the fate of world affairs, a fact that shaped empires around the demands of steam, and that made Britain the peerless power of the age. But steam created or extended commercial and cultural networks as well as military and political ones. Faster communication and transportation allowed imperial centers to more easily project power, but it also allowed goods and ideas to flow more easily along the same links. Arguably, it was more often commercial than imperial interests that drove the building of steamships, the sinking of cables, and the laying of rail, although in many cases the two interests were so entangled that they can hardly be separated: the primary attraction of an empire, after all (other than prestige) lay in the material advantages to be extracted from the conquered territories. The growth of the rail system in the United States provides a case study in this entanglement. While British commercial and imperial power derived from its command of the oceans, America drew strength from the continental scale of its dominions. Steamboats had gone some way to making the vast interior more accessible, and played a supporting role in the wars that wrested control of the continent from the Native American nations and Mexico. A steam-powered fleet raided Mexican ports and helped seize a coastal base for the Army at Vera Cruz in 1847, but the Army then had to march hundreds of miles overland to capture Mexico City, supplied by pack mules. Likewise, steamboats delivered troops and supplied firepower in the numerous Indian Wars of the nineteenth century, when a nearby navigable waterway existed.[1] But more often than not, the Army relied on literal horsepower. A Steamboat on the Missouri River. The technology that did bind the continent once and for all by steam power was the railroad. The early development of rails in the U.S. recapitulated the British story, on a smaller scale and in a compressed timeframe: horse-drawn mine rails led to small local horse-drawn freight networks, which were followed in turn by intercity lines carrying a mix of passengers and freight, which then, finally and gradually, adopted steam locomotives as their exclusive source of rail traction. All the pieces were thus in place for a rail boom in the U.S. in the 1830s, roughly contemporaneous with the explosion of railways in Britain.[2] The American merchant class threw their money at rail projects, drawn to the new technology by avarice and driven towards it by fear. The Erie Canal was the chief symbol and author of that fear. Completed in 1825, it threatened to drain all the wealth of the West into New York City via the Great Lakes. Other leading mercantile cities on the seaboard—such as Philadelphia, Baltimore, and Charleston—risked being bypassed and left behind without a gateway to the growing population and commerce of the west. Their states reacted with grand projects to compete with New York’s.[3] Cutting a canal of their own was one option, of course, but without an existing watercourse going in the right general direction, a feature which some cities like Baltimore entirely lacked, this would prove very difficult. The Appalachians, moreover, presented a daunting obstacle to an all-water route to the west. 
Tunnels could bore through high ground, locks and inclines could lift boats over it, but all at a formidable cost. And even with horse traction (which remained common throughout the 1830s), rail wagons could travel faster than a towed canal boat. So, by 1830, several railways (such as the Baltimore & Ohio, or B&O, intended to link the city to the river of that name, though it would take over two decades to do so) began to stretch westward. This twentieth-century relief map showing the route of the Baltimore and Ohio Railroad gives a sense of the daunting geography that had to be dealt with. [George P. Grimsley, “The Baltimore & Ohio Railroad,” XVI International Geological Congress (Washington: 1933)] Some cities that had already launched canal companies switched over to rail as events in Britain made the practicability of the technology clear. Pennsylvania, despite having already invested heavily in canals, abandoned a plan to connect the Delaware and Susquehanna by canal based on intelligence from England. William Strickland, a disciple of Henry Latrobe who visited England in 1825 to learn about the latest developments in transportation, advised the government that railroads were the future, so the state instead backed an eighty-two-mile railroad from Philadelphia to Columbia.[4] In the early years, American rail technology depended heavily on engineers like Strickland who had traveled to Britain to learn about locomotive and railroad design. The first major rail lines in Massachusetts, New Jersey, Pennsylvania, and Maryland all imitated the techniques used to construct the Liverpool and Manchester line in England.[5] To the extent that these early railways were steam-powered, they also relied mostly on locomotives imported from Britain or modeled on British exemplars. Many early American locomotives either came straight from the workshop of George and Robert Stephenson in Newcastle, or copied the design of the Stephensons’ Samson or Planet locomotives.[6] Old Ironsides, the first locomotive built by Philadelphia manufacturer Matthias Baldwin. It is a near-exact copy of the Stephensons’ Planet. Three factors gradually shunted American railroad technology off onto a different track from that of its British forebears: the presence of the Appalachians, the relative dearth of capital and labor west of the Atlantic, and the abundance there of cheap land and timber. The dominant railway pattern in Britain consisted of heavily graded routes made as flat and straight as possible, with gentle curves that both kept the locomotive and wagons secure on the tracks and minimized the cost of land acquisition. They were built to last, with bridges and viaducts constructed of sturdy stone and iron.[7] The same kind of construction could be found in the early railways on the eastern seaboard: the Thomas Viaduct, for example, on the B&O line, spanned (and still spans) the Patapsco River on arches of solid masonry. The Thomas Viaduct, typical of the British style in early American railroad design. But American builders could not afford to take the same approach as they moved westward, crossing the high mountains and vast distances required to reach the small towns of the Ohio valley and other points west. The United States for the most part still embodied the Jeffersonian ideal of a rural, agrarian society, and especially so in the west, where only 7% or so of the population lived in towns. 
Larger cities with a wealthy merchant class, a robust banking system, and capital to spare existed only on the coasts.[8] A scrappier approach would be needed to make railways work in this context. Cheap construction trumped all other factors. In the early years, builders frequently resorted to flimsy “strap-iron” rails, consisting of a thin veneer of iron nailed to a wooden rail. They avoided expensive tunnelling or levelling operations to cross hills or mountains in favor of steeper gradients and tighter curves: by 1850, the U.S. had dug only eleven miles of railway tunnels compared to eighty in Britain, despite having several times Britain’s total track mileage by that point, much of which crossed mountainous terrain. As rails moved westward, American rail builders figured out how to construct bridges of timber trusses, a material readily available in the heavily wooded Ohio Valley, rather than iron or heavy stone construction like the Thomas Viaduct.[9] A wooden trestle bridge over the Genesee River in New York, more typical of the fully developed style of American railroad building. The steep grades and sharp curves of American railways required changes to locomotive design: more powerful engines to haul loads up steeper slopes, and swiveling wheels for navigating turns without derailing. In 1832, John B. Jervis, chief engineer for New York’s Mohawk and Hudson Railroad, devised a four-wheeled truck for the front of his locomotive, which could rotate independently of the main carriage, allowing the locomotive to turn through much tighter angles. Other builders quickly copied the idea. Matthias Baldwin of Philadelphia, who went on to become the most prolific builder of American locomotives, had modeled his first (1831) locomotive on the Stephenson Planet. By 1834, however, he had developed a new design that incorporated Jervis’ bogie, a design that he would sell by the dozen over the next decade.[10] A few years later, a competing Philadelphia locomotive builder, Joseph Harrison Jr., developed the equalizing beam to distribute the weight of the vehicle evenly over multiple axles. This opened the way to locomotives with four or more driving wheels, providing the power needed to ascend mountain grades.[11] Baldwin’s 1834 Lancaster. Note that the front four wheels can swivel independently of the rear drive wheels. Iron Rivers One of the defining processes of modern times has been the decoupling of humanity from the cycles and contours of the natural world, contours and cycles that shaped its existence for millennia. Steam power, as we have seen before, abetted this process by providing a free-floating source of mechanical power, using energy “cheated” from nature by drawing down reserves of carbonaceous matter stored up for eons underground. The course of rivers and streams, which had guided human settlement since humans began settling, provides a case in point. A river provides a source of drinking water and a natural sewer, but also a highway for travel and trade. Since before recorded history, people had moved bulk goods (such as food, fodder, fuel, timber, and ore) mainly by water. The steamship allowed people to exploit such waterways more intensively, but then rail lines appeared and extended existing watersheds, acting as new tributaries. 
Finally, the main-line railroads that emerged by mid-century created artificial iron rivers, entirely independent of water, draining goods from their catchment area out to a major commercial hub where they might find a buyer.[12] As these rails reached westward in the United States, they also drained the life out of the steamboating trade, which faded to a shadow of its former self. Trains ran several times faster, followed the straightest course possible from town to town, and—unaffected by drought, flood, or freeze—operated year-round in virtually any weather.[13] Efficient, reliable, and immune from the whims and cycles of nature, they were modernity incarnate. As Mark Twain reflected in 1883, on revisiting St. Louis for the first time in decades: …the change of changes was on the ‘levee.’ …Half a dozen sound-asleep steamboats where I used to see a solid mile of wide-awake ones! This was melancholy, this was woeful. The absence of the pervading and jocund steamboatman from the billiard-saloon was explained. He was absent because he is no more. His occupation is gone, his power has passed away, he is absorbed into the common herd, he grinds at the mill, a shorn Samson and inconspicuous.[14] By the 1880s, major riverfront cities such as Cincinnati and Louisville, cities that owed their existence to the Ohio river trade, cities molded by the millennia-old pattern of waterborne commerce, spurned the natural highway that lay at their feet. They shipped out some 95% of their goods—from cotton and tobacco to ham and potatoes—by rail.[15] The steamboat had clearly lost out. But in the long run, none of the also-ran cities of the eastern seaboard—such as Baltimore, Philadelphia and Charleston—gained much on their peers from their investments in the railroad, either. New York continued to dominate them all. Instead, the biggest winner of the dawning American rail age emerged at the junction of the new iron rivers of the Midwest; a vast new metropolis was rising from the mudflats of the Lake Michigan shoreline on the back of the railroad. Player With Railroads Rivers and harbors had given life to many a great metropolis over the millennia; Chicago was the first to be quickened by rails. Not that water had nothing to do with it: Chicago’s small river ran close to the watershed of the Illinois River, giving it huge potential as a water link that could connect shipping flows on the Mississippi River system to the Great Lakes (and thus, via the Erie Canal, to New York, the commercial nexus of the entire country). In the 1830s, Chicago was still a muddy little trading entrepot, its hinterlands recently wrested from the Potawatomi Indians, but a speculative real estate bubble took off on the assumption that it would explode in importance once a canal was built to connect the two water systems.[16] That bubble collapsed with the crash of 1837, and the hoped-for canal did not finally appear until April 1848, with the help of the state and federal government.[17] By that time, the first of the railroads that would soon overshadow the canal in economic and cultural importance had already begun construction. The Galena and Chicago Union was overseen by Chicago bigwigs, but funded mainly by farmers along the proposed route, who opened their pockets in the (justified) belief that a railroad would drive up the value of their crops and their lands. 
By the start of the Civil War, the Galena and Chicago formed just one part of a vascular system of rails fanning out from Chicago across Illinois and southern Wisconsin to various points on the Mississippi—Galena to the northwest, Rock Island west, and Quincy southwest—that brought farm produce from the hinterlands into the city and returned with manufactured goods—like the new, Chicago-made McCormick Reaper. Chicago’s railroads circa 1866. The lines fanning out to the west (such as the Chicago & North Western and Chicago, Rock Island & Pacific) connected Chicago to the natural resources of the Midwest. The trunk lines along Lake Michigan (Pittsburgh, Fort Wayne and Chicago; Lake Shore & Michigan Southern) connected it to the markets of the East. [David Buisseret, Historic Illinois from the Air (Chicago: University of Chicago Press, 1990), p.135] These lines formed the first of two different “railsheds” that served Chicago. The other, owned and operated mostly by eastern capital, consisted of a series of parallel trunk lines that formed an arterial connection to the cities of the east, especially New York. The base of Lake Michigan—a barrier, rather than a highway, from the point of view of the railroads—served as a choke point that brought both of these rail systems to Chicago. Competition among the various entering lines (and, in the ice-free months, with lake traffic for bulk goods) kept rates low and furthered Chicago’s advantages. The western rail system gathered in the products of the plains and prairies of the West—grain, livestock, and timber—while the eastern system disgorged them en masse to hungry markets. The city in between served as middleman, market maker, processor, storehouse, and more: “Hog Butcher for the World, / Tool Maker, Stacker of Wheat, / Player with Railroads and the Nation’s Freight Handler.”[18] Chicago’s rival as gateway to the West, St. Louis, had long served as the concentration point for goods flowing from the territory north and west of it along the Mississippi and Missouri rivers, the former stomping grounds of Lewis and Clark. But as the Chicago railroads reached the Mississippi, they siphoned that traffic off to the east, starving St. Louis of commercial sustenance. An 1870s rendition of the Union Stock Yards of Chicago, where the livestock of the plains became meat. The area is now an industrial park. The rivermen fought a brief rear guard action in the mid-1850s: they tried to block the railroads from spreading further west by having the Chicago and Rock Island bridge across the Mississippi declared a hazard to navigation in 1857. Future president Abraham Lincoln traveled to Chicago to spearhead the case for the defense, and secured a hung jury, which was, practically speaking, a victory for the railroad interests.[19] Towns like Omaha, Nebraska, which might have naturally oriented their trade downriver to Missouri, now looked east. As one correspondent reported circa 1870, “Omaha eats Chicago groceries, wears Chicago dry goods, builds with Chicago lumber, and reads Chicago newspapers. The ancient store boxes in the cellar have ‘St. Louis’ stenciled on them; those on the pavement, ‘Chicago.’”[20] St. Louis was not the only party to suffer from the westward expansion of the railroads, however, and its fate was far from the bleakest. Annihilating Distance In the Mexican-American War of 1846-1848, the United States had acquired vast new territories in the West, including Alta (upper) California, on the Pacific coast. 
Then, shortly thereafter, James Marshall found flecks of gold in the waters of the sawmill he had established in the hills east of Sutter’s Fort, site of the future city of Sacramento. Word of wealth running in the streams drew the desperate, foolish, and cunning to the new territory by the hundreds of thousands. For those on the Atlantic seaboard, the fastest route to instant riches required two steamer legs in the Gulf of Mexico and the Pacific, bridged by a short but difficult crossing of the malarial isthmus of Panama; this could be done in a month or two if the steaming schedules lined up favorably. The sea journey clear round the southern tip of South America and back took two or three times as long, but avoided the risks of tropical disease. The direct landward journey offered the worst of both worlds: it took just as much time as the Cape Horn route with the added risk of death by illness or injury, along with the nagging fear of Indian attack. Only those who could not afford sea passage chose to go this way.[21] As California’s population boomed and Pacific trade began to expand, any American with a lick of avarice could see that great profit would be derived from a safer and more reliable means of reaching the Pacific, that transcontinental rail links would provide the best such means, and that—treaties and other promises notwithstanding—the natives living along the way would have to be pushed aside in the name of progress. The Kansas-Nebraska Act only left the rump of Oklahoma as ‘unorganized territory’ not (yet) claimed for the use of white settlers. That work began in earnest in the mid-1850s. The Kansas-Nebraska Act of 1854, best known for its calamitous escalation of the rising tensions over slavery that would soon engender the Civil War, originated in the desire of rail promoters like Senator Stephen Douglas of Illinois to open a route to the west. Douglas preferred a route through Nebraska along the flatlands of the Platte River valley, but that was treaty-bound Indian Territory, designated for tribes such as the Kickapoo, Delaware, Shawnee, and others. No investors would touch a railroad company that did not pass through securely white-controlled land, and so the Indian Territory would have to give way to new American territories, Kansas and Nebraska. Those previously living there could either decamp to parts still further west or be herded into the last remnant of the Indian Territory in Oklahoma. Anyone paying attention could foresee that neither refuge would stay a refuge for long.[22] The transcontinental railroad exhibited the typical American style of building, with wooden trestle bridges such as this one. This locomotive has four drive wheels, to provide more hauling power. The machinations involved in planning and funding the transcontinental route were extensive enough to fill entire books. The Civil War provided the crucial impetus to end the talking and start the building, because the federal government no longer needed to take Southern opinion into account in its planning. As Douglas had advocated, the route began at the junction of the Platte with the Missouri River at Omaha and stretched west across plains and mountains to Sacramento, the epicenter of the Gold Rush. Despite a handful of raids that damaged equipment or killed small parties of workers or soldiers, the Cheyenne, Sioux, and other tribes that lived in the area could do little to impede the coming of the iron road, which could count on the protection of the U.S. Army. 
In addition to providing military cover, the government made the whole enterprise worthwhile for the railroad companies (the Central Pacific and Union Pacific) by allotting them generous land grants along the right-of-way, which they could sell to farmers or borrow against directly.[23] Railroads of the Western U.S. in 1880 [John K. Wright, ed., Atlas of the Historical Geography of the United States (Washington: Carnegie Institution, 1932)] The presence of the new rail route, along with numerous other lines that sprouted up across the West (often with land grants of their own), then accelerated further dispossession. They brought western lands into easy reach of eastern or immigrant settlers, and made those lands attractive by providing a way for those settlers to get their farm produce to market. The railroads also brought destruction to the keystone resource on which the livelihood of the equestrian tribes of the Great Plains depended. For decades, trains had carried domesticated livestock to urban slaughterhouses; the new lines across the Great Plains now made it profitable for white hunters to slaughter the bison herds of the plains in situ and then send their robes east by rail.[24] Railroad Time North America became a continent bound by steam, to the detriment of some but the great good fortune of others. By 1880, a rail traveler in Omaha could reach not just Sacramento, but also Los Angeles, Butte, Denver, Santa Fe, and El Paso. By 1890, the white population spreading along these rails had so completely covered the West that a distinct frontier of settlement ceased to exist.[25] Nothing better symbolizes the transformation of the United States into a railroad continent (not to mention the general power of steam to supplant natural cycles with those convenient to human economic activity) than the dawn of railroad time. In the early 1880s, the country’s railroad companies exercised their power to change the reckoning of time across the entire continent, and, for the most part, their change stuck. Traditionally, localities would set their clock to the local solar noon: the time when the sun stood highest in the sky. But this would not do for rail networks that spanned many stations; trains, unlike any earlier form of travel, could be scheduled to the minute, and they needed a standard time to schedule against. So, each rail company began keeping its own rail time (synced to the city where it was headquartered), which it used across all of its stations: in April 1883, forty-nine distinct railroad times existed in the United States.[26] In that same month, William F. Allen, a railroad engineer, put forth a proposal to a convention of railroad managers to standardize the entire U.S. rail system on a series of hour-wide time zones. This would satisfy various pressures: from scientists for a system of time they could use to align measurements across the country and the globe, from state governments for more uniform time standards, and from travelers for easier-to-understand timetables. Britain had already adopted Greenwich Mean Time as its national time for similar reasons (and by a similar process – it had begun as a country-wide railway time in 1847 before being adopted by the government in 1880). The companies duly implemented the system in November 1883, and by March of the next year, most of the major cities in the U.S. 
had adjusted their clocks to conform to the new railroad time system.[27] As shown in this map from 1884, the railroad time system does not correspond exactly to the modern U.S. time zones (adopted by federal law in 1918), but it is recognizably similar. We have, by now, wandered a good way down the stream, exploring the consequences of the steamboat and locomotive, the most romantic and striking symbols of the age of steam. At this point we must make our way back up to the central channel of our story, resuming the development of the technology of the steam engine, the prime mover itself.

A Craving for Calculation

In 1965, Patrick Haggerty, president of Texas Instruments (TI), wanted to make a new bet on the future of electronics. In that future, he believed (a theme he frequently expounded), the use of electronics would become “pervasive.” A decade before, he had pushed for the development of a transistor-based pocket radio, to demonstrate the potential of solid-state electronics (those based on small pieces of semiconductor material cut from a wafer, instead of bulbous vacuum tubes). Now a new form of electronic component was on the rise: integrated circuits that packed dozens, or even hundreds, of components onto a single semiconductor chip. With Jack Kilby, an integrated circuit pioneer and now one of TI’s top research directors, Haggerty conceived of a new consumer product worthy of this new age: a calculator that could “fit in a coat pocket and sell for less than $100.” At the time, a typical calculator had the size, shape, and weight of a typewriter and cost well over $1000 (over $10,000 in 2024 dollars).[1] The Texas Instruments Cal Tech prototype. With no good options for an electronic display, output was instead printed onto a paper tape [National Museum of American History]. You could call it prescient or premature; either way, Haggerty’s goal proved just out of reach. By the end of 1966, Kilby and his team had built a prototype calculator (code named “Cal Tech”): the most compact ever made, at less than three pounds and about four by six by two inches. But it contained four half-inch integrated circuits of about 1,000 transistors each. TI could produce such dense chips only at an experimental scale: each wafer of silicon printed with copies of the Cal Tech circuits yielded only a small number of viable, error-free chips. No one had yet built a semiconductor plant that could realize Haggerty’s dream. But the world’s electronic manufacturing capabilities were advancing at an unprecedented speed. By 1972, what had been impossible suddenly became commonplace, and pocket calculators began selling by the millions.[2] The story of the pocket calculator provides the perfect preamble to the story of the personal computer. Many of the actors that figure in it—semiconductor manufacturers Intel and MOS Technology; calculator makers Commodore, MITS, and Sinclair; electronics retailer Radio Shack; and even individual calculator enthusiasts like Steve Wozniak—will recur in the story of the personal computer. Just like the computer, calculators that had served as specialized tools purchased by organizations became consumer goods that could be found in millions of homes. In the process, they raised cultural questions that would reappear with the personal computer, as in both cases many people found themselves acquiring the latest electronic fad without a clear sense of what it was actually for. Most importantly, the calculator put in place the commercial conditions for the personal computer’s emergence: it not only served as the main engine for the relentless drop in semiconductor price-per-component in the first half of the 1970s, it also induced the creation of the first commercial microprocessor, which put all the core functionality of a computer on a single chip. However, the calculator differs from the personal computer in one very significant way: calculators slid directly down the market from pricey machines owned by organizations to birthday gifts handed out by middle-class parents. 
At incredible speed (far faster than computers) calculators became as commonplace as wristwatches; indeed, it wasn’t long before manufacturers put calculators in wristwatches. Though the market leaders changed rapidly as the technology advanced, there was no disruption from below, no new path blazed by a doughty band of rugged entrepreneurs. We will have to consider later just why that was the case. Integrated Circuits As is obvious from the story of the Cal Tech, the essential prerequisite to all of these developments lay in the microchip, or integrated circuit. The pocket calculator posed a few other engineering challenges, most notably how to display numbers while running off of a small battery, but none of them mattered without access to chips that could pack hundreds or thousands of circuit components (and the wires to connect them) into a tiny area. The primary logic component for calculators (and most advanced electronics) was the transistor: a tiny sliver of semiconductor, doped with impurities to let it act as an electronic switch, then typically packaged into a metal or plastic container about the size of a pencil eraser. Three wires protruded from the package for making connections to other components (two for the main voltage flow passing through the transistor, and a third control wire to turn that flow on or off). No matter how tightly packed, thousands of such independent transistors could never fit inside a case that would then fit inside your pocket, not to mention how much it would cost the manufacturer to pay workers to assemble those components together. Modern transistors in a variety of different packages [Benedikt.Seidl]. Everyone in the industry knew that long-term progress in electronics depended on some kind of solution to the assembly problem. As the number of components in circuits grew, the number of manufacturing steps also grew, and manufacturing error rates multiplied—a device with one thousand components, each of which a skilled worker could connect with 99.9% reliability, had a 63% chance of having at least one defective connection. The search for an end to this “tyranny of numbers” drove many research projects in the late 1950s, most of them funded by the various arms of the United States military, all of whom foresaw an unending appetite for ever-more-sophisticated electronics to control their weaponry and defense systems. The military-funded projects included “micro-modules” (individual components that would snap together like tinkertoys), “microcircuits” (wires and passive components etched onto a ceramic substrate into which active components, like transistors, could be connected), and “molecular electronics” (nanotechnology avant la lettre).[3] Fairchild Semiconductor finally cracked the puzzle in 1959, drawing on a new transistor design with a flat surface and silicon dioxide coating developed at Bell Labs. Jean Hoerni, one of the so-called “traitorous eight” who had recently defected from Shockley Semiconductor to form Fairchild, figured out how to print transistors onto a wafer by first letting the silicon grow a protective coating of oxide, then etching away holes which could be doped with impurities to create the transistors: the doping would not affect the areas still covered with oxide. Fairchild’s head of research and development, Robert Noyce, realized that this “planar process” of transistor manufacture would enable integrated circuits. 
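A quick aside on the arithmetic behind the 63% figure quoted above: it is simply the compounding of many small, independent chances of error. Here is a minimal sketch in Python (purely illustrative; the function name is invented for this example), assuming each of 1,000 hand-made connections independently succeeds with probability 0.999:

def chance_of_at_least_one_bad_joint(connections, per_joint_reliability):
    # The device is good only if every joint is good; joints are assumed independent.
    return 1 - per_joint_reliability ** connections

print(chance_of_at_least_one_bad_joint(1000, 0.999))  # about 0.632, i.e. roughly 63%

Push the component count toward ten thousand and the share of good devices collapses toward zero, which is the "tyranny" the industry was trying to escape.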
No one had been able to deposit metal wires directly onto raw semiconductor because it would destroy the components underneath. But Noyce saw that the protective oxide layer would prevent that, while still allowing the wires to link up the transistors through the carefully-etched windows.[4] Robert Noyce pictured in front and center of the “traitorous eight” who left transistor pioneer Shockley Semiconductor to form Fairchild Semiconductor. Jean Hoerni is second from right, and Gordon Moore, of Moore’s Law, at far left. The integrated circuit eliminated the tyranny of numbers by eliminating effectively all human labor from the manufacturing process, reducing circuit-building to a chemical process of deposition, etching, and doping. Making ever-smaller components and ever-denser circuits became a mere matter of process improvements, with no fundamental barrier to higher density and improved yields other than “engineering effort,” as Fairchild researcher Gordon Moore observed in the paper that gave birth to “Moore’s Law.”[5] As late as 1965, when Moore wrote his famous paper, integrated circuits remained an expensive, niche technology used mainly in aerospace systems for the military and NASA, where reliability and reducing size and weight were all-important. But because of the ever-greater density and reduced costs that he had predicted, that changed very quickly. By the end of the decade, it became reasonable to consider putting integrated circuits into a mere calculator. Moore’s original chart depicting the scaling of chip density that became known as Moore’s Law. The Calculator Business At mid-century, the typical calculator very much resembled a typewriter, complete with a moving carriage, and cost somewhere in the neighborhood of $1000. It operated mechanically, usually with the aid of an electric motor to drive the machinery.[6] Then, over the course of the 1950s and 1960s, the industry recapitulated the previous two decades of the history of the programmable computer, developing calculators based on relays (electromagnetic switches), vacuum tubes, and then transistors.[7] Friden STW-10 mechanical calculator. In the mid-1960s, the market for electronic calculators amounted to a bit over one hundred million dollars. They cost more than their mechanical equivalents and were no smaller—their advantage lay in speed of calculation, quiet operation, and the ability to compute non-linear functions. At first, American office equipment makers like SCM (Smith-Corona Marchant), Friden (a division of Singer, once famous for its sewing machines), and Burroughs dominated the market, but new competitors appeared later in the decade, especially from Japan, just as microchips reached the crossover point where they became economical for mass-market applications.[8] The 1965 SCM Marchant Cogito 240 electronic calculator. It retailed for over $2000 [National Museum of American History]. Throughout the 1960s and into the 1970s, the U.S. retained a dominating lead in semiconductor manufacturing. Only American factories owned by American companies could produce the most advanced chips. Japanese manufacturers lagged behind, but offered significantly cheaper labor for assembly of electronic components. These relative economic advantages proved important to how the pocket calculator market played out. 
In the late 1960s, even the most compact electronic calculators still required the assembly of many chips and other components, and so Japanese companies combined their growing manufacturing expertise with lower labor costs to undercut American calculator makers—buying chips from U.S. factories, shipping them to Japan for assembly, and then shipping the assembled calculators back to American buyers. Casio, Sharp, and Canon (formerly makers of electro-mechanical calculators, radios and televisions, and cameras, respectively) all became major players in the calculator market in this way. Driven by competition and the ever-improving economics of semiconductor manufacturing, prices plummeted and the market ballooned. By 1970, half of all metal-oxide semiconductor (MOS) chips (the most rapidly growing manufacturing process, and soon to become the only one that mattered) went into calculators.[9] Thus the calculator became both the prime beneficiary and the prime mover of the virtuous cycle of Moore’s Law: greater sales volume funded production improvements which reduced costs and produced greater sales volume. Already by 1971, the dream of the Cal Tech came within reach: calculators the size of, if not yet a pocket, at least a paperback, that cost $200 or less. A bevy of small calculator makers rushed into the market to soak up the growing demand—including MITS, the small New Mexican electronics outfit that would later produce the Altair, and Sinclair Radionics, a small British firm whose principals would go on to found two of the most successful personal computer businesses in the United Kingdom. An important player in the early personal computer industry, MOS Technology, grew its initial business on the back of the calculator market. Calculator maker Allen-Bradley didn’t want to be solely dependent on Texas Instruments for chips, so it turned to the upstart MOS as a second-source supplier. Shortly thereafter, MOS became a supplier for another calculator maker, Commodore Business Machines, which would later try its hand at personal computers as well. We will have more to say about all of these companies later. Many of the new entrants were American firms. Bowmar, a small electronics outfit from Fort Wayne, Indiana, became, for a few years, the largest producer of pocket calculators. For the pendulum had swung again: by this time integrated circuits had become so dense (sometimes containing thousands of transistors) that calculators only required a handful of chips, and so labor became a smaller factor in production costs. It no longer paid to ship chips across the ocean for assembly into calculators in Japan. American calculator-makers boomed as the market continued to swell with incredible speed: in 1973, seven million pocket calculators were sold worldwide (a figure that personal computers did not reach until 1985), and by this time many models were indeed truly pocket-sized.[10] The Bowmar 901B calculator, introduced in the fall of 1971 [National Museum of American History]. Then, in 1974, the market shifted yet again. Semiconductor manufacturers, tired of watching their chips go out the door to calculator makers who profited by throwing them into a plastic case along with a few buttons, decided to do it themselves and cut out the middleman. Prices plummeted yet further, squeezing profit margins and threatening even roaring successes like Bowmar with destruction if they could not vertically integrate and start making their own chips. Smaller companies had no hope of doing even that. 
Caught in a vise between falling prices and the wholesale cost of chips, all of the personal-computer-adjacent companies we met earlier—MITS, Sinclair, MOS Technology, and Commodore—came to a crisis point. The Calculator and the Microprocessor All of this churn in the calculator market had a very important side effect: the introduction of the first commercial microprocessors. Though the microprocessor is sometimes called “a computer on a chip,” that is a slight exaggeration. These chips did not by themselves constitute a complete computer, but they contained all the basic logic and arithmetic functions needed to perform any computation, and (unlike most chips at the time, which were hard-wired for a particular task) they accepted programmed instructions, allowing a single chip to support many different applications. It began with a Japanese company. The Nippon Calculating Machine Corporation was one of many mechanical calculator makers that pivoted (or tried to pivot) to electronic calculators in the late 1960s. To project a high-tech image more in line with its new products, it rebranded in 1967 as the Business Computer Corporation, or Busicom.[11] Almost immediately, however, Busicom faced a new challenge that required another pivot: rival Japanese calculator maker Sharp partnered with American chipmaker Rockwell to create a calculator that crammed all the necessary components into just four chips. Busicom began looking for its own U.S. partner who could work the same kind of microchip magic, and found two: Mostek for its high-volume calculator designs (not to be confused with MOS Technology), and Intel for fancier models. Both had been recently founded by employees breaking away from established companies: Mostek from Texas Instruments and Intel from Fairchild Semiconductor (where the integrated circuit had been born). Both companies intended to exploit the new metal-oxide semiconductor (MOS) manufacturing process, which could cram hundreds or even thousands of transistors onto a single chip. The Intel founders, who included Robert Noyce, intended to mass produce MOS semiconductor memories for computers, aiming to displace the then-dominant magnetic core memory. But that business would take time and money to grow; they were happy to have side gigs like the Busicom contract to generate income in the meantime. In June 1969, three Busicom employees arrived at Intel’s Santa Clara, California, offices to kick off the collaboration, among them twenty-five-year-old Masatoshi Shima. Shima had studied chemistry at university, but couldn’t find a job in that field when he graduated in 1967, so he became a programmer at Busicom instead, then transferred to a hardware engineering position at its Osaka plant. Due to his programming experience, he was assigned to develop the “high-end” design for what became the Intel collaboration: a new set of chips that would use programmed logic (much like a computer, but with a fixed program stored on a read-only memory chip, or ROM) rather than logic hardcoded into the circuits. The same chipset, Busicom hoped, could be re-used in a variety of different calculator models and other devices, by simply supplying a different ROM. Now Shima presented Busicom’s design to Intel, with about eight chips (the exact number varies depending on the account): two to perform decimal arithmetic; a shift register to store intermediate results; chips for interfacing with the display, keyboard, and printer; and the ROM. 
Intel gave the responsibility for executing the plan to one of its experienced engineers, Ted Hoff—but Ted didn’t like it. Hoff, an engineer in his early 30s from New York with experience working on computers through post-graduate work at Stanford, believed that the many large chips required by the Busicom design would make it impossible to build at the contracted price. He came up with an alternative design based on the streamlined architecture of the computer he had been working with most recently, Digital Equipment’s PDP-8. It would have a much leaner instruction set for its programs than the Busicom design, offloading complexity by storing intermediate data in a memory chip, Intel’s bread-and-butter. In total, Hoff’s proposal called for only four chips: the ROM (later designated 4001), the memory (4002), a register for storing the active working result (4003), and the processor to execute instructions (4004). This was half the number of chips proposed by Busicom, and, given the greater simplicity of the chips, it would sell for less than half as much. With just a single chip to execute all of the programmed logic (the 4004), this was the first microprocessor to go into commercial development.[12] Ted Hoff holding an Intel chip (not the 4004). Noyce loved the microprocessor concept, and his backing gave Hoff the cover to push the idea forward, even though his official mandate consisted only of ushering the Busicom design through to production. Though Noyce made some enthusiastic pronouncements about how everyone would someday own their own computer, the slow, barely capable 4004 in no way threatened the computer industry. What was significant about the microprocessor for Intel in 1971 was not that it was an inexpensive computer, but that it computerized electronics.[13] The distinction between fixed hardware and malleable software made computers incredibly flexible; introducing the same distinction into the world of electronics made it possible to create new devices without the expense in time and money (to buyer or seller) of designing and manufacturing new chips. Instead, a client could simply write new instructions and flash them onto a ROM to customize an already existing set of chips to their needs. Integrated circuits had solved the problem of scaling circuit production; now microprocessors could solve the problem of scaling circuit design, by moving most of the design work into cheap software. As of the summer of 1969, though, Intel only had a design sketch. Hoff hadn’t solved the many concrete engineering problems involved in completing the implementation, nor did he have the time or the expertise to solve them. In early 1970, Intel hired Italian-born engineer Federico Faggin to work out the exact chip layouts for the four chips, and finally, in 1971, Busicom was able to sell its product, the 141-PF. It was the first calculator powered by a microprocessor. The Busicom 141-PF [Christian Bassow / CC BY-SA 4.0]. Momentous as this may seem in retrospect, at the time, it hardly made a ripple. Instead, Busicom’s breakthrough product came from its other collaboration, with Mostek, which resulted in the LE-120A “Handy LE”, also introduced in 1971. At just five-by-three-by-one inches, it was the first truly pocket-sized calculator.[14] Instead of a microprocessor, it used a “calculator-on-a-chip,” a single piece of silicon with over two thousand transistors that could perform all of the functions needed by the calculator. 
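To make the contrast between these two approaches concrete (a chip hardwired for one job versus a general executor driven by a replaceable ROM), here is a toy sketch in Python. It is purely illustrative: the two-instruction "ROM" format and the function names are invented for this example and have nothing to do with Intel's actual 4004 instruction set. The point is only that the same executor, paired with different ROMs, behaves as a different machine:

def hardwired_markup(price):
    # A "calculator-on-a-chip": one job, frozen into the silicon.
    return price * 1.05

def run(rom, x, y):
    # A "microprocessor": a fixed executor that does whatever its ROM says.
    acc = 0
    for op, arg in rom:                      # fetch and decode each instruction
        if op == "LOAD":                     # load one of the two inputs
            acc = x if arg == "x" else y
        elif op == "ADD":                    # add one of the two inputs
            acc += x if arg == "x" else y
        elif op == "MUL":                    # multiply by a constant
            acc *= arg
    return acc

markup_rom = [("LOAD", "x"), ("MUL", 1.05)]  # a 5% markup "calculator"
sum_rom    = [("LOAD", "x"), ("ADD", "y")]   # a plain adding machine

print(run(markup_rom, 200, 0))               # 210.0
print(run(sum_rom, 2, 3))                    # 5

Swapping the ROM changes the product while the silicon underneath stays the same, which is exactly what made a single chipset reusable across many calculator models.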
Despite emerging from the dynamics of the calculator market, the microprocessor didn’t stick there: sales volume had grown so high in the early 1970s that it continued to be more economical to make custom calculator chips not designed as general-purpose processors. Busicom Handy-LE. But this doesn’t mean the microprocessor was a flop. It found a growing market in a wide variety of other applications, from machine tools to automobiles. Intel almost immediately followed up the 4004 with the more powerful 8008 processor, developed in parallel with the 4004 as part of another client project with computer terminal maker Datapoint. Competitors launched their own microprocessor designs soon after.[15] For the personal computer, of course, the microprocessor was the sine qua non, a necessary (but not sufficient) precondition for its creation. Calculator Culture Meanwhile, within just a couple of years, the pocket calculator had exploded in market size, reaching a breadth of audience that it would take the personal computer a decade of growth to match. The industry also developed very differently, expanding to a totally different set of buyers and sales channels than those traditionally served by desktop calculator makers, without the need for any disruptive entrepreneurs to show the way. Some of the same firms that had made desktop calculators transitioned smoothly to making pocket models. Many calculator makers failed, but mainly due to the shifting dynamics of semiconductor production, not from failing to see new market opportunities. To understand how this happened, we have to look at who was buying calculators, and why. The market developed in several waves, each reaching different groups of purchasers. Initial sales, in 1971 and early 1972, predominantly went to businessmen and -women, who sought relief from the daily grind of arithmetical drudgery that pervaded nearly every profession. A device in their pocket or on their desk could now give instant, accurate answers: areas, ratios, and price estimates for building contractors; interest calculations for bankers; discounts and commissions for salesmen. On top of that, every small business owner had invoices, bills, and payrolls to tote up—the large corporations had long since computerized these operations, but the likes of bodega owners, hairdressers, and roofers could not afford to do so. A key contrast with the personal computer, as will become clearer later, is that the source of demand was obvious—anyone who considered the various use cases could see that millions of people could justify a $100 or $200 purchase to automate their daily dose of arithmetic. The pocket calculator was also very simple. Unlike early PCs, the very first models already had enough functionality to satisfy the needs of a wide array of mass market buyers, and there was virtually nothing to them but a handful of microchips, a small display, and a cheap plastic case. For basic four-function calculators, all of the benefit of Moore’s Law scaling went into reducing prices, not increasing capabilities, and so prices fell very, very fast. Low prices then led to a second wave of buyers, hard on the heels of the first: ordinary middle-class households picking up calculators for themselves or for their children, often as birthday or Christmas gifts. In the spring of 1973, for example, Bowmar introduced a $59 model, “aimed specifically at housewives and students at about the junior high school level” that weighed just six ounces. 
By the following Christmas season, 10% of Americans already owned a calculator, and they sold for as little as $17, roughly a twentieth of typical prices in 1971.[16] Meanwhile, yet another group of buyers fell in love with the more advanced pocket models that followed the basic four-function models—people involved in math-intensive business (such as finance), or in science or engineering work, whether as professionals or students. Steve Wozniak, for example, “drooled” over the Hewlett-Packard HP-35 scientific calculator, released in 1972, and bought one as soon as he could despite the steep price of $395. He was not the only one: the HP-35 became an instant campus status symbol in science and engineering departments, and sold far beyond the manufacturer’s expectations. In addition to the nerdy cultural cachet it provided, this group of users also had a clear use for a more powerful calculator. Unlike its basic low-cost brethren, the HP-35 could calculate trigonometric functions, exponentials, logarithms, and more. It could also render very large or very small numbers in scientific notation. The need for such capabilities came up often in scientific work, and had previously required tedious table look-ups or the use of a slide rule, a more laborious and less precise tool than the calculator. Within a handful of years, the slide rule, once the signature accessory of the engineering set, all but disappeared.[17] HP-35 calculator [Mister rf / CC BY-SA 4.0]. Other than figuring taxes each spring and toting up simple checkbook balances, the day-to-day usefulness of the calculator to ordinary consumers was less clear. Prices had fallen so far by 1973 that middle-class families could afford to buy a calculator on a whim, and fad probably drove at least as many sales as pragmatism. As calculators proliferated by the tens of millions, schools had to decide whether to embrace them, reject them, or seek some kind of wary truce. Meanwhile, a whole sub-genre of books appeared to advise the befuddled on what to do with their new devices “after you’ve balanced your checkbook, added up your expenses, or done your math homework”: eight different general-audience books on pocket calculators appeared in 1975 alone, including Oleg D. Jefimenko’s How to Entertain with Your Pocket Calculator: Pastimes, Diversions, Games, and Magic Tricks, Len Buckwalter’s 100 Ways to Use Your Pocket Calculator, and James Rogers’ The Calculating Book: Fun and Games with Your Pocket Calculator.[18] The cover of one of many 1970s guides on the uses of the pocket calculator. It should come as no surprise that some of these authors also inhabited the very same electronic hobby community that would create the personal computer. Buckwalter, for example, wrote a book in the early 60s called Having Fun with Transistors, maintained a regular column on CB radio in the magazine Electronics Illustrated, and in 1978 would go on to publish The Home Computer Book: A Complete Guide for Beginners. Calculators fascinated electronic hobbyists, and their magazines teemed with advertisements for calculators, articles about calculators, and ideas for building, using, or modifying calculators. This hobby interest is how a small outfit like MITS got involved with calculator manufacture in the first place. Intersecting with this hobby community was a more loose-knit group of dreamers, mostly young men like Steve Wozniak, who had seen what a real computer could do and wanted to bring that power home.
For them, the pocket calculator, especially the more sophisticated scientific or programmable models, represented a step in the right direction, a powerful almost-computer that they could hold in their hands. It is to this dream of a computer to call your own that we will turn next.

ARPANET, Part 2: The Packet

By the end of 1966, Robert Taylor had set in motion a project to interlink the many computers funded by ARPA, a project inspired by the “intergalactic network” vision of J.C.R. Licklider. Taylor put the responsibility for executing that project into the capable hands of Larry Roberts. Over the following year, Roberts made several crucial decisions which would reverberate through the technical architecture and culture of ARPANET and its successors, in some cases for decades to come. The first of these in importance, though not in chronology, was to determine the mechanism by which messages would be routed from one computer to another. The Problem If computer A wants to send a message to computer B, how does the message find its way from the one to the other? In theory, one could allow any node in a communications network to communicate with any other node by linking every such pair with its own dedicated cable. To communicate with B, A would simply send a message over the outgoing cable that connects to B. Such a network is termed fully-connected. At any significant size, however, this approach quickly becomes impractical, since the number of connections necessary increases with the square of the number of nodes.1 Instead, some means is needed for routing a message, upon arrival at some intermediate node, on toward its final destination. As of the early 1960s, two basic approaches to this problem were known. The first was store-and-forward message switching. This was the approach used by the telegraph system. When a message arrived at an intermediate location, it was temporarily stored there (typically in the form of paper tape) until it could be re-transmitted out to its destination, or another switching center closer to that destination. Then the telephone appeared, and a new approach was required. A multiple-minute delay for each utterance in a telephone call to be transcribed and routed to its destination would result in an experience rather like trying to converse with someone on Mars. Instead the telephone system used circuit switching. The caller began each telephone call by sending a special message indicating whom they were trying to reach. At first this was done by speaking to a human operator, later by dialing a number which was processed by automatic switching equipment. The operator or equipment established a dedicated electric circuit between caller and callee. In the case of a long-distance call, this might take several hops through intermediate switching centers. Once this circuit was completed, the actual telephone call could begin, and that circuit was held open until one party or the other terminated the call by hanging up. The data links that would be used in ARPANET to connect time-shared computers partook of qualities of both the telegraph and the telephone. On the one hand, data messages came in discrete bursts, like the telegraph, unlike the continuous conversation of a telephone. But these messages could come in a variety of sizes for a variety of purposes, from console commands only a few characters long to large data files being transferred from one computer to another. If the latter suffered some delays in arriving at their destination, no one would particularly mind. But remote interactivity required very fast response times, rather like a telephone call. One important difference between computer data networks and both the telephone and the telegraph was the error-sensitivity of machine-processed data.
A single character in a telegram changed or lost in transmission, or a fragment of a word dropped in a telephone conversation, was unlikely to seriously impair human-to-human communication. But if noise on the line flipped a single bit from 0 to 1 in a command to a remote computer, that could entirely change the meaning of that command. Therefore every message would have to be checked for errors, and re-transmitted if any were found. Such repetition would be very costly for large messages, which would be all the more likely to be disrupted by errors, since they took longer to transmit. A solution to these problems was arrived at independently on two different occasions in the 1960s, but the later instance was the first to come to the attention of Larry Roberts and ARPA. The Encounter In the fall of 1967, Roberts arrived in Gatlinburg, Tennessee, hard by the forested peaks of the Great Smoky Mountains, to deliver a paper on ARPA’s networking plans. Almost a year into his stint at the Information Processing Techniques Office (IPTO), many areas of the network design were still hazy, among them the solution to the routing problem. Other than a vague mention of blocks and block size, the only reference to it in Roberts’ paper is in a brief and rather noncommittal passage at the very end: “It has proven necessary to hold a line which is being used intermittently to obtain the one-tenth to one second response time required for interactive work. This is very wasteful of the line and unless faster dial up times become available, message switching and concentration will be very important to network participants.”2 Evidently, Roberts had still not entirely decided whether to abandon the approach he had used in 1965 with Tom Marill, that is to say, connecting computers over the circuit-switched telephone network via an auto-dialer. Coincidentally, however, someone else was attending the same symposium with a much better thought-out idea of how to solve the problem of routing in data networks. Roger Scantlebury had crossed the Atlantic, from the British National Physical Laboratory (NPL), to present his own paper. Scantlebury took Roberts aside after hearing his talk, and told him all about something called packet-switching. It was a technique his supervisor at the NPL, Donald Davies, had developed. Davies’ story and achievements are not generally well-known in the U.S., although in the fall of 1967, Davies’ group at the NPL was at least a year ahead of ARPA in its thinking. Davies, like many early pioneers of electronic computing, had trained as a physicist. He graduated from Imperial College, London in 1943, when he was only 19 years old, and was immediately drafted into the “Tube Alloys” program – Britain’s code name for its nuclear weapons project. There he was responsible for supervising a group of human computers, using mechanical and electric calculators to crank out numerical solutions to problems in nuclear fission.3 After the war, he learned from the mathematician John Womersley about a project he was supervising out at the NPL, to build an electronic computer that would perform the same kinds of calculations at vastly greater speed. The computer, designed by Alan Turing, was called ACE, for “automatic computing engine.” Davies was sold, and got himself hired at NPL as quickly as he could. After contributing to the detailed design and construction of the ACE machine, he remained heavily involved in computing as a research leader at NPL.
He happened in 1965 to be in the United States for a professional meeting in that capacity, and used the occasion to visit several major time-sharing sites to see what all the buzz was about. In the British computing community time-sharing in the American sense of sharing a computer interactively among multiple users was unknown. Instead, time-sharing meant splitting a computer’s workload across multiple batch-processing programs (to allow, for example, one program to proceed while another was blocked reading from a tape).4 Davies’ travels took him to Project MAC at MIT, RAND Corporation’s JOSS Project in California, and the Dartmouth Time-Sharing System in New Hampshire. On the way home one of his colleagues suggested they hold a seminar on time-sharing to inform the British computing community about the new techniques that they had learned about in the U.S. Davies agreed, and played host to a number of major figures in American computing, among them Fernando Corbató (creator of the Compatible Time-Sharing System at MIT), and Larry Roberts himself. During the seminar (or perhaps immediately after), Davies was struck with the notion that the time-sharing philosophy could be applied to the links between computers, as well as to the computers themselves. Time-sharing computers gave each user a small time slice of the processor before switching to the next, giving each user the illusion of an interactive computer at their fingertips. Likewise, by slicing up each message into standard-sized pieces which Davies called “packets,” a single communications channel could be shared by multiple computers or multiple users of a single computer. And moreover, this would address all the aspects of data communication that were poorly served by telephone- or telegraph-style switching. A user engaged interactively at a terminal, sending short commands and receiving short responses, would not have their single-packet messages blocked behind a large file transfer, since that transfer would be broken into many packets. And any corruption in such large messages would only affect a single packet, which could easily be re-transmitted to complete the message. Davies wrote up his ideas in an unpublished 1966 paper, entitled “Proposal for a Digital Communication Network.” The most advanced telephone networks were then on the verge of computerizing their switching systems, and Davies proposed building packet-switching into that next-generation telephone network, thereby creating a single wide-band communications network that could serve a wide variety of uses, from ordinary telephone calls to remote computer access. By this time Davies had been promoted to Superintendent of NPL, and he formed a data communications group under Scantlebury to flesh out his design and build a working demonstration. Over the year leading up to the Gatlinburg conference, Scantlebury’s team had thus worked out details of how to build a packet-switching network. The failure of a switching node could be dealt with by adaptive routing with multiple paths to the destination, and the failure of an individual packet by re-transmission. Simulation and analysis indicated an optimal packet size of around 1000 bytes – much smaller and the loss of bandwidth from the header metadata required on each packet became too costly, much larger and the response times for interactive users would be impaired too often by large messages. 
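The flavor of that trade-off can be conveyed with a toy calculation. The numbers below are illustrative assumptions, not the NPL team’s figures: a fixed header on every packet wastes a larger share of the line as packets shrink, while an interactive user’s short message may have to wait behind one full-sized packet of someone else’s file transfer, a delay that grows with packet size.

```python
# A toy illustration (not the NPL analysis) of the packet-size trade-off.
# Assumed numbers: a 50 kbit/s line and a 40-byte per-packet header.

LINE_RATE = 50_000 / 8   # line speed in bytes per second
HEADER = 40              # hypothetical per-packet header, in bytes

def overhead_fraction(packet_size):
    """Share of the line's capacity consumed by headers."""
    return HEADER / packet_size

def worst_case_wait(packet_size):
    """Seconds an interactive packet waits for one full packet ahead of it."""
    return packet_size / LINE_RATE

for size in (128, 1_000, 10_000):   # packet sizes in bytes
    print(f"{size:>6} B packets: {overhead_fraction(size):5.1%} header overhead, "
          f"{worst_case_wait(size) * 1000:7.1f} ms worst-case wait")
```

Small packets squander the line on headers; very large ones push the wait for interactive traffic past the one-tenth to one second response target mentioned in Roberts’ paper; something on the order of a thousand bytes sits comfortably between the two.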
The paper delivered by Scantlebury contained such details as a packet layout format and an analysis of the effect of packet size on network delay. Meanwhile, Davies’ and Scantlebury’s literature search turned up a series of detailed research papers by an American who had come up with roughly the same idea, several years earlier. Paul Baran, an electrical engineer at RAND Corporation, had not been thinking at all about the needs of time-sharing computer users, however. RAND was a Defense Department-sponsored think tank in Santa Monica, California, created in the aftermath of World War II to carry out long-range planning and analysis of strategic problems in advance of direct military needs. (System Development Corporation (SDC), the primary software contractor to the SAGE system and the site of one of the first networking experiments, as discussed in the last segment, had been spun off from RAND.) Baran’s goal was to ward off nuclear war by building a highly robust military communications net, which could survive even a major nuclear attack. Such a network would make a Soviet preemptive strike less attractive, since it would be very hard to knock out America’s ability to respond by hitting a few key nerve centers. To that end, Baran proposed a system that would break messages into what he called message blocks, which could be independently routed across a highly-redundant mesh of communications nodes, only to be reassembled at their final destination. ARPA had access to Baran’s voluminous RAND reports, but disconnected as they were from the context of interactive computing, their relevance to ARPANET was not obvious. Roberts and Taylor seem never to have taken notice of them. Instead, in one chance encounter, Scantlebury had provided everything to Roberts on a platter: a well-considered switching mechanism, its applicability to the problem of interactive computer networks, the RAND reference material, and even the name “packet.” The NPL’s work also convinced Roberts that higher speeds would be needed than he had contemplated to get good throughput, and so he upgraded his plans to 50 kilobits-per-second lines. For ARPANET, the fundamentals of the routing problem had been solved.5 The Networks That Weren’t As we have seen, not one, but two parties beat ARPA to the punch on figuring out packet-switching, a technique that has proved so effective that it is now the basis of virtually all communications. Why, then, was ARPANET the first significant network to actually make use of it? The answer is fundamentally institutional. ARPA had no official mandate to build a communications network, but it did have a large number of pre-existing research sites with computers, a “loose” culture with relatively little oversight of small departments like the IPTO, and piles and piles of money. Taylor’s initial 1966 request for ARPANET came to $1 million, and Roberts continued to spend that much or more every year from 1969 onward to build and operate the network.6 Yet for ARPA as a whole this amount of money was pocket change, and so none of his superiors worried too much about what Roberts was doing with it, so long as it could be vaguely justified as related to national defense. By contrast, Baran at RAND had no means or authority to actually do anything. His work was pure research and analysis, which might be applied by the military services, if they desired to do so. In 1965, RAND did recommend his system to the Air Force, which agreed that Baran’s design was viable.
But the implementation fell within the purview of the Defense Communications Agency, which had no real understanding of digital communications. Baran convinced his superiors at RAND that it would be better to withdraw the proposal than allow a botched implementation to sully the reputation of distributed digital communication. Davies, as Superintendent of the NPL, had rather more executive authority than Baran, but a more limited budget than ARPA, and no pre-existing social and technical network of research computer sites. He was able to build a prototype local packet-switching “network” (it had only one node, but many terminals) at NPL in the late 1960s, with a modest budget of £120,000 over three years.7 ARPANET spent roughly half that on annual operational and maintenance costs alone at each of its many network sites, excluding the initial investment in hardware and software.8 The organization that would have had the power to build a large-scale British packet-switching network was the Post Office, which operated the country’s telecommunications networks in addition to its traditional postal system. Davies managed to interest a few influential Post Office officials in his ideas for a unified, national digital network, but to change the momentum of such a large system was beyond his power. Licklider, through a combination of luck and planning, had found the perfect hothouse for his intergalactic network to blossom in. That is not to say that everything except for the packet-switching concept was a mere matter of money. Execution matters, too. Moreover, several other important design decisions defined the character of ARPANET. The next one we will consider is how responsibilities would be divided between the host computers sending and receiving a message and the network over which they sent it.

Further Reading

Janet Abbate, Inventing the Internet (1999)
Katie Hafner and Matthew Lyon, Where Wizards Stay Up Late (1996)
Leonard Kleinrock, “An Early History of the Internet,” IEEE Communications Magazine (August 2010)
Arthur Norberg and Julie O’Neill, Transforming Computer Technology: Information Processing for the Pentagon, 1962-1986 (1996)
M. Mitchell Waldrop, The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal (2001)

ARPANET, Part 1: The Inception

By the mid-1960s, the first time-sharing systems had already recapitulated the early history of the first telephone exchanges. Entrepreneurs built those exchanges as a means to allow subscribers to summon services such as a taxi, a doctor, or the fire brigade. But those subscribers soon found their local exchange just as useful for communicating and socializing with each other1. Likewise time-sharing systems, initially created to allow their users to “summon” computer power, had become communal switchboards with built-in messaging services2. In the decade to follow, computers would follow the next stage in the history of the telephone – the interconnection of exchanges to form regional and long-distance networks. The Ur-Network The first attempt to actually connect multiple computers into a larger whole was the ur-project of interactive computing itself, the SAGE air defense system. Because each of the twenty-three SAGE direction centers covered a particular geographical area, some mechanism was needed for handing off radar tracks from one center to another when incoming aircraft crossed a boundary between those areas. The SAGE designers dubbed this problem “cross-telling,” and they solved it by building data links on dedicated AT&T phone lines among all the neighboring direction centers. Ronald Enticknap, part of a small Royal Air Force delegation to SAGE, oversaw the design and implementation of this subsystem. Unfortunately, I have found no detailed description of the cross-telling function, but evidently each direction center computer determined when a track was crossing into another sector and sent its record over the phone line to that sector’s computer, where it could be picked up by an operator monitoring a terminal there3. The SAGE system’s need to translate digital data into an analog signal over the phone line (and then back again at the receiving station) occasioned AT&T to develop the Bell 101 “dataset”, which could deliver a modest 110 bits per second. This kind of device was later called a “modem”, for its ability to modulate the analog telephone signal using an outgoing series of digital data, and demodulate the bits from the incoming wave form. SAGE  thus laid some important technical groundwork for later computer networks. The first computer network of lasting significance, however, is one whose name is well known even today: ARPANET. Unlike SAGE, it connected a diverse set of time-shared and batch-processing hardware each with its own custom software, and was intended to be open-ended in scope and function, fulfilling whatever purposes users might desire of it. ARPA’s section for computer research – the Information Processing Techniques Office (IPTO) –  funded the project under the direction of Robert Taylor, but the idea for such a network sprang from the imagination of that office’s first director, J.C.R. Licklider. The Vision As we learned earlier, Licklider, known to his colleagues as ‘Lick,’ was a psychologist by training. But he became entranced with interactive computing while working on radar systems at Lincoln Laboratory in the late 1950s. This passion led him to fund some of the first experiments in time-shared computing when he became the director of the newly-formed IPTO, a position he took in 1962. By that time, he was already looking ahead to the possibility of linking isolated interactive computers together into a larger superstructure. 
In his 1960 paper on “man-computer symbiosis”, he wrote that “[i]t seems reasonable to envision …a ‘thinking center’ that will incorporate the functions of present-day libraries together with anticipated advances in information storage and retrieval and the symbiotic functions suggested earlier in this paper. The picture readily enlarges itself into a network of such centers, connected to one another by wide-band communication lines and to individual users by leased-wire services.” Just as the TX-2 had kindled Licklider’s excitement over interactive computing, it may have been the SAGE computer network that prompted Licklider to imagine that a variety of interactive computing centers could be connected together to provide a kind of telephone network for intellectual services. Whatever its exact origin, Licklider began disseminating this vision among the community of researchers that he had created at IPTO, most famously in his memo of April 23, 1963, directed to the “Members and Affiliates of the Intergalactic Computer Network,” that is to say the various researchers receiving IPTO funding for time-sharing and other computing projects. The memo is rambling and shambolic, evidently dictated on the fly with little to no editorial revision. Determining exactly what Licklider intended it to say about computer networks therefore requires some speculative inference. But several significant clues stand out. First, Licklider revealed that he saw the “various activities” funded by IPTO as in fact belonging to a single “overall enterprise.” He followed this pronouncement by discussing the need to allocate money and projects to maximize the advantage accruing to that enterprise – the network of researchers as a whole – given that, “to make progress, each of the active researchers needs a software base and a hardware facility more complex and more extensive than he, himself, can create in reasonable time.” Achieving this global efficiency might, Licklider conceded, require some individual concessions and sacrifices by certain parties. Then Licklider began to explicitly discuss computer (rather than social) networks. He wrote of the need for some sort of network control language (what would later be called a protocol) and his desire to eventually see an IPTO computer network consisting of “…at least four large computers, perhaps six or eight small computers, and a great assortment of disc files and magnetic tape units–not to mention the remote consoles and teletype stations…” Finally, he spent several pages laying out a concrete example of how a future interaction with such a computer network might play out. Licklider imagines a situation where he is running an analysis on some experimental data. “The trouble is,” he writes, “I do not have a good grid-plotting program. …Is there a suitable grid-plotting program anywhere in the system? Using prevailing network doctrine, I interrogate first the local facility, and then other centers.
Let us suppose that I am working at SDC, and that I find a program that looks suitable on a disc file in Berkeley.” He asks the network to execute this program for him, assuming that, “[w]ith a sophisticated network-control system, I would not decide whether to send the data and have them worked on by programs somewhere else, or bring in programs and have them work on my data.” Taken together, these fragments of thought appear to reveal a larger scheme in Licklider’s mind: first, to parcel out particular specialties and areas of expertise among IPTO-funded researchers, and then to build beneath that social community a physical network of IPTO computers. This physical instantiation of IPTO’s “overall enterprise” would allow researchers to share in and benefit from the specialized hardware and software resources at each site. Thus IPTO would avoid wasteful duplication while amplifying the power of each funding dollar by allowing every researcher to access the full spectrum of computing capabilities across all of IPTO’s projects. This idea, of resource-sharing among the research community via a communications network, sowed the seeds within IPTO that led, several years later, to the creation of ARPANET. Despite its military provenance, originating as it did in the halls of the Pentagon, ARPANET thus had no real military justification. It is sometimes said that the network was designed as a war-hardened communications network, capable of surviving a first-strike nuclear attack. There is a loose connection, as we’ll see later, between ARPANET and an earlier project with that aim, and ARPA’s leaders occasionally trotted out the “hardened systems” idea to justify their network’s existence before Congress or the Secretary of Defense. But in truth, IPTO built ARPANET purely for its own internal purposes, to support its community of researchers – most of whom themselves lacked any direct defense justification for their activities. Meanwhile, by the time of his famous memo Licklider had already begun planning the germ of his intergalactic network, to be led by Len Kleinrock at UCLA. The Precursors Kleinrock, the son of working class immigrants from Eastern Europe, grew up in Manhattan in the shadow of the George Washington Bridge. He worked his way through school, taking evening sessions at City College to study electrical engineering. When he heard about a fellowship opportunity for graduate study at MIT, capped by a semester of full time work at Lincoln Lab, he jumped at the opportunity. Though built to serve the needs of SAGE, Lincoln had since diversified into many other research projects, often tangentially related to air defense, at best. Among them was the Barnstable Study, a concept floated by the Air Force to create an orbital belt of metallic strips (similar to chaff) to use as reflectors for a global communication system4. Kleinrock had fallen under the spell of Claude Shannon at MIT, and so decided to focus his graduate work on the theory of communication networks. The Barnstable Study provided Kleinrock with his first opportunity to apply the tools of information and queuing theory to a data network, and he extended that analysis into a full dissertation on “communications nets,” combining his mathematical analysis with empirical data gathered by running simulations on Lincoln’s TX-2 computers. Among Kleinrock’s close colleagues at Lincoln, sharing time with him in front of the TX-2, were Larry Roberts and Ivan Sutherland, whom we will meet again shortly. 
By 1963, Kleinrock had accepted a position at UCLA, and Licklider saw an opportunity – here he had an expert in data networking at a site with three local computer centers: the main computation center, the health sciences computer center, and the Western Data Processing Center (a cooperative of thirty institutions with shared access to an IBM computer). Moreover, six of the Western Data Processing Center institutions had remote connections to the computer by modem, and the IPTO-sponsored System Development Corporation (SDC) computer resided just a few miles away in Santa Monica. IPTO issued a contract to UCLA to interconnect these four centers, as a first experiment in computer networking. Later, according to the plan, a connection with Berkeley would tackle the problems inherent in a longer-range data connection. Despite the promising situation, the project foundered and the network was never built. The directors of the different UCLA centers didn’t trust one another, nor did they fully believe in the project, and they refused to cede control over their computing resources to one another’s users. IPTO had little leverage to influence the situation, since none of the UCLA computing centers were funded directly by ARPA5. IPTO’s second try at networking proved more successful, perhaps because it was significantly more limited in scope – a mere experimental trial rather than a pilot plant. In 1965, a psychologist and disciple of Licklider’s named Tom Marill left Lincoln Lab to try to profit from the excitement around interactive computing by starting his own time-sharing business. Lacking much in the way of actual paying customers, however, he began casting about for other sources of income, and thus proposed that IPTO fund him to carry out a study of computer networking. IPTO’s new director, Ivan Sutherland, decided to bring a larger and more reputable partner on board as ballast, and so sub-contracted the work to Marill’s company via Lincoln Lab. Heading things from the Lincoln side would be another of Kleinrock’s old office-mates, Lawrence (Larry) Roberts. Roberts had cut his teeth on the Lincoln-built TX-0 as an undergrad at MIT. He spent hours each day entranced before the glowing console screen, eventually constructing a program to (badly) recognize written characters using neural nets. Like Kleinrock he ended up working at Lincoln for his graduate studies, solving computer graphics and computer vision problems, such as edge-detection and three-dimensional rendering, on the larger and more powerful TX-2. Up until late 1964, Roberts had remained entirely focused on his imaging research. Then he came across Lick. In November of that year, he attended an Air Force-sponsored conference on the future of computing at the Homestead hot springs resort in western Virginia. There he talked late into the night with his fellow conference participants, and for the first time heard Lick expound on his idea for an Intergalactic Network. Roberts began to feel a tickle at the back of his brain – he had done great work on computer graphics, but it was in effect trapped on the one-of-a-kind TX-2. No one else could use his software, even if he had a way to provide it to them, because no one else had equivalent hardware to run it on. The only way to extend the influence of his work was to report on it in academic papers in the hopes that others would and could replicate it elsewhere. Licklider was right, he decided: a network was exactly the next step needed to accelerate computing research.
And so Roberts found himself working with Marill, trying to connect the Lincoln TX-2 with a cross-country link to the SDC computer in Santa Monica, California. In an experimental design that could have been ripped straight from Licklider’s “Intergalactic Network” memo, they planned to have the TX-2 pause in the middle of a computation, use an automatic dialer to remotely call the SDC Q-32, invoke a matrix multiply program on that computer, and then continue the original computation with the answer. Setting aside the dubious sensibility of using dearly-bought cutting-edge technology to span a continent in order to use a basic math routine, the whole process was painfully slow due to the use of the dial telephone network. To make a telephone call required setting up a dedicated circuit between the caller and recipient, usually routed through several different switching centers. As of 1965, virtually all of these were electro-mechanical6. Magnets shifted metal bars from one place to another in order to complete each step of the circuit. This whole process took several seconds, during which time the TX-2 could only sit idle and wait. Moreover the lines, though perfectly suited for voice conversation, were noisy with respect to individual bits and supported very low bandwidth (a couple hundred bits per second). A truly effective intergalactic, interactive network would require a different approach.[^others] The Marill-Roberts experiment had not shown long-distance networking to be practical or useful, merely theoretically possible. But that was enough. The Decision In the middle of 1966, Robert Taylor took over the directorship of IPTO, succeeding Ivan Sutherland as the third to hold that title. A disciple of Licklider and a fellow-psychologist, he came to IPTO by way of a position administering computer research for NASA. Nearly as soon as he arrived, Taylor seems to have decided that the time had come to make the intergalactic network a reality, and it was Taylor who launched the project that produced ARPANET. ARPA money was still flowing freely, so Taylor had no trouble securing the extra funding from his boss, Charles Herzfeld. Nonetheless, the decision carried significant risk of failure. Other than the very limited 1965 cross-country connection, no one had ever attempted anything like ARPANET. One could point to other early experiments in computer networking. For example, Princeton and Carnegie-Mellon set up a network of time-shared computers in the late 1960s in conjunction with IBM.7 The main distinction between these and the ARPA efforts was their uniformity – they used exactly the same computer system hardware and software at each site. ARPANET, on the other hand, would be bound to deal with diversity. By the mid-1960s, IPTO was funding well over a dozen sites, each with its own computer, and each of those computers had a different hardware design and operating software. The ability to share software was rare even among different models from a single manufacturer – only the brand-new IBM System/360 product line had attempted this feat. This diversity of systems was a risk that added a great deal of technical complexity to the network design, but also an opportunity for Licklider-style resource sharing. The University of Illinois, for example, was in the midst of construction on the massive, ARPA-funded ILLIAC IV supercomputer. It seemed improbable to Taylor that the local users at Urbana-Champaign could fully utilize this huge machine.
Even sites with systems of more modest scale – the TX-2 at Lincoln and the Sigma-7 at UCLA, for example – could not normally share software due to their basic incompatibilities. The ability to overcome this limitation by directly accessing the software at one site from another was attractive. In the paper describing their networking experiment, Marill and Roberts had suggested that this kind of resource sharing would produce something akin to Ricardian comparative advantage among computing sites: The establishment of a network may lead to a certain amount of specialization among the cooperating installations. If a given installation, X, by reason of special software or hardware, is particularly adept at matrix inversion, for example, one may expect that users at other installations in the network will exploit this capability by inverting their matrices at X in preference to doing so on their home computers.[^ricardo] Taylor had one further motivation for proceeding with a resource-sharing network. Purchasing a new computer for each new IPTO site, with all the capabilities that might be required by the researchers at that site, had proven expensive, and as one site after another was added to IPTO’s portfolio, the budget for each was becoming thinly stretched. By putting all the IPTO-funded systems onto a single network, it might be possible to supply new grantees with more limited computers, or perhaps even none at all. They could draw whatever computer power they needed from a remote site with excess capacity, the network as a whole acting as a communal reservoir of hardware and software. Having launched the project and secured its funding, Taylor’s last notable contribution to ARPANET was to select someone to actually design the system and see it through to completion. Roberts was the obvious choice. His engineering bona fides were impeccable, he was already a respected member of the IPTO research community, and he was one of a handful of people with hands-on experience designing and building a long-distance computer network. So in the fall of 1966, Taylor called Roberts to ask him to come down from Massachusetts to work for ARPA in Washington. But Roberts proved difficult to entice. Many of the IPTO principal investigators cast a skeptical eye on the reign of Robert Taylor, whom they viewed as something of a lightweight. Yes, Licklider had been a psychologist too, with no real engineering chops, but at least he had a doctorate, and a certain credibility earned as one of the founding fathers of interactive computing. Taylor was an unknown with a mere master’s degree. How could he oversee the complex technical work going on within the IPTO community? Roberts counted himself among these skeptics. But a combination of stick and carrot did its work. On the one hand Taylor exerted a certain pressure on Roberts’ boss at Lincoln, reminding him that a substantial portion of his lab’s funding now came from ARPA, and that it would behoove him to encourage Roberts to see the value in the opportunity on offer. On the other hand, Taylor offered Roberts the newly-minted title of “Chief Scientist”, a position that would report over Taylor’s head directly to a Deputy Director of ARPA, and mark Roberts as Taylor’s successor to the directorship. On these terms Roberts agreed to take on the ARPANET project.8 The time had come to turn the vision of resource-sharing into reality.
Further Reading

Janet Abbate, Inventing the Internet (1999)
Katie Hafner and Matthew Lyon, Where Wizards Stay Up Late (1996)
Arthur Norberg and Julie O’Neill, Transforming Computer Technology: Information Processing for the Pentagon, 1962-1986 (1996)
M. Mitchell Waldrop, The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal (2001)

Coda: Steam’s Last Stand

In the year 1900, automobile sales in the United States were divided almost evenly among three types of vehicles: automakers sold about 1,000 cars powered by internal combustion engines, but over 1,600 powered by steam engines, and almost as many by batteries and electric motors. Throughout all of living memory (at least until the very recent rise of electric vehicles), the car and the combustion engine have gone hand in hand, inseparable. Yet, in 1900, this type claimed the smallest share. For historians of technology, this is the most tantalizing fact in the history of the automobile, perhaps the most tantalizing fact in the history of the industrial age. It suggests a multiverse of possibility, a garden of forking, ghostly might-have-beens. It suggests that, perhaps, had this unstable equilibrium tipped in a different direction, many of the negative externalities of the automobile age—smog, the acceleration of global warming, suburban sprawl—might have been averted. It invites the question, why did combustion win? Many books and articles, by both amateur and professional historians, have been written to attempt to answer this question. However, since the electric car, interesting as its history certainly is, has little to tell us about the age of steam, we will consider here a narrower question—why did steam lose? The steam car was an inflection point where steam power, for so long an engine driving technological progress forward, instead yielded the right-of-way to a brash newcomer. Steam began to look like a relic of the past, reduced to watching from the shoulder as the future rushed by. For two centuries, steam strode confidently into one new domain after another: mines, factories, steamboats, railroads, steamships, electricity. Why did it falter at the steam car, after such a promising start? The Emergence of the Steam Car Though Germany had given birth to experimental automobiles in the 1880s, the motor car first took off as a successful industry in France. Even Benz, the one German maker to see any success in the early 1890s, sold the majority of its cars and motor-tricycles to French buyers. This was in large part due to the excellent quality of French cross-country roads – though mostly gravel rather than asphalt, they were financed by taxes and overseen by civil engineers, and well above the typical European or American standard of the time. These roads…made it easier for businessmen [in France] to envisage a substantial market for cars… They inspired early producers to publicize their cars by intercity demonstrations and races. And they made cars more practical for residents of rural areas and small towns.[1] The first successful motor car business arose in Paris, in the early 1890s. Émile Levassor and René Panhard (both graduates of the École centrale des arts et manufactures, an engineering institute in Paris) met as managers at a machine shop that made woodworking and metal-working tools. They became the leading partners of the firm and took it into auto making after becoming licensees for the Daimler engine. The 1894 Panhard & Levassor Phaeton already shows the beginning of the shift from horseless carriages with an engine under the seats to the modern car layout with a forward engine compartment. [Jörgens.mi / CC BY-SA 3.0] Before making cars themselves, they looked for other buyers for their licensed engines, which led them to a bicycle maker near the Swiss border, Peugeot Frères Aînés, headed by Armand Peugeot.
Though bicycles seem very far removed from cars today, they made many contributions to the early growth of the auto industry. The 1880s bicycle boom (stimulated by the invention of the chain-driven “safety” bicycle) seeded expertise in the construction of high-speed road vehicles with ball bearings and tubular metal frames. Many early cars resembled bicycles with an additional wheel or two, and chain drives for powering the rear wheels remained popular throughout the first few decades of automobile development. Cycling groups also became very effective lobbyists for the construction of smooth cross-country roads on which to ride their machines, literally paving the way for the cars to come.[2] Armand Peugeot decided to purchase Daimler engines from Panhard et Levassor and make cars himself. So, already by 1890 there were two French firms making cars with combustion engines. But French designers had not altogether neglected the possibility of running steam vehicles on ordinary roads. In fact, before ever ordering a Daimler engine, Peugeot had worked on a steam tricycle with the man who would prove to be the most persistent partisan of steam cars in France, Léon Serpollet. A steam-powered road vehicle was not, by 1890, a novel idea. It had been proposed countless times, even before the rise of steam locomotives: James Watt himself had first developed an interest in engines, all the way back in the 1750s, after his friend John Robison suggested building a steam carriage. But those who had tried to put the idea into practice had always found the result wanting. Among the problems were the bulk and weight of the engine and all its paraphernalia (boiler, furnace, coal), the difficulty of maintaining a stoked furnace and controlling steam levels (including preventing the risk of boiler explosion), and the complexity of operating the engine. The only kinds of steam road vehicles to find any success were those that inherently required a lot of weight, bulk, and specialized training to operate—fire engines and steamrollers—and even those only appeared in the second half of the nineteenth century.[3] Consider Serpollet’s immediate predecessor in steam carriage building, the debauched playboy Comte Albert de Dion. He commissioned two toymakers, Georges Bouton and Charles Trépardoux, to make several small steam cars in the 1880s. These coal-fueled machines took thirty minutes or more to build up a head of steam. In 1894 a larger De Dion steam tractor finished first in one of the many cross-country auto races that had begun to spring up to help carmakers promote their vehicles. But the judges disqualified Dion’s vehicle on account of its impracticality: requiring both a driver and a stoker for its furnace, it was in a very literal sense a road locomotive. A discouraged Comte de Dion gave up the steam business, but De Dion-Bouton went on to be a successful maker of combustion automobiles and automobile engines.[4] This De Dion-Bouton steam tractor was disqualified from an auto race in 1894 as impractical. Coincidentally enough, Léon Serpollet and his brother Henri were, like Panhard and Levassor, makers of woodworking machines, and like Peugeot, they came from the Swiss borderlands in east-central France. Also like Panhard and Levassor, Léon studied engineering in Paris, in his case at the Conservatoire national des arts et métiers.
But by the time he reached Paris, he and his brother had already concocted the invention that would lead them to the steam car: a “flash” boiler that instantly turned water to steam by passing it through a hot metal tube. This would allow the vehicle to start more quickly (though it still took time to heat the tube before the boiler could be used) and also alleviate safety concerns about a boiler explosion. The most important step toward the (relative) success of the Serpollets’ vehicles, however, came when they replaced the traditional coal furnace with a burner for liquid, petroleum-based fuel. This went a long way towards removing the most disqualifying objections to the practicality of steam cars. Kerosene or gasoline weighed less and took up less space than an energy-equivalent amount of coal, and an operator could more easily throttle a liquid-fuel burner (by supplying it with more or less fuel) to control the level of steam. A 1902 Gardner-Serpollet steam car. With early investments from Peugeot and a later infusion of cash from Frank Gardner, an American with a mining fortune, the Serpollets built a business, first selling steam buses in Paris, then turning to small cars. Their steam powerplants generated more power than the combustion vehicles of the time, and Léon promoted them by setting speed records. In 1902, he surpassed seventy-five miles-per-hour along the promenade in Nice. At that time, a Gardner-Serpollet factory in eastern Paris was turning out about 100 cars per year. Though these were impressive numbers by the standards of the 1890s, this was already becoming small potatoes. In 1901, 7,600 cars were produced in France, and 14,000 in 1903; the growing market left Gardner-Serpollet behind as a niche producer. Léon Serpollet made one last pivot back to buses, then died of cancer in 1907 at age forty-eight. The French steam car did not survive him.[5] Unlike in the U.S., steam car sales barely took off in France, and never reached parity with the total sales of combustion engine cars from the likes of Panhard et Levassor, Peugeot, and many other makes. There was no moment of balance when it appeared that the future of automotive technology was up for grabs. Why this difference? We’ll have more to say about that later, after we consider the American side of the story. The Acme of the Steam Car Automobile production in the United States lagged roughly five years behind France, and so it was in 1896 that the first small manufacturers began to appear. Charles and Frank Duryea (bicycle makers, again) were first off the block. Inspired by an article about Benz’ car, they built their own combustion-engine machine in 1893, and, after winning several races, they began selling vehicles commercially out of Peoria, Illinois, in 1896. Several other competitors quickly followed.[6] Steam car manufacturing came slightly later, with the Whitney Motor Wagon Company and the Stanley brothers, both in the Boston area. The Stanleys, twins named Francis and Freelan (or F.E. and F.O.), were successful manufacturers of photographic dry plates, which used a dry emulsion that could be stored indefinitely before use, unlike earlier “wet” plates. They fell into the automobile business by accident, in a similar way to many others—by successfully demonstrating a car they had constructed as a hobby, drawing attention and orders. At an exhibition at the Charles River Park Velodrome in Cambridge, F.E.
zipped around the field and up an eighty-foot ramp, demonstrating greater speed and power than any other vehicle present, including an imported combustion-engine De Dion tricycle, which could only climb the ramp halfway.[7] The Stanley brothers mounted in their 1897 steam car. The rights to the Stanley design, through a complex series of business details, ended up in the possession of Amzi Barber, the “Asphalt King,” who used tar from Trinidad’s Pitch Lake to pave several square miles’ worth of roads across the U.S.[8] It was Barber’s automobiles, sold under the Locomobile brand, that formed the plurality of the 1,600 steam cars sold in the U.S. in 1900: the company sold 5,000 total between 1899 and 1902, at the quite-reasonable price of $600. Locomobiles were quiet and smooth in operation, produced little smoke or odor (though they did breathe great clouds of steam), had the torque required to accelerate rapidly and climb hills, and could smoothly accelerate by simply increasing the speed of the piston, without any shifting of gears. The rattling, smoky, single-cylinder engines of their combustion-powered competitors had none of these qualities.[9] Why, then, did the steam car market begin to collapse after 1902? Twenty-seven makes of steam car first appeared in the U.S. in 1899 or 1900, mostly concentrated (like the Locomobile) in the Northeast—New York, Pennsylvania, and (especially) Massachusetts. Of those, only twelve continued making steam cars beyond 1902, and only one—the Lane Motor Vehicle Company of Poughkeepsie, New York—lasted beyond 1905. By that year, the Madison Square Garden car show had 219 combustion models on display, as compared to only twenty electric and nine steam.[10] Barber, the Asphalt King, was interested in cars, regardless of what made them go. As the market shifted to combustion, so did he, abandoning steam at the height of his own sales in 1902. But the Stanleys loved their steamers. Their contractual obligations to Barber being discharged in 1901, they went back into business on their own. One of the longest-lasting holdouts, Stanley sold cars well into the 1920s (even after the death of Francis in a car accident in 1918), and the name became synonymous with steam. For that reason, one might be tempted to ascribe the death of the steam car to some individual failing of the Stanleys: “Yankee Tinkerers,” they remained committed to craft manufacturing and did not adopt the mass-production “Fordist” methods of Detroit. Already wealthy from their dry plate business, they did not commit themselves fully to the automobile, allowing themselves to be distracted by other hobbies, such as building a hotel in Colorado so that people could film scary movies there.[11] Some of the internal machinery of a late-model Stanley steamer: the boiler at top left, burner at center left, engine at top right, and engine cutaway at bottom right. [Stanley W. Ellis, Smogless Days: Adventures in Ten Stanley Steamers (Berkeley: Howell-North Books, 1971), 22] But, as we have seen, there were dozens of steam car makers, just as there were dozens of makers of combustion cars; no idiosyncrasies of the Stanley psychology or business model can explain the entire market’s shift from one form of power train to another—if anything it was the peculiar psychology of the Stanleys that kept them making steam cars at all, rather than doing the sensible thing and shifting to combustion.
Nor did the powers that be put their finger on the scale to favor combustion engines.[12] How, then, can we explain both the precipitous rise of steam in the U.S. (as opposed to its poor showing in France) and its sudden fall? The steam car’s defects were as obvious as its advantages. Most annoying was the requirement to build up a head of steam before you could go anywhere: this took about ten minutes for the Locomobile. Whether starting or going, the controls were complex to manage. Scientific American described the “quite simple” steps required to get a Serpollet car going: “A small quantity of alcohol is used to heat the burner, which takes about five minutes; then by the small pump a pressure is made in the oil tank and the cock opened to the burner, which lights up with a blue flame, and the boiler is heated up in two or three minutes. The conductor places the clutch in the middle position, which disconnects the motor from the vehicle and regulates the motor to the starting position, then puts his foot on the admission pedal, starting the motor with the least pressure and heating the cylinders, the oil and water feed working but slightly. When the cylinders are heated, which takes but a few strokes of the piston, the clutch is thrown on the full or wean speed and the feed-pumps placed at a maximum, continuing to feed by hand until the vehicle reaches a certain speed by the automatic feed, which is then regulated as desired.”[13] Starting a combustion car of that era also required procedures long since streamlined away—cranking the engine to life, adjusting the carburetor choke and spark plug timing—but even at the time most writers considered steamers more challenging to operate. Part of the problem was that the boilers were intentionally small (to allow them to build steam quickly and reduce the risk of explosion), which meant lots of hands-on management to keep the steam level just right. Nor had the essential thermodynamic facts changed – internal combustion, operating over a larger temperature gradient, was more efficient than steam. The Model T could drive fifteen to twenty miles on a gallon of fuel; the Stanley could go only ten, not to mention its constant thirst for water, which added another “fueling” requirement.[14] The rather arcane controls of a 1912 Stanley steamer. [Ellis, Smogless Days: Adventures in Ten Stanley Steamers, 26] The steam car overcame these disadvantages to achieve its early success in the U.S. because of the delayed start of the automobile industry there. American steam car makers, starting later, skipped straight to petroleum-fueled burners, bypassing all the frustrations of dealing with a traditional coal-fueled firebox, and banishing all associations between that cumbersome appliance and the steam car. At the same time, combustion automobile builders in the U.S. were still early in their learning curve compared to those in France. A combustion engine was a more complex and temperamental machine than a steam engine, and it took time to learn how to build them well, time that gave steam (and electric) cars a chance to find a market. The builders of combustion engines, as they learned from experience, rapidly improved their designs, while steam cars improved relatively little year over year. Most importantly, they never could get up and running as quickly as a combustion engine. In one of those ironies which history graciously provides to the historian, the very impatience that the steam age had brought forth doomed its final progeny, the steam car.
It wasn’t possible to start up a steam car and immediately drive; you always had to wait for the car to be ready. And so drivers turned to the easier, more convenient alternative, to the frustration of steam enthusiasts, who complained of “[t]his strange impatience which is the peculiar quirk of the motorist, who for some reason always has been in a hurry and always has expected everything to happen immediately.”[15] Later Stanleys offered a pilot light that could be kept burning to maintain steam, but “persuading motorists, already apprehensive about the safety of boilers, to keep a pilot light burning all night in the garage proved a hard sell.”[16] It was too late, anyway. The combustion-driven automotive industry had achieved critical mass.The Afterlife of the Steam CarThe Ford Model T of 1908 is the most obvious signpost for the mass-market success of the combustion car. But for the moment that steam was left in the dust, we can look much earlier, to the Oldsmobile “curved dash,” which first appeared in 1901 and reached its peak in 1903, when 4,000 were produced, three times the total output of all steam car makers in that pivotal year of 1900. Ransom Olds, son of a blacksmith, grew up in Lansing, Michigan, and caught the automobile bug as a young man in 1887. Like many contemporaries, he built steamers at first (the easier option), but after driving a Daimler car at the 1893 Chicago World’s Fair, he got hooked on combustion. His Curved Dash (officially the Model R) still derived from the old-fashioned “horseless carriage” style of design, not yet having adopted the forward engine compartment that was already common in Europe by that time. It had a modest single-cylinder, five-horsepower engine tucked under the seats, and an equally modest top speed of twenty miles-per-hour. But it was convenient and inexpensive enough to outpace all of the steamers in sales.[17]The Oldsmobile “Curved Dash” was celebrated in song.The market for steam cars was reduced to driving enthusiasts, who celebrated its near-silent operation (excepting the hiss of the burner), the responsiveness of its low-end torque, and its smooth acceleration without any need for clunky gear-shifting. (There is another irony in the fact that late-twentieth century driving enthusiasts, disgusted by the laziness of automatic transmissions, would celebrate the hands-on responsiveness of manual shifters.) The steam partisan was offended by the unnecessary complexity of the combustion automobile. They liked to point out how few moving parts the steam car had.[18] To imagine the triumph of steam is to imagine a world in which the car remained an expensive hobby for this type of car enthusiast.Several entrepreneurs tried to revive the steamer over the years, most notably the Doble brothers, who brought their steam car enterprise to Detroit in 1915, intent on competing head-to-head with combustion. They strove to make a car that was as convenient as possible to use, with a condenser to conserve water, key-start ignition, simplified controls, and a very fast-starting boiler.But, meanwhile, car builders were steadily scratching off all of the advantages of steam within the framework of the combustion car. Steam cars, like electric cars, did not require the strenuous physical effort to get running that early, crank-started combustion engines did. 
But by the second decade of the twentieth century, car makers solved this problem by putting a tiny electric car powertrain (battery and motor) inside every combustion vehicle, to bootstrap the starting of the engine. Steam cars offered a smoother, quieter ride than the early combustion rattletraps, but more precisely machined, multi-cylinder engines with anti-knock fuel canceled out this advantage (the severe downsides of lead as an anti-knock agent were not widely recognized until much later). Steam cars could accelerate smoothly without the need to shift gears, but then car makers created automatic transmissions. In the 1970s, several books advocated a return to the lower-emissions burners of steam cars for environmental reasons, but then car makers adopted the catalytic converter.[19]It’s not that a steam car was impossible, but that it was unnecessary. Every year more and more knowledge and capital flowed into the combustion status quo, the cost of switching increased, and no sufficiently convincing reason to do so ever appeared. The failure of the steam car was not due to accident, not due to conspiracy, and certainly not due to any individual failure of the Stanleys, but due to the expansion of auto sales to people who cared more about getting somewhere than about the machine that got them there. Impatient people, born, ironically, of the steam age.

Read more
The Speaking Telegraph

[Previous Part] The telephone was an accident. Whereas the telegraph networks of the 1840s emerged out of a century-long search for the means to communicate by electricity, men only stumbled over the telephone while searching for a better telegraph. For this reason, it is easier to pin down a plausible, though not incontrovertible, date for the invention of the telephone – the American centennial year of 1876. This is not to say that the centennial telephone was without precursors. From the 1830s onward, scientific investigators were exploring ways to turn sound into electricity, and electricity into sound.  Electrical Sound In 1837, Charles Page, a Massachusetts physician and electrical experimenter, discovered an odd phenomenon. He placed an insulated helical wire between the arms of a permanent magnet, then placed each end of that wire into cups of mercury connected to a battery circuit. Each time he opened or closed the circuit by lifting one wire from its cup or dropping it back in, the magnet emitted a tone audible from as much as three feet away. Page called this galvanic music, and theorized that it was caused by a “molecular derangement” in the magnet.1 Page set off a wave of scientific investigations into two aspects of his discovery: the underlying and rather bizarre fact that ferrous materials change their shape when magnetized, and the more overt phenomenon, the production of sound by electricity.2 For the purposes of our story, two of these investigations are especially interesting. First were those of Philip Reis. Reis taught math and science to school boys at Garnier’s Institute, outside Frankfurt, but spent his spare time on his own electrical researches. Several electricians had by this time composed new variations on galvanic music, but Reis was the first to master the dual alchemy of translating sound into electricity and then back again. Reis realized that a diaphragm like the human eardrum could be caused to make and break an electric circuit as it vibrated. The first prototype of his telephon (far-speaker), built in 1860, consisted of a carved wooden ‘ear’ with a membrane made from a pig’s bladder stretched across it. Attached to the underside of the membrane was a platinum lead that would open and close a battery circuit as it vibrated. The receiver was a coil of wire wound about a knitting needle, sitting atop a violin. The violin’s body amplified the vibration from the changing shape of the needle as it was magnetized and demagnetized.3 A late-model Reis telephon Reis made numerous refinements to this early prototype, and he and others who tried it found that by singing or humming into it, they could transmit recognizable tunes. Words presented more difficulty, and often came across garbled or indistinct. Many of the reports of successful voice transmission involve simple commonplaces such as “good morning” or “how are you”, which might have easily been guessed. The fundamental problem was that Reis’ transmitter only opened and closed the circuit, it did not vary its strength. This allowed it to reproduce the frequency of a sound but at a fixed amplitude, which could not model all the subtleties of the human voice.4 Reis thought his work deserving of serious scientific recognition, but never got it. His device was a popular curiosity among the scientific elite, and copies found their way to most of the major centers of that elite: Paris, London, Washington D.C. 
But his scientific paper on its workings was rejected by Poggendorff's Annalen, the premier physics journal of the time, and his efforts to promote it with the telegraph authorities also failed. He suffered from tuberculosis, which, as it worsened, prevented him from further serious work. In 1873 it finally claimed his life as well as his ambition. This is not the last time that this disease will haunt the story of the telephone.5

Around the same time that Reis was refining his telephon, Hermann von Helmholtz was putting the final touches on his seminal study of auditory physiology: On the Sensations of Tone (Die Lehre von den Tonempfindungen), published in 1862. Then a professor at the University of Heidelberg, Helmholtz was a giant of nineteenth-century science, making contributions to the physiology of vision, electrodynamics, thermodynamics, and more. Helmholtz' work is only tangentially connected to our story, but too fascinating to pass by. In On the Sensations of Tone Helmholtz in effect did for music what Newton did for light – he showed how an apparently unitary sensory perception can be decomposed into component parts. He proved that differences in timbre, between the hum of a violin and the drone of a bassoon, come merely from differences in the relative strength of their overtones (tones at double, triple, etc. the frequency of the base note). But for our purposes the most interesting part of this work is the remarkable instrument that he designed for this demonstration:

A variant of Helmholtz' synthesizer, made at the Paris workshop of Rudolph Koenig.

Helmholtz had the first of these devices made at a workshop in Cologne. It was, simply put, a synthesizer, which could generate sounds from the composition of simple tones.6 Its most striking feature was the uncanny ability to replicate the vowel sounds that one normally heard only from human mouths. The synthesizer was driven by the pulse of a master tuning fork, vibrating at the base tone, which alternately opened and closed a circuit by dipping a platinum wire in a cup of mercury. Eight magnetized tuning forks, each tuned to a different overtone, rested between the arms of an electromagnet connected to that circuit. Each closing of the circuit turned on the electromagnets, keeping the magnetized forks humming. Next to each fork sat a cylindrical resonator, which could amplify its hum to an audible level. A lid at the far end normally closed the resonator, muffling the sound of its corresponding fork. By sliding open the lids to various degrees, one could alter the volume of each overtone, and thus 'play' the sound of a trumpet, a piano, or the vowel 'o'.7 This device would play a bit part in the creation of a new kind of telephon.

Harmonic Telegraphy

One of the great lures to inventors in the latter half of the nineteenth century was multiple telegraphy. The more telegraphic signals one could figure out how to squeeze onto a single wire, the more efficiently the telegraph network could be used. Several different methods of duplex telegraphy (sending two signals in opposite directions at the same time) were extant by the early 1870s. Shortly thereafter, Thomas Edison went one better with the quadruplex, which combined duplex and diplex (sending two signals in the same direction at the same time) to share a single wire four ways. But could one go further with the multiplexing of signals? What about an octruplex, or beyond? The fact that sound waves and electric currents were mutually interchangeable offered an intriguing possibility.
What if one could use tones of varying pitch to make an acoustic, harmonic, or, most poetically, musical telegraph? If physical oscillations of varying frequencies could be translated into electrical ones, then split back out into the original frequencies on the other side, then one could send many signals simultaneously without interference. The sound itself was merely a means to an end, an intermediate medium that happened to shape the currents so as to allow multiple signals to coexist on a single wire. For simplicity I will refer to this concept as the harmonic telegraph, though some variety in terminology was used at the time. This was not the only road to multiplexing. In France, Emile Baudot had by 1874 devised a machine with a rotating distributor that would pick up signals from a series of telegraphic transmitters in turn. (What we would now call time-division multiplexing, as against frequency-division).8 But that approach has the singular disadvantage, from our perspective, of being very unlikely to lead to telephony. By this time the dominant name in American telegraphy was Western Union, a combination that had formed in the 1850s in order to eliminate the mutually-damaging competition among several large telegraph companies – prior to federal anti-trust laws, there was no difficulty in nakedly justifying a merger on this basis. One of the protagonists of our story rated it as “probably the largest corporate body that has ever existed.”9 With its thousands of miles of wire and the great expense required to build and maintain them, Western Union eyed any developments in multiple telegraphy with a keen interest. There was another party also on the lookout for advances in telegraphy. Gardiner Hubbard, a Boston lawyer with an entrepreneurial bent, was among the chief proponents of an ongoing effort to put American telegraphy under the control of the federal government. Hubbard believed that telegrams could be delivered as cheaply as letters, and was determined to undermine what he saw as Western Union’s cynical and extortionate monopoly. The “Hubbard Bill” would not have explicitly nationalized the existing telegraph companies, as most European governments had, but instead would have established a government-chartered telegraph service under the auspices of the Post Office. The end result, putting Western Union out of business, would likely have been the same. By the mid-1870s the bill had stalled, but Hubbard knew that control of a crucial new telegraph patent might give him the leverage to finally push his proposal through Congress.10 Gardiner Hubbard There were two factors here unique to the U.S.: First, Western Union’s continental scale. No European telegraph authority had such long lines, and thus so much incentive to pursue multiple telegraphy. Second, the still open question of government control of the telegraph. The last holdout in Europe was Britain, which nationalized its telegraph system in 1870. Thus nowhere else was there the tantalizing possibility of a technological breakthrough that could undermine the incumbent. It is probably for these reasons that the majority of the work on harmonic telegraphy happened in the U.S. There were three primary contenders for the prize. Two were already accomplished inventors – Elisha Gray and Thomas Edison. The third was a professor of elocution and teacher of the deaf named Bell. Gray Elisha Gray came of age on a farm in Ohio. 
Like many of his contemporaries, he tinkered with telegraphy in his boyhood, but after his father died when he was 12, he sought a trade with which to support himself. He apprenticed for a time as a blacksmith and then as a ship joiner, before learning at age 22 that he could acquire a deep education in the physical sciences at Oberlin College while continuing to work in carpentry. After five years of study, he plunged into a career of telegraphic invention. His first patent was a self-adjusting relay, which,by using a second electromagnet instead of a spring to reset the armature, eliminated the need to carefully tune each relay’s sensitivity according to the strength of its circuit.11 Etching of Elisha Gray, ca. 1878 By 1870 he was partner in an electrical equipment manufacturer, of which Gray was the electrician (i.e., chief engineer). In 1872 he and his partner moved their operation to Chicago and renamed it the Western Electric Manufacturing Company. Western Electric soon became Western Union’s primary source for telegraphic equipment, and would become a name of some significance in the history of the telephone. Early in 1874, Gray heard a strange sound coming from his bathroom. It sounded like the whine of a vibrating rheotome, only greatly amplified. The rheotome (literally ‘flow-cutter’) was a well-known electrical device that used a metal reed to rapidly open and close a circuit. When he looked into the bathroom, Gray found his son holding an induction coil connected to a rheotome in one hand, and rubbing his other hand against the zinc lining of the bathtub, which was humming with the same frequency. Gray, intrigued by the possibilities, withdrew from the day-to-day duties of Western Electric to spend his time again on invention. By the summer he had developed a full-octave musical telegraph, with which one could play tones on a diaphragm made from a metal washbin by pressing keys on a keyboard.12 Gray’s keyboard transmitter Gray’s ‘washbin’ receiver The musical telegraph was a novelty with no obvious commercial value. But Gray realized that the ability to pass multiple tones through a single wire opened two other possibilities to him. With a different transmitter that could capture sounds from the air, he could make a voice telegraph. With a different receiver that could disentangle the combined signal back into its composite tones, he could make a harmonic telegraph – i.e. a multiple telegraph based on sound. The latter was the obvious choice for Gray to focus on, given the clear demand from the telegraphy industry. This instinct was only confirmed in him when Gray learned about the Reis telephone, which had proved, as it seemed, a mere philosophical toy.13 For his harmonic telegraph receiver, Gray constructed a series of electromagnets, each abutting a metal strip. Each strip was tuned to respond to a given frequency, so that it would sound when the corresponding key was pressed on the transmitter. The transmitter operated on essentially the same principle as that used in the musical telegraph. Gray refined his apparatus over the next two years, and brought it to the great celebration of the nation’s first 100 years, the Centennial Exhibition in Philadelphia in the summer of 1876. There he demonstrated an ‘octruplex’ connection (i.e. eight simultaneous messages) on a specially prepared telegraph line to New York. This accomplishment received high praise from the exhibition judges, but was soon overshadowed by a greater marvel. 
Edison

It did not take long for William Orton, Western Union's president, to get wind of Gray's progress, and it made him very nervous indeed. The best case, were Gray successful, would be a very expensive patent licensing deal. In the worst case, a Gray patent would form the basis for a rival company that would overthrow Western Union's dominion. So in July 1875, Orton called on his ace in the hole, Thomas Edison. Edison had grown up with telegraphy, spending several years as a telegraph operator before striking off on his own as an inventor. He had created his greatest triumph to date, the quadruplex, while on retainer with Western Union the year before. Now Orton hoped he could go one better, and leapfrog whatever Gray had accomplished. He supplied Edison with a description of Reis' telephone, and Edison also pored over Helmholtz's Sensations of Tone, newly translated into English.14

Edison in 1878 with his phonograph

Edison was then in his prime, and threw off inventive ideas like sparks from an anvil as he worked. Over the course of the next year he came up with two different approaches to acoustic telegraphy – the first, similar to Gray's, used either tuning forks or tuned vibrating reeds to generate and pick up each frequency. Edison never got this to work in a satisfactory manner. The second approach, which Edison called "acoustic transfer," was entirely different. Rather than using the vibrating reeds to transmit different frequencies, he used them to transmit at different frequencies, that is to say, at different intervals. He divided the use of the wire among the senders by time rather than by frequency. This required perfect synchronization of the vibrations in each sender/receiver pair, to prevent the signals from overlapping. By August 1876 he had an octruplex working on this principle, though the signal became unusable beyond 100 miles. He also had some ideas about how to improve Reis' telephone, which he set aside for the time being.15 Then Edison heard about the sensation created at the Philadelphia Centennial Exhibition by a man named Bell.

Bell

Alexander Graham Bell was born in Edinburgh, Scotland, and came of age in London, under the tutelage of his grandfather. Like Gray and Edison, he developed a boyhood interest in telegraphy, but, like his father and grandfather before him, he found his true passion in the domain of human speech. Grandfather Alexander made his name on the stage, then became a teacher of elocution. Father Alexander Melville also taught, and went so far as to develop and publish a system of phonetics which he dubbed Visible Speech. The youngest Alexander (or Aleck, as his family called him) found his particular metier in teaching the deaf to speak. By the late 1860s he was pursuing courses in anatomy and physiology at University College, London, as well as a young woman, Marie Eccleston, whom he hoped to marry. But he then left behind both learning and love. Both his brothers had died of tuberculosis, and Aleck's father requested that he emigrate with the remaining family to the New World, in order to preserve the health of his only surviving son. Bell yielded and set sail in 1870 with reluctance and lasting resentment.16 After a brief stint in Ontario, Alexander found, through his father's influence, a position at a deaf school in Boston. There lay the threads of his future life, waiting to be plucked. First was a girl, Mabel Hubbard, who had lost her hearing at the age of 5 during a bout of scarlet fever.
Even after becoming Professor of Vocal Physiology and Elocution at Boston University, Bell took on private pupils for additional income, and Mabel was among his first. She was then just shy of 16, ten years younger than Bell, but within months he had fallen for her. We shall return to this young woman momentarily.17 Sometime in 1872, Bell had also renewed his interest in telegraphy. Several years before, while still in London, Bell had learned about Helmholtz’s experiments. Bell was confused at the time about what Helmholtz had actually done, believing he had not only generated but transmitted complex sounds electrically. So Bell, too, conceived the germ of harmonic telegraphy – that a wire could be shared by several signals by sending them at different frequencies. Perhaps inspired by the news that Western Union had acquired the duplex invention of Joseph Stearns, a fellow Bostonian, Bell now revived this idea, and, like Edison and Gray, began striving to make it reality.18 One day, while visiting Mabel’s home, he touched a second thread of fate, for while standing by the piano, he taught her family a trick he had learned in his youth. If you sing a pure note into a piano, he showed them, the corresponding wire will hum and play the note back to you. He mentioned to Mabel’s father that a tuned telegraphic signal could do the same, and explained how this could be used for multiple telegraphy.  Bell could not have found a listener more attuned to these words. He resonated with excitement, and immediately grasped the crucial insight: “there is only one air and so there need be but one wire“, i.e. the undulations of the current in the wire could, in miniature, represent all the undulations of air caused by a complex sound. That listener was, of course, Gardiner Hubbard.19 Telephone Here the story gets very complicated, and I fear to try my reader’s patience. I will try to trace the key threads without getting mired in detail. Bell, now backed by Hubbard and the father of another of his students, worked sedulously and in secret on his harmonic telegraph. He alternated bouts of furious work with periods of rest when his health failed him, while still trying to keep up with his duties at the university, the promotion of his father’s system of Visible Speech, and his tutoring of private students. He also acquired a new assistant, Thomas Watson, a skilled mechanic in the Boston shop of Charles Williams, the hub of the local electrical community 20. Hubbard drove Bell on, and did not scruple to use his daughter’s hand as a goad, going so far as to refuse the possibility of marriage until Bell perfected his telegraph. In the summer of 1874, however, while relaxing near his family’s home in Ontario, Bell had a revelation. Several notions that had been percolating in the back of his mind now flowed together into a new creation – a telephone. Among the important influences on his thought was the phonautograph, a device that used a cadaver ear (really) to trace sound waves on smoked glass. This gave Bell the conviction that sounds of arbitrary complexity could be reduced to a point moving through space, like current moving through a wire. The technical details need not detain us here, since they bear no relation to any telephonic device ever built, or that was ever likely to be practical. 
But they set Bell’s mind off in a new direction.21 Conceptual sketch of Bell’s initial ‘harp’ telephone concept (never built) Bell set aside the idea for a time, pressing on with the harmonic telegraph as his partners expected. But he grew tired of the grind of tweaking his instruments; wearied of the many obstacles between a proof of concept and practical system, and more and more his heart was drawn to the telephone. The human voice, his first passion. In the summer of 1875, he discovered that he could use his vibrating reeds not just to open and close a circuit rapidly like a telegraph key, but, when moved within a magnetic field, to actually to generate a continuous undulatory current. He told Watson about his ideas for telephony, and together they built their first telephone model on this principle -a diaphragm that, when vibrating in the field of an electromagnet, produced an undulatory current in the magnet’s circuit. This device managed to transmit some kind of muffled vocal sounds. Hubbard, unimpressed, told Bell to return to the real work.22 Bell’s rudimentary ‘gallows’ telephone from the summer of 1875 Bell did, however, convince Hubbard and his other partners that the undulatory current idea was worthy of patent, and could be of use in multiple telegraphy. As long as the patent was being filed, there was no reason not to mention that, by the way, it could be used for voice communication, too. Then, in January, Bell added to the draft patent a new mechanism for generating the undulatory current: variable resistance. His idea was to attach the vibrating diaphragm that received the sound to a platinum lead that dipped in and out of a cup of acid which contained a second, fixed lead. When the moving lead dipped deeper, more surface area would be in contact with the fluid, decreasing the resistance of the current flowing between the two leads, and vice versa.23 Bell’s sketch of the liquid variable resistance transmitter concept. The vibrations of the diaphragm [m] moved the lead [w] up and down in the liquid. The deeper in the liquid, the stronger the current in the circuit formed by the battery [S], the moving lead, and the stationary lead [R].Hubbard, knowing Gray was hot on Bell’s heels, rushed the undulatory current patent application to the Patent Office on the morning of February 14 without waiting for Bell’s final approval. Gray’s attorney arrived that afternoon with a caveat for his client. It, too, contained a proposal for an undulatory current, variable resistance liquid transmitter. It too, noted the possible applications for both multiple telegraph and voice communication.24 But it was hours too late to create interference for Bell’s application. Had their order of arrival been reversed, lengthy hearings to untangle priority of conception would have been necessary before any patent could have been issued. In the event, on March 7, U.S. patent 174,465, “Improvement in Telegraphy”, was issued in Bell’s name, and laid the cornerstone for the future dominance of the Bell system.25 The high drama of this moment overlays a deep irony, however. For neither Bell nor Gray had actually built a working telephone as of February 14, 1876. Neither had even tried, other than Bell’s brief attempt the previous July, which did not involve variable resistance transmission at all. This should warn us of the danger in giving undue weight to patents as milestones in the history of technology. 
This crucial moment for the telephone as a business concern had almost nothing to do with the telephone as a device. Only after filing the patent did Bell and Watson have time to return to dabbling with the telephone, in defiance of Hubbard's continued insistence that they remain focused on multiple telegraphy. Bell and Watson tried for several months to get Bell's liquid variable resistance idea to work, and a telephone built on that principle was used to convey the famous sentence, "Mr. Watson, come here, I want to see you."26 But the two men ran into constant problems with the reliability of these transmitters. So instead Bell had Watson build new transmitters using the magneto principle that they had experimented with back in the summer of 1875 – using the movement of the diaphragm in a magnetic field to induce the undulating current directly.27 This had the major advantage of reliability and simplicity. But it had the major disadvantage that the strength (or lack thereof) of the telephone signal derived directly from the vibrations of the air created by the speaker's voice. This put a sharp limit on the effective range of the magneto transmitter. In contrast, in the variable resistance device the speaker's voice modulated a battery-generated current, which could be given whatever strength was desired.

The new magneto devices worked much better than those of the previous summer, and convinced Hubbard that there might be something to the telephone after all. Among his many activities, he sat on the committee for the Massachusetts education and science exhibit at the upcoming Centennial Exhibition. He used his influence there to get Bell a slot on the exhibition floor, and on the schedule of the judges evaluating electrical inventions.

Bell/Watson Centennial magneto transmitter. The vibrating metal diaphragm D moving in the magnetic field of the magnet H induced a current in the circuit.

Bell/Watson Centennial iron-box receiver

The judges arrived at Bell's exhibit just after examining Gray's harmonic telegraph. Bell left the judges at the receiver and retired to one of his magneto transmitters, one hundred yards across the gallery. Bell's interlocutors were amazed to hear his songs and words emerge from the little iron box. One of the judges was Bell's fellow Scotsman, William Thomson (later dubbed Lord Kelvin), who ran with excitement across the hall to tell Bell that he had heard his words, and later declared the telephone "the most wonderful thing he has seen in America." The Emperor of Brazil was also present, and pressed his ear to the box, then leaped up from his chair to exclaim, "I hear, I hear!"28

The splash that Bell made at the Centennial galvanized Edison into acting on his previously vague ideas for telephonic transmission. He immediately attacked the glaring weakness in Bell's device: the feeble magneto transmitter. He knew from his work with the quadruplex that the resistance of carbon granules varied with pressure. After a great deal of experimentation with different configurations, he developed a variable resistance transmitter based on this fact. Instead of moving a metal lead in a liquid, the pressure waves of the speaker's voice compressed a carbon "button", altering its resistance and thus the strength of the current in the circuit.
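The contrast between the two transmitter principles can be put in a few lines of arithmetic. The sketch below is illustrative only – the voltages and resistances are invented for the example, not measurements of any real instrument – but it captures the essential point: the magneto's signal is limited to whatever the voice itself can induce, while a variable-resistance transmitter merely throttles a battery current that can be made as strong as one likes.

```python
# Illustrative only: invented values, not measurements of any historical transmitter.
# Magneto principle: the voice itself induces the signal, so its swing is fixed
# by the diaphragm's motion -- on the order of millivolts, with no way to boost it.

# Variable-resistance principle: the voice only modulates a resistance R in a
# battery circuit, so the current swing scales directly with the battery voltage.
def variable_resistance_swing(battery_volts, r_rest=100.0, r_delta=10.0):
    """Peak-to-peak current swing (amps) as the voice swings R by +/- r_delta ohms."""
    return battery_volts / (r_rest - r_delta) - battery_volts / (r_rest + r_delta)

for volts in (3.0, 6.0, 12.0):
    swing_ma = variable_resistance_swing(volts) * 1000
    print(f"{volts:>4} V battery -> current swing of about {swing_ma:.1f} mA")

# Doubling the battery doubles the signal; nothing about the speaker's voice has
# to change. The magneto transmitter offered no such knob to turn.
```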
Edison's carbon transmitter was much simpler, more reliable, and easier to maintain than the liquid transmitters conceived by Bell and Gray, and was a vital contribution to the long-term success of the telephone.29 Yet Bell arrived first at the telephone, despite the obvious advantages in skill and experience as electrical inventors held by his rivals. He arrived first, not because he had any special insight that they had missed – both thought of the telephone, but they thought it insignificant compared to the opportunities available to pursue an improved telegraph. Bell arrived first because he cared more about the human voice than about telegraphy, cared enough to defy the will of his partners until he could prove that his telephone would work.

What of the harmonic telegraph, on which Gray, Edison and Bell had expended so much effort and ingenuity? Nothing would come of it, yet. Keeping mechanical vibrators on both ends of the wire in perfect tune with one another proved extremely hard, and no one knew how to amplify the combined signal to work over longer distances. Not until well into the twentieth century, after electronic technology that originated with radio allowed precise, unwavering frequency tuning and low-noise amplification, did the idea of overlaying many signals on a single wire come to fruition.30

Farewell to Bell

Despite the success of the telephone at the Centennial, Hubbard was still not terribly interested in building a telephone system. Sometime during the following winter, he offered William Orton, Western Union's president, all of the rights to the telephone as embodied in Bell's patents for $100,000. Motivated by some combination of distaste for Hubbard and his postal telegraph schemes, confidence in his own resources and Edison's ongoing work on the telephone, and belief that the telephone was of scant importance next to the telegraph, Orton refused. Other attempts by Hubbard to sell were similarly rejected, in large part for fear of the inevitable and costly patent litigation that would follow commercialization.31 Therefore in July 1877, Bell and his partners formed the Bell Telephone Company to begin establishing telephone service themselves. That same month Bell finally married young Mabel Hubbard at her family home, having achieved success enough to win her father's blessing.

Aleck with wife Mabel and their two surviving children – two sons died in infancy (ca. 1885)

Sometime during the following year, Orton changed his mind about the prospects of the telephone, and formed his own company, the American Speaking Telephone Company, counting on patents from Edison, Gray and others to protect the company from legal action by Bell. This was a mortal threat to the Bell interests. Western Union had two major advantages. First: its vast reserves of capital. The Bell Company was a capital-hungry business, because it rented its equipment to the customer, who would take many months to pay back its initial cost. Second: access to Edison's superior transmitter. Anyone who compared it to Bell's could not help but favor the greater clarity and loudness of the voices it conveyed. The Bell company had no choice but to bring a patent infringement suit against their new rival. Had Western Union secured unequivocal rights to the only available high-quality transmitter, they might have had powerful leverage in reaching an agreeable settlement. But Bell's team unearthed an earlier caveat lodged by a German immigrant named Emil Berliner for a similar device, and acquired the rights.
It would take many years of legal battle before Edison’s patent was given priority. Seeing that the trial was not going to go its way, in November 1879 Western Union agreed to grant all of its telephone patent rights, equipment, and existing subscribers (some 55,000 of them) to the Bell Telephone Company. All they asked in exchange was for twenty percent of the telephone rental income for the next seventeen years, and that Bell keep its nose out of the telegraph business.32 Like the Morse telegraph, the Bell telephone system was catalyzed, rather than built, by the work of its namesake. The Bell company quickly replaced Bell’s instruments with superior ones based at first on Berliner’s patent and later on patents acquired from Western Union. By the time of the settlement, Bell’s primary duty in the Bell Telephone Company was providing testimony in patent suits, of which there were many. By 1881 he withdrew from the business entirely. Like Morse, and unlike Edison, he was no system builder. Theodore Vail, a managerial dynamo recruited by Gardiner from the Post Office, took charge of the company and oversaw its rise to national dominance. The initial pattern of growth of the telephone network was very different from the telegraph’s. The latter leaped from one commercial hub to the next in bounds of 100 miles or more, seeking the highest concentrations of high value customers, and only later filling out the network with connections to smaller local markets. Telephone networks, on the other hand, grew like crystals from tiny local seeds, a handful of customers in independent clusters in each town and neighborhood that gradually, over decades, interconnected into regional and national structures. There were two reasons for this, two obstacles to larger-scale telephony. First was the problem of distance. Even with the stronger variable resistance transmitters descended from Edison’s, there was no comparison between the reach of the telegraph and telephone. The far more complex signal of the telephone made it much more susceptible to noise, and the electrical properties of its fluctuating currents were much less well understood than those of the direct currents used in telegraphy. Second was the problem of connection. The telephone as designed by Bell was, like the telegraph, a one-to-one device, it could only directly connect two endpoints along a single wire. For the telegraph this was not a serious problem. A single office could serve many customers and messages could easily be forwarded from a central office down another line. One could not easily ‘forward’ a telephone conversation, however. In the first instance, the only way that a 3rd or later subscriber could talk to two existing ones was to join what was later called a “party line.” That is to say, a single wire with all the subscribers’ instruments attached to it, so that anyone could talk to (or eavesdrop on) anyone else. We’ll return to the problem of distance in due time. In our next installment, we will delve into the problem of connection and its consequences for the switch. [Next Part] Further Reading Robert V. Bruce, Bell: Alexander Graham Bell and the Conquest of Solitude (1973) David A. Hounshell, “Elisha Gray and the Telephone: On the Disadvantages of Being an Expert”, Technology and Culture (1975). Paul Israel, Edison: A Life of Invention (1998) George B. Prescott, The Speaking Telephone, Talking Phonograph, and Other Novelties (1878)

Read more
Lost Generation: The Relay Computers

Our previous installment described the rise of automatic telephone switches, and of the complex relay circuits to control them. Now we shall see how scientists and engineers  developed such relay circuits into the first, lost, generation of digital computers. The Relay at Zenith Recall to mind the relay, based on the simple idea that an electromagnet could operate a metal switch. It was conceived independently several times by natural philosophers and telegraph entrepreneurs in the 1830s. The inventors and mechanics of the middle century then forged it into a robust and essential component of telegraph networks. But it reached the zenith of its development in the telephone networks, miniaturized and diversified into a myriad of forms by a new breed of engineers with formal training in mathematics and physics. Not only the automatic switching systems, but virtually all of the equipment in the telephone networks of the early twentieth century involved relays in some fashion. Their earliest use was in the ‘drops’ on the manual switchboards that first appeared in the late 1870s. When a caller cranked the magneto on their phone to signal the central office, it triggered a drop relay, allowing a metal shutter on the switchboard to fall, thus indicating the incoming call to the operator. When the operator inserted her jack into the plug, it reset the relay, allowing her to lift the shutter back up to be held in place by the reactivated magnet. By 1924, two Bell engineers wrote, a typical manual office, serving 10,000 subscribers, “would have from 40,000 to 65,000 relays” with a combined magnetic strength “sufficient to lift ten tons.” A large machine-switched office would have double that number.  The U.S. telephone system taken as a whole, then, was a tremendous machine of many millions of relays, growing in number all the time as Bell steadily automated office after office. The connection of a single call would engage from a handful up to several hundred of them, depending on the number and types of offices involved. An endless menagerie of relays marched forth from the factories of Western Electric, Bell’s manufacturing arm, to supply the needs of this vast system. Bell engineers bred varieties enough to sate the most jaded pigeon fancier or kennel club. They optimized for speed, sensitivity, and small size. In 1921, Western Electric produced almost five million of the things, in 100 main types. Most common was the generalist Type E, a flat, roughly rectangular device weighing a few ounces, made primarily of stamped-metal parts for ease of manufacturing.  A casing around the relay body protected the contacts from dust and the electrical circuits from interference from their neighbors – the relays were typically tightly packed together by the hundreds and thousands in towering racks at the central offices. The type E was made in 3,000 different variants, each with its special configuration of wire windings and contacts.1 Soon these relays were incorporated into the most complex switching circuit yet known. Crossbar Switch In 1910, Gotthilf Betulander had an idea. Betulander was an engineer at Royal Telegrafverket, a state-owned corporation that controlled most (and within a decade virtually all) of the Swedish telephone market. He believed he could greatly improve the efficiency of Telegrafverket’s operations by building an automatic switching system entirely of relays: a matrix of relays sitting at each intersection in a lattice of metal bars which connected to the phone lines. 
It would be faster, more reliable, and easier to maintain than the sliding and rotating contacts then used. Moreover, Betulander realized that he could separate the selection and connection portions of the system into independent relay circuits. The former would be used only to set up the speech circuit, and then could be freed for use in another call. Betulander had independently arrived at what was later dubbed “common control”. He called the relay circuit that stored the incoming number the recorder (the register, in American parlance), and the circuit that actually found and ‘marked’ an available connection in the lattice the marker. He patented his system and saw a handful put into use in Stockholm and London. Then, in 1918, he heard about an unexploited American innovation, a crossbar switch conceived by Bell engineer John Reynolds five years before. It worked much like his own, but used only n + m relays to serve the n x m junction points of the lattice, making it much more feasible to scale up for larger offices.  To make a connection it trapped piano wire ‘fingers’ against a holding bar, allowing the selecting bar to move on to connect another call. By the following year Betulander had integrated this idea into his own switching apparatus. Most telephone engineers, however, found Betulander’s system strange and excessively complex. When it came time to choose a switching system for automating Sweden’s major cities, Telegrafverket chose one developed by another native company, Ericsson.  The Betulander switch survived only in a scaled-down model adapted to rural markets – because the relays were more reliable than the motor-operated machinery of the Ericsson switch, no dedicated service personnel were required at each office.2 American telephone engineers, however, came to a different conclusion. A 1930 Bell Labs mission to Sweden was “fully convinced of the merits of the crossbar connecting unit”, and upon their return they immediately set to work on what came to be known as the “No. 1 crossbar system” to replace the panel switch in large cities.3 By 1938, two were installed in New York City, and they soon became the switching system of choice for urban markets until the arrival of electronic switching over thirty years later. For our purposes, the most interesting component of the No. 1 crossbar was the new, more sophisticated marker developed by Bell. Its job was to find an idle path from the caller to the callee through the multiple linked crossbar units that formed the exchange. It had to test the idle and busy state of each connection. This required conditional logic. As telephone historian Robert Chapuis wrote4: Selection is conditional, since an idle link is retained only if it gives access to a crossbar which has as its outlet an idle link to the next stage. When several sets of links meet the conditions, a ‘preferential logic’ singles out one [consisting] of the links which are the lowest numbered… The crossbar switch is a beautiful case study in the cross-fertilization of technological ideas. Betulander conceived of his all-relay switch, then improved it with Reynolds’ switching fabric, and proved it could work. AT&T engineers then re-absorbed this hybrid creation, made their own refinements, and thus produced the No. 1 crossbar system. This system then itself became a component of two early computing machines, one of which inspired a landmark paper in the history of computing. 
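To make the marker's "preferential logic" described by Chapuis concrete, here is a minimal sketch in Python of that style of selection – a first-stage link is kept only if it leads to a unit with an idle link onward, and among the workable choices the lowest-numbered one wins. The data layout and names here are invented for illustration; they do not describe the actual relay circuits of the No. 1 crossbar.

```python
# Toy model of a crossbar marker's path hunt (illustrative, not Bell's circuit).
# Each first-stage link is (idle?, unit it reaches); each unit has onward links.

def find_path(first_stage_links, onward_links_by_unit):
    """Return (first link, unit, onward link) for the lowest-numbered idle path."""
    for number, (idle, unit) in sorted(first_stage_links.items()):
        if not idle:
            continue  # busy link: rejected outright
        # Conditional selection: this link counts only if its unit has an idle outlet.
        idle_outlets = [n for n, free in sorted(onward_links_by_unit[unit].items()) if free]
        if idle_outlets:
            # Preferential logic: the lowest-numbered workable combination wins.
            return (number, unit, idle_outlets[0])
    return None  # blocked: no idle path through the switching fabric

# Link 0 is busy, link 1 reaches a unit with no idle outlets, link 2 succeeds.
links = {0: (False, "A"), 1: (True, "B"), 2: (True, "C")}
outlets = {"A": {0: True}, "B": {0: False, 1: False}, "C": {0: False, 1: True}}
print(find_path(links, outlets))  # -> (2, 'C', 1)
```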
Mathematical Labor In order to understand how and why relays and their electronic cousins helped bring about a revolution in computing, we must take a brief detour into the world of mathematical labor.5 We will then understand the latent demand for a better way to calculate. By the early twentieth century, the entire edifice of modern science and engineering rode upon the backs of thousands of mathematical laborers, known as computers. Charles Babbage had recognized the problem as far back as the 1820s, and thus proposed his difference engine (and even he had antecedents). His primary concern was to automate the construction of mathematical tables, such as those used for navigation (e.g. computing the value of the trigonometric functions by polynomial approximations at 0 degrees, 0.01 degrees, 0.02 degrees, etc.). The other main demand on computational labor at that time came from astronomers: to process raw telescope observations into fixed positions on the celestial sphere (based on the time and date when they were made), or to determine the orbit of a new object (such as Halley’s Comet). Since Babbage’s time, the problem had grown much more acute. Electrical power companies wanted to understand the behavior of transmission systems with extremely complex dynamic properties. Bessemer-steel guns that could throw a shell over the horizon (and whose aim, therefore, could no longer be corrected by eye) created the demand for ever more precise ballistics tables. New, computationally-intensive statistical tools (such as the method of least-squares) spread across the sciences, and into the ever-growing bureaucracies of the modern state. Offices devoted to computation, like telephone offices usually staffed largely or entirely by women, sprung up in universities, government departments, and industrial corporations. Mechanical calculators did exist, but they only ameliorated the labor problem, they did not eliminate it.  They could perform simple arithmetic operations quickly, but any interesting science or engineering problem involved many hundreds or thousands of such operations, each of which the (human) computer had to enter by hand, carefully transcribing all of the intermediate results. There were several forces that helped bring about new approaches to the problem of mathematical labor. Young scientists and engineers grinding out their own calculations late into the night sought relief for their cramping hands and drooping eyelids. Program administrators felt the pinch of the ever-increasing wages for their platoons of computers, especially after World War I.  Finally, many problems at the frontiers of science and engineering were intractable to manual calculation. One series of machines that arose from these pressures was overseen by an electrical engineer at the Massachusetts Institute of Technology (MIT) named Vannevar Bush. Differential Analyzer Our story, in recent chapters so often impersonal, even anonymous, now returns to the realm of personality. Fame is partial with her favor, and has not seen fit to bestow any upon the creators of the panel switch, the type E relay, the crossbar marker circuit. There are no biographical anecdotes we can summon to illuminate the lives of these men; the only readily available remains of their lives are the stark fossils of the machines they created.6 Now, however, we are to deal with men lucky enough to have left a deeper impression in the record of the past – not just bones, but feathers. 
But we will no longer meet individuals toiling away in their private attic or workshop – a Morse and Vail, or Bell and Watson. By the end of World War I, the era of the heroic inventor was largely over. Thomas Edison is, by convention, the transitional figure – inventor-for-hire at the start of his career, proprietor of an "invention factory" by its end.7 Institutions – universities, corporate research departments, government labs – now governed the development of most significant new technologies. The men we will meet in the remainder of this chapter all belonged to such institutions.

One of them was the aforementioned Vannevar Bush, who arrived at MIT in 1919, at the age of 29. Just over twenty years later he joined the chief directors of the American effort in the Second World War, and helped to unleash a flood of government spending that would permanently transform the relationship between government, academia, and the development of science and technology. But for our current purposes, what matters is the series of machines developed in his lab to attack the problem of mathematical labor, from the mid-1920s onward.8

MIT, newly relocated from central Boston to the banks of the Charles in Cambridge, was closely attuned to the needs of industry. Bush himself had his hand in several business ventures in the field of electronics, in addition to his professorship. It should come as no surprise, then, that the problem that initially drove Bush and his students to build a new computing device came from the electrical power industry: modeling the behavior of long-distance power lines under sudden loads. But it was obvious that this was only one of many possible applications: tedious mathematical labor was everywhere around them.

Bush and his collaborators first built two machines that they called product integraphs. But the best-known and most successful of the MIT machines was the third, the differential analyzer, completed in 1931. In addition to power transmission problems, it computed electron orbits, the trajectories of cosmic rays in the earth's magnetic field, and much more. Researchers across the world, hungry for computational power, built dozens of copies or variants throughout the 1930s, some of them even constructed from Meccano (the British equivalent of the American Erector Set).

The differential analyzer was an analog computer. It computed mathematical functions with turning metal rods, each of which represented some quantity by the direct physical analog of its rotational velocity. A motor drove the independent variable rod (typically representing time), driving the other rods (representing various derived variables) in turn by mechanical linkages that computed a function over the input velocity. The final outputs were plotted as curves on paper. The most important components were the integrators, wheels turned by discs that could compute the integral of a curve without any tedious hand calculation.

The differential analyzer. The integrating units (one with its case open) are visible at the bottom, the curve-plotting tables at the top, and the computing rods at the center.

None of this machinery involved the abrupt on/off discontinuity of relays, or any kind of digital switch, so one may fairly wonder what all of this has to do with our story. The answer is contained in the fourth machine in MIT's family of computing machines. In the early 1930s, Bush began wooing the Rockefeller Foundation for funding for a sequel to the analyzer.
Warren Weaver, head of the foundation’s Natural Sciences Division, was at first unconvinced. Engineering was not part of his portfolio. Bush, however, touted the limitless potential of his new machine for scientific applications – especially in mathematical biology, a pet project of Weaver’s. He also promised numerous improvements on the existing analyzer, especially “the possibility of enabling the analyzer to switch rapidly from one problem to another in the manner of the automatic telephone exchange.”9 In 1936 his attentions paid off, in the form of a $85,000 grant to build this new device, which would be known as the Rockefeller differential analyzer. As a practical calculating device, the Rockefeller analyzer was not a great success. Bush, now Vice President and Dean of Engineering for MIT, had little spare time to oversee its development. In fact, he soon absented himself entirely, to take on the presidency of the Carnegie Institute, in Washington, D.C. He felt the looming shadow of war, and had firm ideas about how science and industry could be made to serve America’s military needs. He wanted, therefore, to be close to the centers of power, where he could better influence matters. In the mean time, the lab struggled with the technical challenges posed by the new design, and soon became distracted with pressing war work. The Rockefeller machine was not completed until 1942, years behind schedule. The military found its services useful for churning out ballistic firing tables. It was, however, soon overshadowed by the purely digital computers that our story is building towards – those that represented numbers not directly as physical quantities, but abstractly in the positions of switches. It so happens, though, that the Rockefeller analyzer itself contained quite a few such switches, in a relay circuit of its own. Shannon In 1936, Claude Shannon was just twenty years old, but newly graduated from the University of Michigan with a dual degree in electrical engineering and mathematics, when he came across an advertisement on a corkboard. Vannevar Bush was looking for a new research assistant to work on the differential analyzer. He did not hesitate to apply, and soon began working on some of the problems presented by the new analyzer just then taking shape.10 Shannon was a very different sort of man from Bush. He was no man of business, no academic empire builder, no administrator of men and money. He was a life-long lover of games, puzzles, and diversions: chess, juggling, mazes, cryptograms. Like most men of the era, he dedicated himself to serious business during the war – taking a government contract position at Bell Labs that would shield his frail body from the draft.11 His studies on fire-control and cryptography during this time led, in turn, to his seminal paper on information theory (which is outside the scope of the present tale). In the 1950s, though, the war and its consequences behind him, he returned to teach at MIT, spending his spare time on his amusements: a calculator that operated solely in roman numerals; a machine that, when turned on, reached out a hand to switch itself back off. The Rockefeller machine that he confronted now, although logically similar in structure to the 1931 analyzer, had entirely different physical components. Bush realized that the architecture of rods and mechanical linkages in the older machine prevented its efficient use. It took many hours of work by skilled mechanics to set up a problem before any actual computation could be done. 
The new analyzer did away with all that. At its center, replacing the table of mechanical rods, sat a crossbar switch – a surplus prototype donated by Bell Labs. Rather than relying on power transmitted from a central driveshaft, an electric motor (or servo) drove each integrator unit independently. Setting up a new problem was simply a matter of configuring the relays within the crossbar lattice in order to wire up the integrators in the desired pattern. A punched paper tape reader (borrowed from another telecommunications device, the teletypewriter) read in the machine configuration, and a relay circuit translated the signals from the tape reader into control signals for the crossbar – as if it were setting up a series of telephone calls between the integrators. In addition to being much faster and easier to set up, the new machine operated with greater speed and precision than its predecessor, and could handle more complex problems. Already this computer, which we would now consider primitive, even quaint, presented to observers the intimation of some great – or perhaps terrible – mind at work:12
In effect, it is a mathematical robot, an electrically driven automaton which has been fashioned not merely to relieve human brains of the drudgery of difficult calculation and analysis, but actually to attack and solve mathematical problems which are beyond the reach of mental solution.
Shannon focused his attention in particular on the translation of the paper tape into instructions to the ‘brain’, and on the relay circuit that carried out this operation. He noticed a correspondence between the structure of that circuit and the mathematical structures of Boolean algebra, which he had studied in his senior year at Michigan. It was an algebra whose operands were true and false, and whose operators were and, or, not, etc.; an algebra that corresponded to statements of logic. After spending the summer of 1937 working at Bell Labs in Manhattan – an ideal place to think about relay circuits – Shannon turned this insight into a master’s thesis entitled “A Symbolic Analysis of Relay and Switching Circuits.” Along with a paper by Alan Turing written the year before (discussed briefly below), this formed the first foundation for a science of computing machines.
Shannon built several computing/logic machines in the 1940s and 50s: the THROBAC roman-numeral calculator, a machine to play chess endgames, and the Theseus maze-solver (pictured).
Shannon found that one could directly translate a system of equations of propositional logic into a physical circuit of relay switches, through a rote procedure. “In fact,” he concluded, “any operation that can be completely described in a finite number of steps using the words if, or, and, etc. can be done automatically with relays.”13 For example, two relay-controlled switches wired in series formed a logical and – current would flow through the main wire only if both electromagnet circuits were activated to close the switches. Likewise, two relays in parallel formed an or – current would flow through the main circuit if either electromagnet were activated. The outputs of such logic circuits could, in turn, control the electromagnets of other relays, to make more complex logical operations, such as (A and B) or (C and D). Shannon concluded the thesis with an appendix containing several example circuits constructed according to his method. Because the operations of Boolean algebra were very similar to arithmetic operations in base two (i.e.
using binary numbers), he showed how one could construct from relays an “electric adder to the base two” – what we would call a binary adder. Just a few months later, another Bell Labs scientist built just such an adder on his kitchen table.
Stibitz
George Stibitz, a researcher in the mathematics department at Bell Labs’ headquarters at 463 West Street in Manhattan, took an odd assortment of equipment home with him on a dark November evening of 1937. Dry battery cells, two tiny switchboard light bulbs, and a pair of flat-construction Type U relays, retrieved from a junk bin. With this, some spare wire, and a few other bits of scrap, he built in his apartment a device that could sum two single-digit binary inputs (represented as the presence or absence of an input current), and signal the two-digit output on the bulbs: on for one, off for zero.14
The Stibitz binary adder
Stibitz, trained as a physicist, had been asked to look into the physical properties of relay magnets, and having no prior familiarity with relays in general, began to study up on how Bell used them in its telephone circuits. He soon noticed the similarity between some of those arrangements and the arithmetic of binary numbers. Intrigued, he set off on his little kitchen-table side project. At first, Stibitz’ tinkerings with relays generated little interest among the higher-ups at Bell Labs. Sometime in 1938, however, the head of his research group asked him whether his calculators might be used to do arithmetic on complex numbers (of the form a + bi, where i is the square root of negative one). It turned out that several computing offices within Bell Labs were groaning under the constant multiplication and division of such numbers required by their work. Multiplying a single pair of complex numbers required four arithmetic operations on a desk calculator; dividing them required sixteen. Stibitz asserted that yes, he could solve this problem, and set out to design a machine that would prove it. The final design, fleshed out by veteran telephone engineer Samuel Williams, was dubbed the Complex Number Computer, or Complex Computer for short, and was put into service in 1940. It computed with 450 relays, and stored intermediate results in ten crossbar switches. Input was entered and responses received via teletypewriter, of which three were installed at the Bell Labs departments with the greatest need. Relays, crossbar, teletype: here was a product of the Bell system, through and through. The Complex Computer had its finest hour on September 11, 1940. Stibitz gave a paper on the computer at a meeting of the American Mathematical Society at Dartmouth College, and he arranged for a teletypewriter to be set up at McNutt Hall, linked by telegraph back to the Complex Computer in Manhattan, some 250 miles away. Attendees could step up to the machine, enter a problem at the keyboard, and see the solution typed out as if by magic, less than a minute later. Among the attendees who took a turn at the teletype were John Mauchly and John von Neumann, each of whom has an important part to play later in our story. They had seen a brief glimpse of a future world. Because later computers were so expensive,15 administrators could not afford to let them sit idle while the user scratched his chin at the console, pondering what to type in next.
Not for another twenty years would computer scientists figure out how to make general-purpose computers appear to be always awaiting your next input, even while working on someone else’s, and it would take nearly another twenty years for this interactive mode of computing to enter the mainstream.
Stibitz at a Dartmouth interactive terminal in the 1960s. Dartmouth was a pioneer in interactive computing, and Stibitz took a professorship there in 1964.
Admirable as the Complex Computer was in its métier, it was by no means a computer in the modern sense. It could carry out arithmetic in complex numbers and perhaps be adjusted to solve other related problems, but it was not designed for general-purpose problem-solving, and had no programmability: it could neither be instructed to perform its operations in an arbitrary order nor to carry them out repeatedly. It was a calculator that happened to perform certain calculations with much greater convenience than its predecessors. The advent of the Second World War, however, called forth from Bell, with the guidance of Stibitz, a further series of computers, later known as Model II through Model VI (the Complex Computer becoming, retroactively, Model I). Most were built at the behest of the National Defense Research Committee, headed by none other than Vannevar Bush, and Stibitz pushed their design towards increasing generality of function and programmability. The Ballistic Calculator (later Model III), for example, designed to aid in the development of anti-aircraft fire-control systems, went into service in 1944 at Fort Bliss, Texas. It had 1,400 relays, and could execute a program of mathematical operations defined by a sequence of instructions stored on a loop of paper tape. Input data came from a separate tape feed, and it had additional feeds for tabular data – to quickly look up the values of, for instance, trigonometric functions, without having to do an actual calculation. Bell engineers developed special ‘hunting circuits’ to scan forward and back on the tape to find the address of the desired table value, independent of the calculation. Stibitz estimated that, with its relays chattering away day and night, the Model III replaced the work of 25-40 computing ‘girls’.16
Relay frames for the Bell Model III
The Model V, completed too late to see war service, offered even greater flexibility and power to its users. Measured in terms of the computational labor it replaced, it had roughly ten times the computing capacity of the Model III. With 9,000 relays, it contained multiple computing units fed from multiple input stations where a user could set up a problem. Each such station held one tape reader for the data input and five for the instructions – allowing for the invocation of multiple sub-routines during the execution of the main tape. A master control unit (in effect an operating system) allocated instructions to each of the computing units depending on its availability, and programs could execute conditional branches (i.e. jump to one part of the program or another, depending on the current state of execution). Here was no mere calculator.
Annus Mirabilis: 1937
The year 1937 (roughly speaking)17 stands as a pivotal moment in the history of computing. We have already seen how, in that year, Shannon and Stibitz both noticed a homology between circuits of relays and mathematical functions, an insight which led Bell Labs to design a whole series of important digital machines.
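To make that homology concrete, here is a minimal sketch in modern Python – not anything Shannon or Stibitz wrote, with function names invented purely for illustration. Each relay contact is modeled as a boolean (closed and passing current, or open), series wiring behaves as and, parallel wiring as or, and from those pieces one can assemble the “electric adder to the base two” that both men arrived at.

# Toy model of relay logic: a contact is True (closed, current flows) or False (open).

def series(a: bool, b: bool) -> bool:
    """Two contacts in series pass current only if both are closed: logical AND."""
    return a and b

def parallel(a: bool, b: bool) -> bool:
    """Two contacts in parallel pass current if either is closed: logical OR."""
    return a or b

def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    """Add two one-digit binary numbers, in the spirit of Stibitz's kitchen-table adder.
    Returns (carry, sum). NOT corresponds to a normally-closed relay contact."""
    carry = series(a, b)                        # A AND B
    total = series(parallel(a, b), not carry)   # (A OR B) AND NOT (A AND B), i.e. XOR
    return carry, total

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            carry, total = half_adder(a, b)
            print(f"{int(a)} + {int(b)} = {int(carry)}{int(total)}")

Run, it prints 0 + 0 = 00, 0 + 1 = 01, 1 + 0 = 01, and 1 + 1 = 10; in Stibitz’s device the carry and sum digits appeared on his two bulbs, lit for one and dark for zero.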
In a kind of exaptation – or even transubstantiation – the humble telephone relay had also become an embodied abstraction of mathematics and logic, without changing its physical form. At the beginning of that same year, British mathematician Alan Turing’s “On Computable Numbers, with an Application to the Entscheidungsproblem” appeared in the January 1937 issue of the Proceedings of the London Mathematical Society. In it, he described a universal computing machine: one that could, he argued, perform actions logically equivalent to all those carried out by a human mathematical laborer. Turing, who had arrived at Princeton University the previous year for graduate studies, was also intrigued by relay circuits, and, like Bush, he worried about the looming threat of war with Germany. So he took up a cryptographic side project, a binary multiplier that might be used to encrypt war-time messages, built from relays that he had crafted in the Princeton machine shop.18 Sometime in 1937, Howard Aiken composed a proposal for an “Automatic Calculating Machine.” A Harvard graduate student in electrical engineering, Aiken had ground through his fair share of calculations with the aid only of a mechanical calculator and printed books of mathematical tables. He proposed a design that would eliminate such tedium – unlike existing calculating machinery, it would automatically and repeatedly carry out the processes assigned to it, and use the outputs of previous steps as inputs to the next.19 Meanwhile, outside the English-speaking world, similar developments were in motion. Across the Pacific, Nippon Electric Company (NEC) telecommunications engineer Akira Nakashima had been exploring the relationship between relay circuits and mathematics since 1935. Finally, in 1938, he independently proved the same equivalency between relay circuits and Boolean algebra that Shannon had found the year before.20 In Berlin, Konrad Zuse, a one-time aircraft engineer benumbed by the endless calculations the job required, was looking for funding for his second computing machine. He had never gotten his purely mechanical V1 device21 to work reliably, so instead he planned to make a relay computer, designed with the aid of his friend, telecommunications engineer Helmut Schreyer. The ubiquity of the telephone relay, the insights of mathematical logic, and the desire of bright minds to escape from dull work were intertwining to create a vision of a new kind of logic machine.
The Lost Generation
The fruits of 1937 took several years to ripen. War proved a most potent fertilizer, and with its arrival relay computers began to spring up wherever the necessary technical expertise existed. Mathematical logic had provided a trellis across which electrical engineering then stretched its vines, developing new forms of programmable computing machines, the first draft of the modern computer as we know it. In addition to Stibitz’s machines, the United States could by 1944 boast the Harvard Mark I / IBM Automatic Sequence Controlled Calculator (ASCC), the eventual fruit of Aiken’s 1937 proposal. It was a collaboration between academia and industry gone sour, hence the two names, depending on who is claiming the credit. The Mark I/ASCC used relay control circuits, but followed the structure of IBM’s mechanical calculators for its core arithmetic unit. It was built in time to serve the Navy Bureau of Ships during the war.
Its successor, the Mark II, went into operation in 1948 at the Naval Proving Ground, and was fully converted to relay-based operation, boasting 13,000 relays.22 Zuse built several relay computers during the war, of increasing complexity and sophistication, culminating in the V4, which, like the Bell Model V, included facilities for invoking sub-routines and conditional branching. Due to material shortages, no one in Japan exploited the discoveries of Nakashima and his compatriots until after the country’s post-war recovery. But in the 1950s, the newly formed Ministry of International Trade and Industry (MITI) sponsored two relay machines, the second of them a 20,000-relay behemoth. Fujitsu, which had collaborated in the construction of these machines, developed its own commercial spin-offs.23 These machines are almost entirely forgotten. Only one name from this era has survived: ENIAC. The reason had nothing to do with sophistication or capability; it had everything to do with speed. The computational and logical properties of relays that Shannon, Stibitz, and others had uncovered pertained to any kind of device that could be made to act as a switch. And it just so happened that another such device was available, an electronic switch that could be switched on and off hundreds of times faster than a relay. The importance of World War II to the history of computing should already be evident. For the rise of electronic computing, it was essential. The outbreak of war on a scale more terrible than any before known unleashed the resources needed to overcome the evident weaknesses of the electronic switch, signaling that the reign of the electro-mechanical computers would be brief. Like the titans, they were to be overthrown by their children. As with the relay, electronic switching was born from the needs of the telecommunications industry. To find out where it came from, we must now rewind our story to the dawn of the age of radio.

Extending Interactivity

In the early 1960s, interactive computing began to spread out from the few tender saplings nurtured at Lincoln Lab and MIT – spread in two different senses. First, the computers themselves sprouted tendrils that reached out across buildings, campuses, and towns to allow users to interact at a distance, and to allow many users to do so at the same time. These new time-sharing systems blossomed, accidentally, into platforms for the first virtual, on-line societies. Second, the seeds of interactivity spread across the country, taking root in California. One man was responsible for sowing those first transplants, a psychologist named J.C.R. Licklider.
Joseph Appleseed
Joseph Carl Robnett Licklider — known to friends as “Lick” — specialized in psychoacoustics, a field that bridged the gap between subjective states of mind and the measurable physiology and physics of sound. We met him briefly before, as a consultant in the 1950s FCC Hush-a-Phone hearings. He had honed his skills at the Psycho-Acoustics Laboratory at Harvard during the war, devising techniques to improve the audibility of radio transmissions inside noisy bombers.
J.C.R. Licklider, a.k.a. ‘Lick’
Like so many American scientists of his generation, he found ways to continue to meld his interests with military needs after the war, but not because he had a special interest in weaponry or national defense. The only major civilian sources of money for scientific research were two private institutions founded by the industrial titans of the turn of the century: the Rockefeller Foundation and the Carnegie Institution. The National Institutes of Health had only a few million dollars to spend, and the National Science Foundation was created only in 1950, with a similarly modest budget. To get funding for interesting science and technology in the 1950s, your best bet was the Department of Defense. So, in 1950, Licklider joined an acoustics lab at MIT directed by the physicists Leo Beranek and Richard Bolt, and funded almost entirely by the U.S. Navy.1 Once there, his expertise on the interface between the human senses and electronic equipment made him a natural early recruit to MIT’s new air defense project. As part of the Project Charles study group, tasked with figuring out how to implement the Valley Committee air defense report, Licklider pushed for the inclusion of human factors research, and got himself appointed co-director of radar-display development for Lincoln Laboratory. There, at some point in the mid-1950s, he crossed paths with Wes Clark and the TX-2, and instantly caught the interactive computing bug. He was captivated by the idea of being in total control of a powerful machine that would instantly solve any problem addressed to it. He began to develop an argument for “man-computer symbiosis,” a partnership between human and computer that would amplify humankind’s intellectual power, in the same way that industrial machines had amplified its physical power. He noted that some 85% of his own work time2
…was devoted mainly to activities that were essentially clerical or mechanical: searching, calculating, plotting, transforming, determining the logical or dynamic consequences of a set of assumptions or hypotheses, preparing the way for a decision or an insight. Moreover, my choices of what to attempt and what not to attempt were determined to an embarrassingly great extent by considerations of clerical feasibility, not intellectual capability.
…the operations that fill most of the time allegedly devoted to technical thinking are operations that can be performed more effectively by machines than by men.
The overall concept did not stray too far from Vannevar Bush’s Memex, an intellectual amplifier that he sketched in his 1945 “As We May Think,” though Bush’s mix of electro-mechanical and electronic components gave way to a purely electronic digital computer as the central intellectual engine. That computer would use its immense speed to shoulder all the brute-force clerical work involved in any scientific or technical project. People would be unshackled from that drudgery, freed to spend all of their attention on forming hypotheses, building models, and setting goals for the computer to carry out. Such a partnership would provide tremendous benefit to researchers such as himself, of course, but also to national defense, by helping American scientists stay ahead of the Soviets.
Vannevar Bush’s Memex, an early concept for an automated information retrieval system to augment intellectual power
Soon after this Damascene encounter, Lick brought his new devotion to interactive computing to a new position, at a consulting firm run by his old colleagues, Bolt and Beranek. As a sideline from their academic physics work, the two had dabbled with consulting projects for years, reviewing, for instance, the acoustics of a movie house in Hoboken, New Jersey. Landing the acoustics analysis for the new United Nations building in New York City, however, brought them a slew of additional work, and so they decided to leave MIT and consult full-time. Having acquired a third partner in the meantime, architect Robert Newman, they now went by Bolt, Beranek and Newman (BBN). By 1957 the firm had grown to a mid-sized operation with dozens of employees, and Beranek felt that they risked saturating the market for acoustics work. He wanted to extend their expertise beyond sound to the full range of interaction between humans and the built environment, from concert halls to automobiles, across all the senses. And so, naturally, he sought out his old colleague Licklider, and recruited him on generous terms as the new vice-president of psychoacoustics. But Beranek had not reckoned with Licklider’s wild enthusiasm for interactive computing. Rather than a psycho-acoustics expert, he had acquired… not a computer expert, exactly, but a computer evangelist, eager to bring others to the light. Within the year, he had convinced Beranek to lay out tens of thousands of dollars to buy a computer, a meager little thing called the LGP-30, made by the defense contractor Librascope. Having no engineering expertise himself, he brought on another SAGE veteran, Edward Fredkin, to help configure the machine. Despite the fact that the computer did little but distract Licklider from his real work while he tried to learn to program it, he convinced the partners to put down still more money3 to buy a much better computer a year and a half later: DEC’s brand new PDP-1. Licklider sold B, B, and N on the idea that digital computing was the future, and that somehow, sometime, their investment in building expertise in the field would pay off. Shortly thereafter, Licklider, almost by accident, found himself in the perfect position for spreading the culture of interactivity across the country, as head of a new government computing office.
ARPA
In the Cold War, every action brought its reaction.
Just as the first Soviet atomic bomb had spurred the creation of SAGE, so did the first Soviet satellite in orbit, launched in October 1957, trigger a flurry of responses from the American government. All the more so because, while the Soviets had trailed the U.S. by four years in exploding a fission weapon, in rocketry they seemed to have leaped ahead, beating the Americans to orbit (by about four months, as it turned out). One of the responses to Sputnik was to create, in early 1958, an Advanced Research Projects Agency (ARPA) within the Defense Department. In contrast to the more modest sums available for civilian federal science funding, ARPA was given an initial budget of $520 million, three times the budget of the National Science Foundation, which had itself been tripled in size in response to Sputnik. Though given a broad charter to work on any advanced projects deemed fit by the Secretary of Defense, it was initially intended to focus on rocketry and space – a vigorous answer to Sputnik. By reporting directly to the Secretary of Defense, ARPA was to rise above debilitating and counterproductive inter-service rivalries and develop a unified, rational plan for the American space program. But in fact, all of its projects in that field were soon stripped away by rival claimants4: the Air Force had no intention of giving up control over military rocketry, and the National Aeronautics and Space Act, signed in July 1958, created a new civilian agency to take over all non-weaponized ventures into space. Having been created, however, ARPA found reasons to survive, acquiring major research projects in ballistic missile defense and nuclear test detection. But it also became a general workshop for pet projects that the various armed services wanted investigated. Intended to be the dog, it had instead become the tail. The first foray by ARPA into computing was, in a sense, busy work. In 1961, the Air Force had two idle assets on its hands and needed something for them to do. As the first SAGE direction centers neared deployment, the Air Force had brought on the RAND Corporation, based in Santa Monica, California, to train personnel and to equip the twenty-odd computerized air defense centers with operational software. RAND spun off a whole new entity, System Development Corporation (SDC), just to handle this task. SDC’s newly acquired software expertise was a valuable resource for the Air Force, but SAGE was winding down and they were running out of work to do. The Air Force’s second idle asset was a (very expensive) surplus AN/FSQ-32 computer which had been requisitioned from IBM for SAGE but turned out to be unneeded. The Department of Defense solved both problems by assigning ARPA a new research task, command-and-control, to be inaugurated with a $6 million grant to SDC to study command-and-control problems using the Q-32. ARPA soon decided to regularize this research program as part of a new information processing research office. Around the same time, it had also received a new assignment to create a program in behavioral science. For reasons that are now obscure, ARPA leadership decided to recruit J.C.R. Licklider to oversee both programs. The idea may have come from Gene Fubini, director of research for the Department of Defense, who would have known Lick from his time working on SAGE. Like Beranek, Jack Ruina, then head of ARPA, had no idea what he was in for when he brought Lick in for an interview.
He thought he was getting a behavioral science expert with a dash of computing knowledge on the side. Instead he got the full force of the man-computer symbiosis vision. Computerized command-and-control required interactive computing, Licklider argued, and thus the primary thrust of ARPA’s command-and-control research program should be to push forward the cutting edge of interactive computing. And to Lick that meant time-sharing.
Time-Sharing
Time-sharing systems originated with the same basic principle as Wes Clark’s TX series: computers should be convenient for the user. But unlike Clark, the proponents of time-sharing believed that a single computer could not be used efficiently by a single person. A researcher might sit for several minutes pondering the output of a program before making a slight change and re-running it. During that interval the computer would have nothing to do, its great power going to waste, at great expense. Even the hundred-millisecond intervals between keystrokes loomed as vast gulfs of wasted time for the computer, in which thousands of computations could have been performed. All of this processing power need not go to waste, if it could instead be shared among many users. By slicing up the computer’s attention so that it could serve each user in turn, the computer designer could have his cake and eat it – provide the illusion of an interactive computer completely at the user’s command, without wasting most of the capacity of a very expensive piece of hardware. The concept was latent in SAGE itself, which could serve dozens of different operators simultaneously, each monitoring his own sub-sector of airspace. After meeting Clark, Licklider immediately saw the potential to combine the shared user base of SAGE with the interactive freedom of the TX-0 and TX-2 into a potent new mix, and this formed the basis of his advocacy for man-computer symbiosis, which he proposed to the Department of Defense in a 1957 paper entitled “The Truly Sage System, or Toward a Man-Machine System for Thinking.” In that paper he described a computer system for scientists very similar in structure to SAGE, with a light-gun input, and “simultaneous (rapid time-sharing) use of the machine computing and storage facilities by many people.” Licklider, though, lacked the engineering chops to actually design or build such a system. He managed to learn the basics of programming at BBN, but that was as far as his skills went. The first person to reduce time-sharing theory to practice was John McCarthy, an MIT mathematician. McCarthy wanted constant access to a computer in order to craft his tools and models for manipulating mathematical logic, the first steps, he believed, towards artificial intelligence. He put together a prototype in 1959, consisting of an interactive module bolted onto the university’s batch-processing IBM 704 computer. Ironically, this first “time-sharing” installation had only one interactive console, a single Flexowriter teleprinter. By the early 1960s, however, the MIT engineering faculty as a whole had become convinced that they should invest wholesale in interactive computing. Every student and faculty member with an interest in programming who got their hands on it got hooked. Batch-processing made very efficient use of the computer’s time, but could be hugely wasteful of the researcher’s – the average turnaround time for a job on the 704 was over a day.
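The core scheduling trick behind time-sharing can be sketched in a few lines of modern Python – a deliberate simplification with invented names, not the actual CTSS supervisor: give each active user a short slice of processor time in turn, and as long as the whole cycle comes around faster than a person can notice, every user appears to have a machine of their own.

from collections import deque
from dataclasses import dataclass

QUANTUM_MS = 200  # each user's turn: a fraction of a second (number chosen purely for illustration)

@dataclass
class Job:
    user: str
    remaining_ms: int  # compute time this user's program still needs

    def run_for(self, quantum_ms: int) -> bool:
        """Simulate running this job for one quantum; return True when it finishes."""
        self.remaining_ms -= quantum_ms
        return self.remaining_ms <= 0

def supervisor(jobs: deque) -> None:
    """Toy round-robin supervisor: serve each active user briefly, then move on."""
    while jobs:
        job = jobs.popleft()
        if job.run_for(QUANTUM_MS):
            print(f"{job.user}: job complete")
        else:
            jobs.append(job)  # not finished: back to the end of the line

supervisor(deque([Job("corby", 600), Job("shannon", 300), Job("licklider", 900)]))

A real supervisor of the era also had to swap user programs in and out of the machine’s small core memory and juggle terminal input and output, complications this sketch ignores entirely.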
A university-wide committee formed to study a long-term solution to the growing demand for computing resources at MIT, and time-sharing advocates predominated. Clark fought a fierce rearguard action, arguing that the move to interactivity should not mean time-sharing. As a practical matter, he argued that time-sharing meant sacrificing interactive video displays and real-time interaction, crucial features of the projects he had been working on with the MIT biophysics lab. But more fundamentally, Clark seemed to have a deep philosophical resistance to the idea of sharing his workspace. As late as 1990, he refused to connect his computer to the Internet, and stated outright that networks “are a mistake” and “don’t work.”5 He and his disciples formed a sub-sub-culture, a tiny offshoot within the already eccentric academic culture of interactive computing. But their arguments in favor of small, unshared computer workstations did not find purchase with their colleagues.6 Given the cost of even the smallest individual computer at the time, such an approach seemed economically infeasible to the other engineering faculty. Moreover, most assumed at that time that computers – the intellectual power plants of a dawning information age – would benefit from economies of scale, in the same way that physical power plants did. In the spring of 1961, the final report of the long-range study committee sanctioned large-scale time-sharing systems as the way of the future at MIT. By that time, Fernando Corbató, known to colleagues as “Corby,” was already working to expand the scope of McCarthy’s little experiment. A physicist by training, he had learned about computers working on Whirlwind in 1951, as a grad student at MIT.7 After completing his doctorate he became an administrator for MIT’s newly formed Computation Center, built around the IBM 704. Corbató and his team (initially Marge Merwin and Bob Daley, two of the best programmers in the Center) called their time-sharing system CTSS, for Compatible Time-Sharing System – so-called because it could run simultaneously with the 704’s normal batch-processing operations, seamlessly snatching computer cycles for users as needed. Without this compatibility the project would indeed have been impossible, because Corby had no funding for a new computer on which to build a time-sharing system, and shutting down the existing batch-processing operation was not an option. At the end of 1961, CTSS could support four terminals. By 1963, MIT hosted two copies of CTSS, each on a $3.5 million transistorized IBM 7094 with roughly ten times the memory capacity and processing power of its 704 predecessor. The system’s supervisor software passed through the active users in a roughly round-robin fashion8, servicing each for a fraction of a second before moving on to the next. Users could store programs and data in their own private, password-protected area in the computer’s disk storage, for later use.9
Corbató in his trademark bow-tie, in the IBM 7094 computer room
Each computer could serve roughly twenty terminals. That was enough not only to support a couple of small terminal rooms, but also to begin spreading access to the computer out across Cambridge. Corby and other key individuals had office terminals, and, at some point, MIT began providing home terminals to technical personnel so that they could do system maintenance at odd hours without having to come onto campus.
All of these early terminals consisted of a typewriter with some modifications to support reading from and writing to a telephone line, plus a continuous feed of perforated paper instead of individual sheets. Modems connected the terminals via the telephone system to a private exchange on the MIT campus, through which they could reach the CTSS computer. The computer thus extended its sensory apparatus over the telephone, with signals that went from digital to analog and back. This was the first stage in the integration of computers into the telecommunications network. The mixed state of AT&T with respect to regulation facilitated this integration. The core network was still regulated, and required to provide private lines at fixed rates, but a series of FCC decisions had eroded the company’s control over the periphery, and so it had very little say over what was attached to those lines. MIT needed no permission for its terminals.
A typical mid-1960s computer terminal, the IBM 2741.
The goal of Licklider, McCarthy, and Corbató had been to increase the availability of computing power to individual researchers. They had chosen the means, time-sharing, for purely economic reasons – no one could imagine buying and maintaining a computer for every single researcher at MIT. But this choice had produced unintended side-effects, which could never have been realized within Clark’s “one man, one machine” paradigm. A common file area and cross-links between user accounts allowed users to share, collaborate, and build on each other’s work. In 1965, Noel Morris and Tom Van Vleck facilitated this collaboration and communication with a MAIL program that allowed users to exchange messages. When a user sent a message, the program appended it to a special mailbox file in the recipient’s file area. If a user’s mailbox file had any contents, the LOGIN program would indicate it with the message “YOU HAVE MAIL BOX.” The contents of the machine itself were becoming an expression of the community of users, and this social aspect of time-sharing became just as prized at MIT as the initial premise of one-on-one interactive use.
Seeds Planted
Lick, having accepted ARPA’s offer and left BBN to take command of ARPA’s new Information Processing Techniques Office (IPTO) in 1962, quickly set about doing exactly what he had promised – focusing ARPA’s computing research efforts on spreading and improving time-sharing hardware and software. He bypassed the normal process of waiting for research proposals to arrive on his desk, to be authorized or rejected, instead going into the field himself and soliciting the research proposals he wanted to authorize. His first step was to reconfigure the existing SDC command-and-control research project in Santa Monica. Word came down to SDC from Lick’s office that they should curtail their work on command-and-control research, and instead focus their efforts on turning their surplus SAGE computer into a time-sharing system. According to Lick, the basic substrate of time-shared man-machine interaction had to come first, and command-and-control would follow. That this prioritization aligned with his own philosophical interests was a happy coincidence. Jules Schwartz, a SAGE veteran, architected the new time-sharing system.
Like its contemporary, CTSS, it became a virtual social space, including among its commands a DIAL function for direct text messaging between on-line users, as can be seen in this example exchange between John Jones and a user identified by the number 9:
DIAL 9 THIS IS JOHN JONES, I NEED 20K IN ORDER TO LOAD MY PROG
FROM 9 WE CAN GET YOU ON IN 5 MINUTES.
FROM 9 GO AHEAD AND LOAD
Next, to provide funding for the further development of time-sharing at MIT, Licklider found Robert Fano to lead his flagship effort: Project MAC, which lasted into the 1970s.10 Though the designers initially hoped that the new MAC system would support 200 simultaneous users or more, they had not reckoned with the ever-escalating sophistication and complexity of user software, which easily consumed all improvements in hardware speed and efficiency. When launched at MIT in 1969, the system could support about 60 users on its two central processing units (CPUs), roughly the same number per CPU as CTSS. However, the total community of users was much larger than the maximum active load at any given time, with 408 registered users in June 1970.11 Project MAC’s Multics system software also embodied several major advances in design, some of which are still considered advanced features in today’s operating systems: a hierarchical file system with folders that could contain other folders in a tree structure; a hardware-enforced distinction between execution in user and system mode; dynamically linked programs that could pull in software modules as needed during execution; and the ability to add or remove CPUs, memory banks, or disks without bringing down the system. Ken Thompson and Dennis Ritchie, programmers on the Multics project, later created Unix (a pun on the name of its predecessor) to bring some of these concepts to simpler, smaller-scale computer systems. Lick planted his final seed in Berkeley, at the University of California. Project Genie12, launched in 1963, begat the Berkeley Timesharing System, a smaller-scale, more commercially-oriented complement to the grandiose Project MAC. Though nominally overseen by certain Cal faculty members, it was graduate student Mel Pirtle who really led the time-sharing work, aided by other students such as Chuck Thacker, Peter Deutsch, and Butler Lampson. Some of them had already caught the interactive computing bug in Cambridge before arriving at Berkeley. Deutsch, son of an MIT physics professor and the prototypical computer nerd, implemented the Lisp programming language on a Digital PDP-1 as a teenager before arriving at Cal as an undergrad. Lampson, for his part, had programmed on a PDP-1 at the Cambridge Electron Accelerator as a Harvard student. Pirtle and his team built their time-sharing system on an SDS 930, made by Scientific Data Systems, a new computer company founded in 1961 in Santa Monica.13 SDS back-integrated the Berkeley software into a new product, the SDS 940. It became one of the most widely used time-sharing systems of the late 1960s. Tymshare and Comshare, companies that commercialized time-sharing by selling remote computer services to others, bought dozens of SDS 940s for their customers to use. Pirtle and his team also decided to try their hand in the commercial market, founding Berkeley Computer Corporation (BCC) in 1968, but BCC fell into bankruptcy in the 1969-1970 recession.
Much of Pirtle’s team ended up at Xerox’s new Palo Alto Research Center (PARC), where Thacker, Deutsch and Lampson contributed to landmark projects such as the Alto personal workstation, local networking, and the laser printer.
Mel Pirtle, center, with the Berkeley Timesharing System
Of course, not every time-sharing project of the early 1960s sprang from Licklider’s purse. News of what was happening at MIT and Lincoln Labs spread through the technical literature, conferences, academic friendships, and personnel transfers. Through these channels other, windblown seeds took root. At the University of Illinois, administrators of the Control Systems Lab pivoted out of their reliance on classified defense work by developing the PLATO interactive education system. Clifford Shaw created the JOHNNIAC Open Shop System (JOSS), which the Air Force funded in order to improve the ability of RAND employees to perform quick numerical analyses.14 The Dartmouth Time-Sharing System had a direct connection to events at nearby MIT, but was otherwise the most exceptional, being a purely civilian-funded effort sponsored by the National Science Foundation, on the basis that experience with computers would be a necessary part of a general education for the next generation of American leaders. By the mid-1960s, time-sharing had not taken over the computing ecosystem. Far from it. Traditional batch-processing shops predominated in sales and use, especially outside university campuses. But it had found a niche.
Taylor’s Office
In the summer of 1964, some two years after arriving at ARPA, Licklider moved on again, this time to IBM’s research center north of New York City. For IBM, shocked to have lost the Project MAC contract to rival computer maker General Electric after years of good relations with MIT, Lick would provide some in-house expertise in a trend that seemed to be passing it by. For Lick, the new job offered an opportunity to convert the ultimate bastion of conventional batch computing to the new gospel of interactivity.15 He was succeeded as head of IPTO by Ivan Sutherland, a young computer graphics expert, who was succeeded in turn, in 1966, by Robert Taylor. Licklider’s own 1960 “Man-Computer Symbiosis” paper had made Taylor a convert to interactive computing, and he came to ARPA at Lick’s recommendation, after a stint running a computer research program at NASA. In personality and background he was cast in Licklider’s mold, rather than Sutherland’s. A psychologist by training and no technical expert in computer engineering, he compensated with enthusiasm and clear-sighted leadership. One day in his office, shortly after he took over the IPTO, a thought dawned on Taylor. There he sat, with three different terminals, through which he could connect to the three ARPA-funded time-sharing systems in Cambridge, Berkeley, and Santa Monica. Yet they did not actually connect to one another – he had to intervene physically, with his own mind and body, to transfer information from one to the other.16 The seeds sown by Licklider had borne fruit. He had created a social community of IPTO grantees that spanned many computing sites, each with its own small society of technical experts gathered around the hearth of a time-sharing computer. The time had come, Taylor thought, to network those sites together.
Their individual social and technical structures, once connected, would form a kind of super-organism, whose rhizomes would span the entire continent, reproducing the social benefits of time-sharing on the next higher scale. With that thought began the technical and political struggle that would give birth to ARPANET.
Further Reading
Richard J. Barber Associates, The Advanced Research Projects Agency, 1958-1974 (1975)
Katie Hafner and Matthew Lyon, Where Wizards Stay Up Late: The Origins of the Internet (1996)
Severo M. Ornstein, Computing in the Middle Ages: A View From the Trenches, 1955-1983 (2002)
M. Mitchell Waldrop, The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal (2001)

Steam and Electricity, Part 1: Electric Light

So, steam power had by the last third of the nineteenth century wrought revolutions in mining, manufacturing, and transportation on land, the rivers, and the oceans. That would seem to be enough. But the inventors of the nineteenth century would wrest yet one more revolution from steam, by generating from it electric light, and then electric power. The dream of electric power began in the 1830s. A fever for electricity and its marvels swept Europe in response to the discoveries and demonstrations of the likes of William Sturgeon and Michael Faraday. The electric battery had existed already for decades; it could amuse and amaze, but had not found much practical use. The appearance of electromagnets and electric motors promised to change all that, by converting the electrical power of the battery cell into mechanical work. Enthusiasts painted a phantasmagorical picture of a coming electric age that would supplant belching steam power with the quiet whirr of electricity.[1] Nicholas Callan, an Irish professor of natural philosophy and regular contributor to Sturgeon’s Annals of Electricity, argued that with zinc batteries and electromagnetic engines,
…an electro-magnetic engine as powerful as any of the steam engines on the Kingstown Railway, may be constructed for the sum of £250; secondly, that the weight of such an engine will not exceed two tons; thirdly, that the annual expense of working and repairing it will not be more than £300. If my calculations be correct, the expense of propelling the railway carriages by electro-magnetism, will be scarcely one fourth of the cost of steam.[2]
James Joule, later famous as one of England’s most prominent physicists, but employed at the time as the manager of his family’s brewery, initially shared these enthusiasms. He wrote in 1839,
I can hardly doubt that electro-magnetism will ultimately be substituted for steam to propel machinery. If the power of the engine is in proportion to the attractive force of its magnets, and if this attraction is as the square of the electric force, the economy will be in the direct ratio of the quantity of electricity, and the cost of working the engine may be reduced ad infinitum.[3]
Yet it fell to Joule himself to burst this bubble decisively with the sharp tools he and his contemporaries had developed within the newly burgeoning science of energy. By realizing that in making electric power a battery must consume some metal, and by measuring the amount of work that could be produced by a given amount of that metal (typically zinc at the time), it was possible to show that the batteries of the time could never supplant coal. A given mass of zinc generated less work than the same mass of coal, despite costing twenty times more.[4]
Joule in later years [Henry Roscoe, The Life & Experiences of Sir Henry Enfield Roscoe (Macmillan: London and New York, 1906), 120].
There still remained, however, the possibility that the electric circuit could do something wholly new that had no steam-powered equivalent. The first such application to come to light was electro-plating, the use of an electric current to induce a metal in solution (such as gold or silver) to coat another metal object. The Italian chemist Luigi Brugnatelli was the first to demonstrate that this could be done, but the technique did not become widely known and used until the end of the 1830s. The second was electric light.
Arc
In 1808, Humphry Davy—poet, philosopher, inventor, and showman—then at the height of his fame as a public lecturer at the Royal Institution, gave a lecture in which he demonstrated the power of electricity to create a bright and persistent light:
When small pieces of charcoal from the willow, that had been intensely ignited, were acted upon by Voltaic electricity in a Torricellian vacuum… from the charcoal a flame seemed to issue of a most brilliant purple, and formed, as it were, a conducting chain of light of nearly an inch in length…[5]
The flow of electricity created a glowing arc as it leapt the gap between the two pieces of charcoal. A year later he repeated the experiment in air, not a vacuum, with a battery four times larger (consisting of 2,000 cells). According to one observer, “[t]he spark, the light of which was so intense as to resemble that of the sun, struck through some lines of air, and produced a discharge through heated air of nearly three inches in length, and of a dazzling splendour.”[6]
A contemplative Humphry Davy, perhaps concocting some lines of romantic verse. [Thomas Phillips, National Portrait Gallery, London]
The phenomenon made for a brilliant demonstration; Davy’s audiences, eager for sensible displays of the power of science, loved this kind of electrical parlor trick. But no one thought of it as a practical form of artificial light. The battery drew down its charge quickly and the charcoal burned itself away under the heat of the arc. Soon enough, between the weakening current and the shrinking charcoal, the gap grew too large to be bridged by the current, and the arc failed. Even were that not the case, a battery with hundreds or thousands of cells, each with its own four-inch-square metal plates, was far too costly for everyday use. Real progress towards electric light did not begin until the 1840s. Inventors in France and Britain developed lamps with hard coke rods that burned more slowly and evenly than charcoal, and regulator mechanisms using an electromagnet to force the rods closer together whenever the current weakened, maintaining the correct gap. With these features, as well as improved battery cells, arc lights could burn continuously for hours, and found use as novelty lighting for hotel lobbies and theater special effects; the rising sun for the opera “Le Prophète,” for example.[7]
Two early arc lamp designs [Henry Schroeder, History of Electric Light (Washington: Smithsonian Institution, 1923), 21].
Other inventors developed still better regulators in the 1850s, but the expense and short life of the batteries remained insurmountable barriers to wider use.
Dynamo
The answer lay with steam power. Far from striking down coal and inaugurating a new era of clean energy, electric power would become successful only by partnering with steam. The fact that motion could create an electric charge had been known for millennia. The very concept of electricity was named after amber (elektron in Greek), because that material would attract objects after being rubbed. But to create a machine that could efficiently transform the motion of a steam engine into a usable current, an effective generator, was another matter. In 1820, Hans Oersted showed that an electric current could exert a mechanical force on a magnet. In the early 1830s, Michael Faraday then showed the reverse: that a magnet could induce a current.
His generator, consisting of a metal disk spinning between the arms of a magnet, produced a weak current across the disk, capable of little more than making the needle of a galvanometer jump. Similar generators, called magnetos, went through two decades of incremental improvement without seeing much use outside the laboratory, except for a few sold to the electroplating industry. But they did demonstrate that rotary motion (such as from a steam engine) could be used to generate a current.[8]
Faraday’s experimental magneto [Henry Schroeder, History of Electric Light (Washington: Smithsonian Institution, 1923), 8].
In the mid-1850s, Frederick Holmes, a London chemistry professor, constructed a magneto with an armature of six disks, each mounted with coils of wire on its perimeter, that spun between seven banks of magnets, and showed that it could power an arc lamp. Holmes believed that his new device could replace oil lamps in England’s lighthouses, and petitioned Trinity House, the organization responsible for their oversight, to try it out. With the encouragement of Faraday, their scientific advisor, the Elder Brethren of that house agreed to trial an arc light powered by a magneto of Holmes’ design weighing more than five tons, which was driven in turn by a three-horsepower steam engine. The expensive, bulky, and sometimes balky apparatus did not take the lighthouse world by storm, but it provided the first glimpse of the potential for a fruitful union between steam and electricity.[9] In France, a company formed to develop arc lighting, the Société l’Alliance, made further advances. A researcher at the Conservatoire National des Arts et Métiers discovered through experimentation that the magneto wasted much of its output in sparks from the commutator (typically a metal brush) that converted the alternating current of the spinning magneto into the familiar unidirectional current of a battery-powered circuit. By removing the commutator to make an alternating current generator, Alliance achieved much greater efficiency and had more success in selling their instruments to French lighthouses than Holmes had to British ones. An Alliance arc light shone forth from Port Said at the Mediterranean entrance of the Suez Canal when it opened in 1869.[10]
The 1871 ring dynamo of Zénobe Gramme [Henry Schroeder, History of Electric Light (Washington: Smithsonian Institution, 1923), 28].
But the true leap forward for practical arc lighting—and practical electric power more generally—came with the self-exciting dynamo, created independently in 1866 by Charles Wheatstone and Samuel Varley in England and Werner von Siemens in Berlin. Up to this point, magnetos had spun their moving element within the field of one or more permanent magnets to induce a current. But the dynamo used permanent magnets only as a pilot light to ignite much more powerful electromagnets: it diverted some of the current generated by the spinning armature to electromagnetic coils in the surrounding stator, which in turn induced a far stronger current in the main circuit. Tests by England’s Trinity House in the 1870s showed that a Siemens dynamo weighed about one-thirtieth as much as a Holmes magneto, while producing four times as much light per horsepower.[11]
System
Two obstacles still stood in the way of the widespread use of arc lighting.
First, because they relied on an electromagnet wired into the circuit to regulate the spacing of the arc, only one lamp could be placed on the circuit of any one generator; otherwise, variations in the current caused by one lamp would disrupt the control mechanisms on the others. Second, the lights simply didn’t burn long enough; they could not get through an entire night without someone shutting off the circuit to replace the carbons. Pavel Yablochkov, a retired Russian Army engineer living in Paris, solved the first of these problems with his “candles.” Rather than placing the carbons vertically, he set them side by side, with an insulator in between to prevent an electric connection except at the tip where the arc was produced. This eliminated the need for a regulator to maintain spacing and therefore allowed wiring many lamps together. Yablochkov (or Jablochkoff) candles were used for public illumination in Paris and London in the late 1870s, powered by a further refinement to the dynamo devised by the Belgian Zénobe Gramme.[12] Charles Brush, an American, combined the improved generators coming out of Europe with a long-lasting and reliable arc lamp design that finally brought electric lighting into widespread commercial use. Brush worked a day job in Cleveland trading iron ore on the Great Lakes while inventing in his spare time in the workshop of his friend’s telegraph supply company. As others had done decades before him, he used an electromagnet to regulate the distance between the electrodes of the arc, but he added a “ring clutch” which could feed out a long carbon rod in small increments each time the current weakened, like the lead of a mechanical pencil. He also found that rods made of a different kind of coke, derived from petroleum refining, and then electro-plated with copper, could be drawn longer and thinner than traditional carbon rods, for a longer burn. This allowed his lamps to provide about eight hours of steady light, then sixteen when he created a dual-carbon lamp.[13]
A dual-carbon Brush arc lamp [Smithsonian Institution].
A key early client was Philadelphia businessman John Wanamaker, who operated the Grand Depot, one of the first “department stores,” which would sell you almost everything under a single roof. On Christmas Day 1878, he threw the switch on twenty-eight new Brush lamps, powered by six generators. Three years later, he collaborated with other Philadelphia grandees to bring Brush lighting to the city’s streets.
A brick-built power station near City Hall, equipped with eight forty-five-horsepower steam engines, each with its own dynamo, powered forty-nine arc lights set on red-painted iron poles along Chestnut Street from the Delaware to the Schuylkill.[14] Detail from the cover of Scientific American, April 02, 1881, showing a Brush power plant, Brush lamps, and Brush lights illuminating a New York City street. Brush’s electric light provides an opportunity to reflect on how much the steam engine’s technological role had changed over the previous century. From a free-standing power source for simple mechanical pumps, it had evolved into an embedded component of complex technological systems consisting of many interconnected and interdependent innovations: steamships, factories, railroads, and now city lighting, with still more complex electrical power systems to come. The steam engine had become a kind of mechanical mitochondrion, a life form captured and put to use to drive the workings of a still more complex organism, in many cases a pre-existing one (water-powered textile factories and horse-drawn railways, for example). These organisms could not succeed without the evolution of their component parts (engines, dynamos, lamps and circuits, in the case of Brush’s electric light) to a point where they could work in harmony with sufficient economy and simplicity to make the integrated whole of practical use. Having achieved that point, electric arc lighting systems spread across the public spaces of the cities of North America, Europe, and even as far away as India and Australia, and everywhere they went they dazzled the public with their brilliant white light. When the town of Wabash, Indiana, mounted Brush lights atop its courthouse in 1880, a correspondent from the Chicago Tribune reported a nigh-religious response: [p]eople stood overwhelmed with awe, as if in the presence of the supernatural… Men fell on their knees, groans were uttered at the sight and many were dumb with amazement. We contemplated the new wonder in science as lightning brought down from the heavens.[15] Gas This was not the first time in living memory that the public had witnessed the dawn of a new era in public illumination. Prior to electricity, coal gas lamps had been the cutting-edge lighting technology of the nineteenth century. Gas lamps burned the fumes emitted from coal when it was cooked in air-free retorts: a toxic but flammable mix of methane, carbon monoxide, hydrogen and other gases. Natural philosophers had discovered that coal could be distilled into a flammable gas as early as the seventeenth century, but it was first developed into a commercial light source in the first decade of the nineteenth.[16] Illustration from 1821 of a retort house where coal was cooked to make illuminating gas. Factories were early adopters of the technology, which allowed them to operate long into the night, especially in the short days of a Northern European winter, and thus get more use out of their expensive machinery. Just as the steam engine had worn down the distinctions between seasons that determined the ebb and flow of water power, gas illumination eroded the ancient and powerful distinction between night and day more rapidly than any event since the taming of fire. 
We may consider the demand for artificial light partly a natural result of humanity’s aversion to darkness, yet it was also partly a byproduct of modernity: the rise of capital-intensive, indoor industry and office work that depended on reading and writing created more work that could be done after sunset and more financial incentive to do it. Among the earliest uses was at the cotton mill of George Lee in Salford, near Manchester, lit in 1805 by fifty gas lamps installed by Boulton and Watt, under the supervision of the same William Murdoch who had developed the sun-and-planet gear for that firm over twenty years earlier.[17] By mid-century, gas fumes were being stored in tanks and then piped out to factories, stores, street lights, offices, and wealthier homes in most of the major cities of the West. A gas lamp provided brighter light than a candle or oil lamp at lower marginal cost (once the original installation cost was defrayed) and with less risk of fire (since it was attached to fixed pipes which could not tip over). By the early 1880s, however, arc lighting was rapidly supplanting gas for public and commercial illumination: city streets, department stores, amusement parks, factories, and more. A reporter present at the lighting of Chestnut Street in Philadelphia noted that the existing public lighting appeared “yellow, dim, and sickly” by comparison, and the electric light could be cheaper even than gas.[18] Brush’s success drew competitors who copied and improved upon his creation, making gas still less attractive. Most notable was Elihu Thomson of Philadelphia, who figured out how to make a highly efficient self-regulating dynamo that would maintain a steady current regardless of the number of working lamps, allowing individual lamps to fail or be switched off without the need for bypass circuits or some other compensating resistor.[19] Electric Jablochkoff candles in London, side-by-side with the relatively feeble pre-existing gas lamps. For all of its impressive advantages in brightness, clarity, and cost, however, arc lighting created a spectacle that was entirely unsuited to homes and offices. No one wanted a glaring, hot two-thousand-candlepower arc lamp (about twenty times as bright as a typical modern light bulb) next to their desk or sofa. A different path to electric light would have to be taken in order to domesticate it. Incandescence The phenomenon of electric incandescence had also been known for many decades. An electric current sent through certain materials, such as a strip of platinum or rod of carbon, would cause that material to glow with a warm, mild light, of an entirely different character from the dazzling arc. Dozens of inventors throughout the nineteenth century tried to turn this effect into a practical electric light, but all suffered from the same basic limitation: the incandescent material burned or melted far too quickly to make a useful light source. By 1878, several inventors had made some basic progress toward a practical system of incandescent electric light: Moses Farmer had developed a dynamo and incandescent bulbs that he used to light his own home in Cambridge, Massachusetts in the 1860s. Farmer made little effort to commercialize his home experiment, but his partner William Wallace continued to manufacture Farmer’s dynamo design. In early 1879, Joseph Swan, an English industrial chemist, demonstrated a bulb with a filament of carbonized thread in a vacuum (to prevent the carbon from burning up). 
William Sawyer, another native Yankee like Farmer, also developed a carbon incandescent lamp in a nitrogen-filled bulb, and plans for an electrical distribution system, but his excessive love of alcohol and rash temper made it impossible for him to secure steady partnerships and funding.[20] Thomas Edison, inspired by a demonstration of arc lamps lit by dynamos at Wallace’s factory, launched his own electric light company in 1878. He was already a successful and famous inventor due to his work on the telegraph, telephone, and phonograph, and his reputation alone sufficed to tank the price of gas company stocks when he announced that he had entered the fray. He brought to bear both profound energy and far more capital than any of his rivals, with financial backing from the Western Union telegraph company and J.P. Morgan’s sprawling banking empire.[21] Edison with his phonograph in April 1878, a few months before he embarked on his quest for electric light. At his “invention factory” in Menlo Park, New Jersey, he and his employees made an exhaustive search of materials to find an ideal filament. Everyone knew that long life was crucial, but Edison, already looking beyond the bulb (which he called a “burner,” by analogy to the gas light) to the complete electric system, had a further insight: he wanted a filament of high resistance. Swan and Sawyer had created low-resistance filaments to minimize loss of energy in the circuit to heat, but Edison realized that to effectively distribute electricity across a city, it was more important to minimize the cost of the copper wiring and generators: due to Ohm’s Law, high resistance meant low current, which meant thin and inexpensive wires.[22] Francis Upton and Charles Batchelor, two of Edison’s most trusted employees, carried out a series of experiments on a wide variety of materials: paper, fishing line, cotton thread, lampblack, cardboard, wood shavings of all kinds (from boxwood to spruce), cork, coconut shells, and more, before finally settling on carbonized bamboo as the most effective. It resisted current at hundreds of ohms and proved capable of burning for hundreds of hours without failing.[23] Then, like Brush but at an even more ambitious scale, Edison’s lab built a complete electrical system around his successful bulb. Out of Menlo Park came a new dynamo with a drum-shaped armature, a new vacuum pump design to remove the air from the bulb’s glass envelope as efficiently as possible, screw sockets for securely installing bulbs at any angle, meters, and switches. Newly designed conduits and junction boxes distributed electricity along a “feeder-and-main” system which reduced the cost of copper by sending multiple thin feeder circuits out from the generator to the main circuits that powered the lights, rather than using a single thick trunk line.[24] All of this ingenuity fed into the famous Pearl Street station in downtown New York, chosen because of its proximity to over one thousand existing gas customers whom Edison hoped to convert to electric light. 
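Before the station switches on, it is worth making the economics behind Edison’s high-resistance filament concrete. What follows is a rough sketch of the reasoning in modern notation; the specific numbers are illustrative assumptions, not figures from Edison’s own notebooks. For a lamp run at a fixed supply voltage $V$ with filament resistance $R$, fed through distribution wires of resistance $R_{\text{wire}}$:

$$I = \frac{V}{R}, \qquad P_{\text{lamp}} = V I = \frac{V^2}{R}, \qquad P_{\text{loss}} = I^2 R_{\text{wire}}.$$

A filament of roughly 100 ohms on a 110-volt main draws about 1.1 amperes for some 120 watts of light; a 1-ohm filament delivering the same power would have to run at about 11 volts and 11 amperes. Because the loss in the wires grows with the square of the current, the low-resistance lamp wastes about a hundred times as much power in the same copper, or else demands conductors with roughly a hundred times the cross-section to hold the loss to the same fraction of the total. High resistance, in other words, was what made city-scale distribution affordable in copper.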
When the station switched on in September 1882, among its initial customers were the offices of the New York Times, whose pages praised the light as a vastly superior alternative to gas: …more brilliant than gas and a hundred times steadier… As soon as it is dark enough to need artificial light, you turn the thumbscrew and the light is there, with no nauseous smell, no flicker, and no glare… The light was soft, mellow, and grateful to the eye, and it seemed almost like writing by daylight to have a light without a particle of flicker and with scarcely any heat to make the head ache.[25] In fact, generating the magical glow of the electric lamps required heat, glare, and noxious fumes aplenty, but they were hidden away from the customers in the lower floors of the Pearl Street station, where Babcock & Wilcox boilers fed steam to Armington & Sims engines which, in turn, spun Edison Electric’s “Jumbo” dynamos, named after P.T. Barnum’s famous elephant.[26] In the 1930s, the historian and cultural critic Lewis Mumford identified a divide between the grim steam-and-iron regime of the “paleotechnic” and the clean, bright, and electric “neotechnic.”[27] But to some extent this was a false distinction. Electricity distributed and subdivided steam power and made it invisible, but, contrary to the dreams of the early electric enthusiasts, it did not replace it.[28] A cutaway view of Pearl Street station with steam from the boilers below driving the engines that power the dynamos above. Yet even this is not the whole truth of the relation between electricity and steam. Edison explicitly designed his lighting system as a one-to-one replacement for gas illumination. But his dreams extended far beyond an electrified equivalent of gas lighting to an all-encompassing system of power: “The same wire that brings the light to you,” Edison proclaimed in 1878, long before he even had a working incandescent bulb, “will also bring power and heat. With the power you can run an elevator, a sewing machine or any other mechanical contrivance that requires a motor, and by means of the heat you may cook your food.”[29] Though far from reality in 1878, this vision did indeed come true, and it placed new demands on steam power that would require its reinvention, and the replacement of the century-old reciprocating steam engine with something altogether new.

Tangent: The Automated Dungeon Master

This post is the first in a probable series of ‘tangents’, not part of a continuing series like The Switch or The Backbone. In fact, this particular tangent veers well off of this blog’s normal subject matter, as it deals primarily with fantasy role-playing games. The Magic of Dungeons and Dragons Since I was a young child in the 1980s, I have been fascinated by Dungeons and Dragons (D&D). My older brother at some point had an interest in the game but evidently lost it, and bequeathed to me a smattering of box sets and hardback manuals from Tactical Studies Rules (TSR), the company that produced the game. Most immediately accessible and captivating was the vivid scarlet D&D Basic Set, emblazoned with Larry Elmore’s painting of a warrior with a glowing sword confronting a fearsome dragon, crouched atop its hoard of ill-gotten gains. It’s hard to explain how much excitement and wonder I derived from this little cardboard box and the saddle-stitched rules manuals that it contained. D&D gave structure and consistency to the kind of make-believe play that I, like so many other children, already instinctively engaged in. D&D developed within the culture of competitive miniature wargaming, and can be played as something like a traditional game, with a goal against which victory or defeat is measured. In this style of play, much as in many of the wargames from which it derived, one player is assigned the role of referee (called the Dungeon Master), and given the responsibility for resolving the actions made by the other players. TSR regularly ran tournament scenarios at conventions in this mode. Whatever group of adventurers best survived the tricks, traps, and monsters found within the dungeon and accumulated the most treasure was declared the winner. The truest magic of D&D, and other role-playing games (RPGs)1, however, appears only when played in a second-order mode, where the same players and Dungeon Master (DM) meet up repeatedly (every week, for example), to play one continuous game over the course of many sessions. In keeping with the game’s wargaming roots, this iterated game is known as a campaign. Just as a military campaign consists of an extended series of movements and battles by a single army, so a role-playing campaign consists of repeated excursions into the same imaginary world. There might be a long-term goal by which the players “win” the game, but there can never be a loser. The main point is not pursuing a winning strategy, but experiencing adventures in a place that never was. The DM’s role as referee and adjudicator therefore becomes secondary to his role as world creator and simulator. A portion of the map of the City-State of the Invincible Overlord (1976), one of the first published D&D settings, and already showing the desire to create a fully realized world for players to inhabit. Secondary Worlds And Their Discontents And now you must allow me a tangent within a tangent, to veer off into a discussion of imaginary worlds in literature. After all, the desire to recreate the kind of battles and adventures found in their favorite fantasy literature is what spurred Dave Arneson and Gary Gygax to invent D&D in the first place. The gold standard for imaginative expansiveness in such literature is, of course, J.R.R. Tolkien’s Middle-Earth2. Tolkien spent decades refining the outlines and deepening the detail of the mythos he had created, intending to develop a wholly English equivalent to the Nordic eddas or Greek epics. 
He called the new world that he had brought forth from within his mind a sub-creation (within God’s primary creation) or secondary world. When one reads The Lord of the Rings, one encounters not just a story, but a whole imaginary place in which that story unfolds. Middle-Earth is not, of course, the first imaginary world in fiction. But prior to Tolkien, such places were generally flat and tissue-thin. They existed as a means to the ends of allegory or satire, or as the backdrop to a children’s story with no pretense to realism, like Frank Baum’s Oz with its four symmetrical lands, each overseen by its resident witch and inhabited by a cutesy tribe with monochrome clothes and houses. In Tolkien, by contrast, one finds a land populated with richly developed kingdoms and peoples, each with its own history and culture, its own myths and songs. His stories feel like true tales from a place that never was. J.R.R. Tolkien, “Bilbo woke up early with the sun in his eyes” The creators of D&D immediately began creating their own secondary worlds as settings for the adventures of their players – Gary Gygax developing Greyhawk, and Dave Arneson Blackmoor. As the game’s popularity spread, other DMs followed by creating their own worlds (often drawing from literary inspiration), or based their campaigns on published settings – for example, the phenomenally popular Forgotten Realms, published by TSR in 1987 and based on an imaginary world that Ed Greenwood had been developing since his childhood. Immersion in a rich secondary world of this sort is what transforms a D&D campaign from a fun romp into a vicarious literary tale. Rather than simply reading about the adventures of Frodo, Legolas, Aragorn, and the rest, one can experience such an adventure, exploring first-hand the wonders of a place like Middle Earth. When D&D is played in a tournament setting, it is understood implicitly by all parties that there is nothing beyond the pre-set scenario. One cannot leave the dungeon and decide to go do something else. The only choice players have is in how they will advance through it. A campaign, on the other hand, grants open-ended agency to the players within the DM’s secondary world. Each player can create their own persona, known as a character, to inhabit for the duration of the campaign, be it weeks or months or years. We will call this responsiveness to player agency openness. Some traditional board games, such as Cluedo/Clue, have allowed players to assume the role of a particular character, but they have very little openness, providing only a relatively minuscule decision space in which to operate. In a D&D campaign, a player can choose to take any plausible action, within the context of the world that their character inhabits. Your character comes across a mysterious stranger in the village common? You can strike up an idle conversation, pilfer their purse, start a fight, or attempt to win their affection. Whatever you can think of you can do, and it is left to the DM to decide the results of your actions, with consequences that might ripple out across that village, its surrounding countryside, or even the entire world. And that secondary world, swirling within the luminiferous ether that surrounds and penetrates the minds of players and DM, feels real to the players as their characters explore it, because it is self-consistent. The world persists, though gradually changing, even when the player characters are not around. 
They might leave that village after assaulting the mysterious stranger, and return to find that he has turned the villagers against them with scurrilous lies about their sordid deeds. Or they might find instead that a band of orcs has ransacked the village and killed their favorite barkeep. Over the course of the campaign, players could establish a mercantile empire, build a castle, or even found a kingdom, and the world will respond accordingly. Moreover, the world’s various parts form a harmonious whole across time and space. If the villagers follow a particular set of customs, it is likely that the castle just down the river does as well – or perhaps not, if the castle-dwellers are recent conquerors of the region, Norman seigneurs ruling over Saxon churls. A well-crafted and well-run game world, like Tolkien’s Middle Earth, has the feeling of a lived-in place, with natural contours of culture and politics carved out over the centuries, from the flow of the currents of history across the bedrock of geography. Yes, well-run and well-crafted, there’s the rub. Tolkien had two decided advantages over the poor, put-upon dungeon master in realizing his masterpiece. First, he only had to paint in the details along one path through his creation. Despite his many decades of labor, at his death most regions of his world still consisted of little more than names on a map. While the reader of The Lord of the Rings gets to visit Rivendell, Edoras, and Minas Tirith, she learns nothing about Harlindon, Rhûn or Anfalas, and about those regions Tolkien was nearly as ignorant as his reader. His characters politely steered clear of these uncharted regions. Player characters are rarely so obliging. Second, Tolkien wrote and revised his work over the course of years, and was able to pause for many minutes, or even hours, to consult prior chapters, maps, or other reference materials before deciding what would happen next. Even then, he could return to the same passage and revise his decision, months or years later. A dungeon master, however, must respond to character actions as they come, usually within seconds, or the game session very quickly becomes tedious for his players. Both differences come down to agency – player characters have got it, literary characters don’t. To be able to simulate a self-consistent secondary world on the fly in this fashion is a rare feat of skill and effort. As circumstance demands, the DM must serve as a geographer, demographer, economist, physicist, and more. It strains the muscles of improvisation – how many different personalities and physiognomies can one devise to define the innkeepers of each and every village through which the campaigners pass? To say nothing of the minstrels, mercenaries, miscreants, and so forth? And then, having established the facts of some place, the burden shifts to the memory – just what was the name of that tavern in Elkenburg, again? Meticulous note-taking is a great help, of course. Nonetheless, the effort required grows and grows as the players accumulate a history of people met and places visited.3 The fact is that few are the dungeon masters who can create such verisimilitude in the face of total, open-ended player agency. Instead, most fall back on what is known pejoratively as railroading – preparing events and sites, like Tolkien, along a predetermined right-of-way, and steering the players along it. The rest of the campaign world can serve as a feeble Potemkin village, a mere facade that will collapse at the first touch. 
Railroading gets its bad reputation from deplorable DM actions such as hurling ever more impossible monsters at the characters if they deviate from their intended path – or, as I experienced once as a player in my youth, throwing an invisible and impenetrable force field between them and the rest of the world. But, in practice, it manifests itself far more often as a soft social contract. The players know that the DM has invested hours of preparation into developing a particular scenario, and they go along for the good of everyone involved4. Whether hard or soft, railroading ameliorates the problem of on-the-fly world creation by constraining the players’ agency within a limited sphere of action. But even a soft railroad is a huge investment of time for the DM to prepare and requires considerable skill to execute well. Some DMs opt instead for a “no prep” style of play with neither a world nor a scenario prepared in advance. But to do this successfully also requires its own set of skills – mainly improvisational – and usually a great deal of experience. D&D can indeed be a magical experience, but a would-be player may be hard-pressed to find a DM with the time and talent necessary to truly enchant. That is to say nothing of finding like-minded fellow players. Nothing can spoil a serious fantasy epic more quickly than a so-called friend who can’t help but crack “yo momma” jokes every five minutes. The reality of a D&D session thus often pales in comparison to what one imagines it could be, indeed, should be. Paper Worlds Given the difficulties of dungeon mastery, it did not take long after the invention of D&D for players and game publishers alike to begin looking for ways to capture the magic of that game without the need for a referee – or, in fact, for other players. That is to say, to automate the dungeon master. Wouldn’t it be something to become immersed in an open, self-consistent secondary world, any time, any place, all by yourself – as an active agent, not just a passive reader? For a long time, the most popular and readily accessible way to find such a “mechanical dungeon master” was through a kind of paper flow chart called a gamebook. In 1975, Ken St. Andre, an Arizona State University (ASU) grad in his late twenties, had fallen in love with the idea of D&D, but was unhappy with the complexity of its rules. He therefore decided to publish one of the very first alternative fantasy role-playing games, Tunnels and Trolls. He distributed copies he printed himself at the ASU print shop before finding a publisher, a one-man operation in Scottsdale called Flying Buffalo. Flying Buffalo’s founder, Rick Loomis, had started the company to moderate play-by-mail multiplayer sessions of a game he had invented called Nuclear Destruction, collecting a fee from each participant. He entered the publishing business after he acquired the rights to a card game (unrelated except for its theme) called Nuclear War. Loomis was happy to sell St. Andre’s remaining copies of Tunnels & Trolls, and, when the initial copies sold out quickly, he acquired the license to publish additional print runs himself, under the Flying Buffalo imprint. In the spring of the following year, a player in St. Andre’s Tunnels & Trolls games named Steve McCallister suggested the idea of a solitaire dungeon adventure. He drew his inspiration from the programmed instruction books that were popular at the time, which offered multiple-choice questions, and then had the reader turn to the back for immediate feedback on the chosen answer. 
Loomis liked the idea and wrote the first solitaire role-playing adventure, Buffalo Castle, which he published in May 1976. Buffalo Castle consisted of about 150 paragraphs (each identified by a page number and a letter), each containing a few terse sentences of description and options for the player to continue to one of several other paragraphs5. For example, “You have entered Room Six. There is a large fountain in the middle of the room. You may drink from it if you wish. If you take a drink, go to 8C. If you wish to leave by the north door, go to 4C. If you wish to leave by the east door, go to 16D.” The player character starts off faced with the choice of three doors by which to enter the castle, and proceeds from there. The adventure uses elements of the Tunnels & Trolls rules, especially combat, but forbids the use of magic, and all the complexity that entails. The castle is unforgiving, and most characters will likely die, but a lucky hero might manage to escape with some valuable plunder. Buffalo Castle did not attract much attention, but later in the 1970s, a series of books called Choose Your Own Adventure (CYOA) did, selling hundreds of millions of copies over the ensuing decades. (Though the predecessors of Choose Your Own Adventure date back to the 1960s, the available evidence suggests that Buffalo Castle was developed entirely independently). While Flying Buffalo continued to produce solitaire Tunnels & Trolls adventures, not until 1982 did the D&D-style gamebook break out into a wider market. In that year, Steve Jackson and Ian Livingstone decided to try their hand at fusing a role-playing adventure with the CYOA formula. Jackson and Livingstone were D&D fans whose company, Games Workshop, had served for several years as the game’s European distributor. They did not try to publish their new series themselves, however. Aiming for a wider market than the hobby press, they sold the rights to Puffin Books. The series they created, Fighting Fantasy, blended narrative, exploration, and game in a similar fashion to Buffalo Castle, but in self-contained, mass-market paperbacks. The series was a resounding success, selling three million copies by 1985.6 A whole sub-genre of gamebooks followed, all attempting to emulate the experience of role-playing games, primarily D&D. Successful series included Wizards, Warriors, & You, GrailQuest, Lone Wolf, and Sorcery!7. The CYOA books had lacked any elements of a game, or of a persistent world. You simply made choices that led you deterministically through a garden of forking paths, to one final outcome or another. Fighting Fantasy and its derivatives, on the other hand, reintroduced some or all of the D&D-like elements from Buffalo Castle: player character statistics (such as health and strength); randomness, including dice-based combat; and an inventory of items, which could affect the player’s combat ability. But the new wave of solitaire adventures devoted much more attention to creating a rich secondary world for the player to explore than Buffalo Castle had. Inventory items in Fighting Fantasy, for example, can have other in-game consequences outside combat (a locked door, say, that can be opened – leading to a new paragraph – only if you possess the right key). Rather than brief pamphlets, the newer books were thick paperbacks, typically with 300-400 paragraphs (two or three times more than Buffalo Castle). Each paragraph also consisted of more text, deepening the sense of immersion in a real environment. 
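Structurally, every gamebook of this sort is the same object: a directed graph of numbered paragraphs, each with a short description and a fixed menu of exits. The sketch below, in Python, makes that structure explicit; the paragraphs, text, and numbering are invented for illustration and are not taken from Buffalo Castle or any published gamebook.

```python
# A gamebook reduced to its skeleton: a directed graph of numbered
# paragraphs. Each entry maps a paragraph ID to its text and a list of
# (choice label, destination ID) pairs. All content here is invented.

PARAGRAPHS = {
    "1A": ("You stand before three doors set into the castle wall.",
           [("Open the left door", "2B"), ("Open the right door", "3C")]),
    "2B": ("A fountain bubbles in the middle of a dusty chamber.",
           [("Drink from the fountain", "4D"),
            ("Leave by the north door", "3C")]),
    "3C": ("A corridor stretches away into darkness. Your adventure ends here.",
           []),
    "4D": ("The water restores your strength. A stair leads down.",
           [("Descend the stair", "3C")]),
}

def play(start="1A"):
    """Walk the reader through the paragraph graph until a dead end."""
    current = start
    while True:
        text, choices = PARAGRAPHS[current]
        print(text)
        if not choices:
            return
        for number, (label, _) in enumerate(choices, start=1):
            print(f"  {number}. {label}")
        pick = int(input("> ")) - 1
        current = choices[pick][1]

if __name__ == "__main__":
    play()
```

Adding Fighting Fantasy-style statistics, dice, and an inventory decorates the nodes and edges of this graph, but it does not change the graph’s fundamentally closed shape: every exit a reader will ever take had to be written down in advance.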
Here, for example, is how a dungeon room is described in The Warlock of Firetop Mountain, the first of the Fighting Fantasy books: The locked door bursts open and a nauseating stench hits your nostrils. Inside the room the floor is covered with bones, rotting vegetation and slime. A wild-haired old man, clothed in rags, rushes at you screaming. His beard is long and grey, and he is waving an old wooden chair-leg. Is he simply insane as he appears, or has this been some kind of trap? You may either shout at him to try to calm him down (turn to 263) or draw your sword and attack him (turn to 353). (Spoiler: you should talk to the man, who is full of useful information). The Lone Wolf series took the idea of an ongoing D&D campaign and ran with it, allowing players to take a single character through a continuous story over dozens of books, carrying over skills and items acquired from one book to the next. The entire saga was set in Magnamund, a world devised by the author, Joe Dever, for his D&D games. The apotheosis of the development of the gamebook into an immersive secondary world, however, was Dave Morris and Jamie Thomson’s Fabled Lands series, published in 1995 and 1996, just as the gamebook trade was in decline. The series thus terminated early, after only six of a planned twelve volumes. Rather than a self-contained story arc, each published volume of Fabled Lands covers a region of the world, with its own towns, villages, castles, and wilderness areas to explore. The player can move between books by walking from region to region, by taking a ship, or even by teleportation via magical gates. There are many quests and adventures that one can discover, some of them contained within one book, others spanning the world. But the player is also free to ignore all that, and simply wander around and explore. A system of keywords also allows the world to change in response to the player’s actions. For example, in the first book, The War-Torn Kingdom, you can assassinate a pretender to the throne or help him ascend to the crown (assuming you don’t ignore him altogether). Whichever you choose, you will gain certain keywords which cement your alliances in future interactions with either faction. This is not to mention the sub-systems in the game that let players join a religion, buy houses, acquire ships, and engage in seaborne trade. No one has ever come closer than this to creating a secondary world on paper that grants the kind of full, open-ended agency that a refereed game of D&D can. And yet, it is still well short of the mark. At any given point in a gamebook, the player typically has only two to three different options. Even the richest nexus points in Fabled Lands, such as major cities, rarely have more than a half-dozen choices on offer. These don’t come close to exhausting all the possibilities that an imaginary protagonist in the same situation would have. Consider the raving old man in his fetid room in Firetop Mountain. The player is given only two options – shout at him or run him through. It’s not hard to come up with many more directions that a D&D campaign could branch out to from this decision point. One could, for example, offer the old man a clean set of clothes, attempt to restrain him, or back quickly out of the room and shut the door. If you do choose to talk, Firetop Mountain gives you no ability to direct the conversation and potentially alter how he responds to you. The man provides the same predetermined information every time. 
And there are no lasting consequences of the encounter. In the hands of a DM, the man might end up as an ally who accompanies the player characters through the dungeon, or you might track down his family and return him to the bosom of hearth and home. If you slay the man, that same family might instead track you down; if you anger the man, he might follow you and attempt to steal your treasure. Within a gamebook, agency and openness are more severely curbed than in all but the most infamous of railroads. Digital Worlds Therefore, players began looking instead to computers for an automated dungeon master. Personal computers burst onto the marketplace in the late 1970s, and became a part of most middle-class households in the U.S. by the middle of the 1990s. A computer program could obviously provide dynamic responses to player actions much more easily than a static printed work. It could, in theory, truly simulate a secondary world, without all of the limitations of a paper flowchart, and without any cumbersome keywords or checkboxes. A glimmer of this promise appeared in one of the very first D&D-inspired computer games, Adventure. Will Crowther, an engineer at Bolt, Beranek and Newman (BBN) and creator of some of the foundational software of the ARPANET, wrote the game for BBN’s PDP-10 minicomputer in the mid-1970s. The game had no graphics (very few computer terminals could support them at the time anyway), so all interaction happened in textual form, just like a gamebook. Crowther wrote a parser for the game that accepted two-word commands in the form “verb noun”. You could thus tell the computer, in plain English, what you wanted to do, and it would tell you the consequences of your action. But this was not quite the dream of the digital DM come true. Yes, you could type anything. But most of the time the computer would refuse to understand you. For example, the game starts with the following description: YOU ARE STANDING AT THE END OF A ROAD BEFORE A SMALL BRICK BUILDING. AROUND YOU IS A FOREST. A SMALL STREAM FLOWS OUT OF THE BUILDING AND DOWN A GULLY. The game will accept “ENTER BUILDING”, “DRINK WATER” or “GO SOUTH”, but not “CLIMB TREE”, “SWIM”, “BUILD FIRE”, “HUNT”, “WAIT NIGHT”, etc. Adventure is, in effect, a flowchart in disguise, one that hides its outgoing branches, forcing the player to guess at them instead. It spawned its own genre of computer games, the adventure game. Though some later games, notably Zork, which was co-authored by another member of Crowther’s D&D campaign, provided more sophisticated parsers, none offered a substantial leap in verisimilitude over the gamebook. By the 1980s, the genre was almost entirely divorced from its D&D roots, focusing mainly on puzzles (usually combining inventory objects with the environment in some non-obvious way), rather than exploration, character development, or heroic exploits. The first computer RPGs to incorporate graphics appeared in the mid-1970s, on the PLATO IV system, developed at the University of Illinois, whose mainframe could support hundreds of graphical terminals. Shortly thereafter, similar titles reached a much wider audience on the first personal computers. These games were typically pure dungeon crawlers – taking advantage of the gridded nature of dungeon corridors to simplify the problem of rendering graphics – and focused on combat, the easiest part of D&D to rigidly codify in algorithmic form. 
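To see in miniature why Crowther’s parser offered only an illusion of freedom, here is a sketch in Python of the kind of two-word lookup such a parser ultimately boils down to. The vocabulary and responses are invented for illustration; they are not Crowther’s actual word lists or code.

```python
# A toy "verb noun" parser in the spirit of Adventure. The command table
# below is invented for illustration; anything not in it is rejected,
# which is the hidden flowchart showing through.

RESPONSES = {
    ("enter", "building"): "YOU ARE INSIDE A SMALL BRICK BUILDING.",
    ("drink", "water"): "THE WATER IS COLD AND REFRESHING.",
    ("go", "south"): "YOU ARE IN A VALLEY BESIDE A STREAM.",
}

def parse(command: str) -> str:
    words = command.lower().split()
    if len(words) == 1:
        words = ["go", words[0]]          # treat a bare word as a direction
    if len(words) != 2:
        return "PLEASE USE ONE- OR TWO-WORD COMMANDS."
    return RESPONSES.get((words[0], words[1]), "I DON'T KNOW HOW TO DO THAT.")

print(parse("enter building"))   # recognized
print(parse("climb tree"))       # rejected: not in the hand-built table
```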
Among the best of these early efforts was Wizardry: Proving Grounds of the Mad Overlord, written by two Cornell University students, Andrew Greenberg and Robert Woodhead. Greenberg and Woodhead had access to a PLATO terminal at Cornell, and borrowed heavily from PLATO precursors like Dungeon and Oubliette in developing their game. Released for the Apple II in 1981, Wizardry epitomized one model for the computer RPG – a series of battles in a nameless dungeon, with intermittent rests to recuperate. All of its interest derives from resource management, careful mapping, and combat tactics. There is no hint of a wider setting, and only the barest gesture towards giving meaning and motivation to the player’s actions beyond killing everything in sight. The state of the dungeon does not even persist when the players return to the surface to shop for equipment, refilling instantly with the same monsters and treasure that they contained on the first visit. Another 1981 Apple II release, however, followed a different path, a path toward an immersive, digital secondary world. Ultima was written by a teenage D&D fan from the Houston suburbs named Richard Garriott. It is, in fact, a digital reproduction of the world he had created for his D&D campaign, which he called Sosaria. In the game, the player can receive quests from kings, delve into dungeons, visit towns to buy equipment, learn rumors while drinking in taverns, rescue princesses, acquire a vehicle, and even attempt to steal from the townsfolk. An overarching quest, to acquire four magic gems in order to travel back in time and defeat the evil wizard Mondain, ties the whole together. Ultima is sketchy and weakly cohesive in many places, including a strange interlude where the player takes the controls of a spaceship and battles enemies that look suspiciously like TIE fighters. It is also highly structured and symmetric – each castle and town is identical in structure, and there are four continents, each with two castles, one dungeon, and one landmark. Yet for all that, it’s an incredible achievement, a tiny imaginary world crammed onto two floppy disks by a 19-year-old University of Texas student. Ultima As the Ultima series evolved, and Garriott built a company around its success, Sosaria evolved into Britannia, and the games developed ever greater depth and sophistication in both the richness of their setting and the interactivity and openness of their gameplay. It  culminated in 1992 with Ultima VII, built by a large team of specialized writers, programmers, and artists at Origin Systems in Austin, Texas. Origin put a huge amount of effort into the game’s writing in order to give charm and character to every, well, character, in every corner of the world of Britannia. The landscape is littered with all sorts of side quests, little problems for the player character to solve independent of any progress toward the end of the game: a missing husband imprisoned for the theft of an apple, a thief disguised as a monk, two brothers in a dispute over religious belief. Ultima VII, with its keyword-based dialog system. As Garriott’s series mired itself in the morass of failure that was Ultima IX, the mantle of rich digital secondary worlds was taken up by two landmark games of the late 1990s – Fallout and Baldur’s Gate8. 
These games attempted to have it all, and largely succeeded – detailed and satisfying tactical combat; a wide open world to explore and discover, with friends and enemies to be made depending on the player’s choices; story beats seeded through the game to naturally lead the player toward the conclusion without the feeling of railroading; and, oh yeah, side quests. These are games in which one can immerse oneself like a warm bath, games in which one can wander for hours and hours and still find delightful new surprises: new nooks and crannies to explore, new choices to make, new people to meet. All of this delight, unfortunately, cost the creators of these games a great deal of time and money. In a D&D campaign, all of the richness of the world and its denizens is conjured up gratis in the minds of the players by the spoken words of the DM. In Baldur’s Gate, however, every choice, every possibility offered to the player in the name of openness had to be put there by someone. The game is, in effect, a very elaborate, lovingly illustrated and sound-tracked flowchart. Every temple, every dungeon, every line of dialogue, every character animation, every side quest, came from the toil of artists, writers, and programmers. The cost to provide such things ballooned over the years as the expectations for the visual and auditory richness of games continued to ratchet up, from the simple tiles of Ultima to the hand-painted landscapes of Baldur’s Gate, and beyond. As the lead designer for the latter game said in a recent interview, explaining the cost of a writer’s simple flourish of imagination9: What you write on a page takes ten seconds, but all the resources that have to go into that—modeling, texturing, voiceover, music, all the rest—suddenly that ten seconds of writing becomes tens of thousands of dollars of assets. It is for this reason, combined with the niche appeal of RPGs, that the “Ultima” branch of computer RPGs was largely abandoned after the early 2000s in favor of more cost-effective genres10. There was, however, one more possible approach available for building a secondary world. After all, a computer need not merely ingest data that was already provided to it. What if instead of paying all those writers, modelers, and artists, you got the computer to build the world for itself? Digital Synthesis It is a fairly easy task to get a computer to generate a dungeon maze.11 It’s not much harder to stock it with random monsters and loot, with escalating difficulty and value, respectively, as the player descends to deeper levels of the dungeon. A whole sub-genre was built on these facts, called “Rogue-likes,” after the 1980 game Rogue, which was freely distributed on Unix systems throughout the following decade. Later variations on the theme such as Hack, Moria, NetHack, and Angband also spread across the fringes of nerdom throughout the 1980s. Though sometimes graphics packs were available, by default these games rendered the dungeon in ASCII characters, with letters for monsters, carets for stairs, and an @ for the player character. A NetHack ASCII dungeon The genre exploded into the wider popular culture with the release of Diablo in 1996, which took the basic idea of procedurally generated dungeon environments and loot, and added sophisticated visuals, sound, and a graphical user interface. But the basic appeal of all these games was one-dimensional dungeon-delving. They offered no agency to the player beyond the cycle of kill and loot, and no wider setting to explore. 
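To give a sense of just how little machinery this kind of dungeon synthesis requires, here is a toy generator in Python. It is an illustrative sketch under simple assumptions (rectangular rooms joined by L-shaped corridors, rendered in ASCII), not the algorithm Rogue itself used.

```python
# A toy Rogue-style dungeon generator: carve rectangular rooms out of
# solid rock ("#"), join them with L-shaped corridors, and scatter a few
# monsters (letters) and the player ("@") in the Rogue-like ASCII idiom.
import random

WIDTH, HEIGHT, ROOM_COUNT = 60, 20, 6

def generate(seed=None):
    rng = random.Random(seed)
    grid = [["#"] * WIDTH for _ in range(HEIGHT)]
    centers = []
    for _ in range(ROOM_COUNT):
        w, h = rng.randint(4, 10), rng.randint(3, 6)
        x, y = rng.randint(1, WIDTH - w - 2), rng.randint(1, HEIGHT - h - 2)
        for row in range(y, y + h):
            for col in range(x, x + w):
                grid[row][col] = "."
        centers.append((x + w // 2, y + h // 2))
    # Connect each room to the next with an L-shaped corridor.
    for (x1, y1), (x2, y2) in zip(centers, centers[1:]):
        for col in range(min(x1, x2), max(x1, x2) + 1):
            grid[y1][col] = "."
        for row in range(min(y1, y2), max(y1, y2) + 1):
            grid[row][x2] = "."
    # Drop some inhabitants at room centers, one character each.
    for token in "KEZ@":
        cx, cy = rng.choice(centers)
        grid[cy][cx] = token
    return "\n".join("".join(row) for row in grid)

print(generate(seed=1))
```

Different seeds yield endless floor plans, which is precisely the appeal, and precisely the limitation: the generator can vary the geometry forever, but it has nothing whatever to say about who lives in those rooms or why.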
What if the same basic concepts behind Rogue-like dungeon generation could be applied to an entire world? This was the conceit of the Elder Scrolls series, which launched in 1994 with Arena, followed by the even-more-ambitious Daggerfall in 1996. Rather than generating new content on the fly as players explore, as most Rogue-likes do, the creators of Daggerfall pre-generated the major adventuring sites of the provinces of High Rock and Hammerfell on their own workstations, and then manually tweaked the results. In total, the game contains some four thousand dungeons and five thousand settlements (villages, towns, and cities) that the player can visit, across a total area of some sixty thousand square miles. Despite these astonishing figures, however, most of this vast area is utterly stale and lifeless, from a gameplay point of view. In (roughly contemporary) games like Ultima VII, Fallout, or Baldur’s Gate, exploring the world is a joy, because one never knows what characters, stories, adventures, or other surprises one will find around each corner. In Daggerfall, other than a smattering of random monsters to fight, there is nothing to do or see in the wilderness between dungeons and towns. And there is nothing particularly exciting about discovering a new dungeon or entering a new town either, since one is much like another, only with a different assortment of random monsters or shops. The direction in the game comes from quests (randomly generated against one of several hundred templates) which direct players to a particular house in a particular town to fetch a dingus, or to a particular dungeon to kill five snarks. Unlike the very constricted scope of Rogue-likes, Daggerfall gives the player a whole world in which to exercise their agency, but fails to provide many interesting things to do there. The map of just one of forty-four regions in Daggerfall. Each dot is a dungeon, temple, home, or town. Much more recently, the tremendous growth in the capabilities of machine learning has provided new hope for the dream of a procedurally-generated role-playing experience. A computer that can generate natural language responses to natural language inputs, a pipe dream just a decade ago, now seems near reality. This past year, Nick Walton, a student at Brigham Young University and a D&D fan, saw that these new tools could be used to simulate a dungeon master. The result was AI Dungeon, based on OpenAI’s GPT-2 machine learning model. Trained on gigabytes of input, GPT-2 is uncannily good at producing sensible text by simply predicting the next word that should follow a given list of input words. Walton fine-tuned the model by feeding it transcripts from a variety of games from the Adventure family tree. By doing so he created a kind of meta-Adventure that could branch off in (for all practical purposes) an infinite number of directions. This is a parser that is not mere window dressing for a flow chart. Like a human dungeon master, it generates a novel response to each input from the player. However, there is a painful sacrifice to be made for this gain in openness, for AI Dungeon has only the feeble self-consistency of a dream or hallucination. The game has no model of the world to test its responses against. It is a kind of prism, refracting the player’s words through a textual lens, and then bouncing the result back to the screen. The game will respond without blinking to absurd player actions that make no sense in the context of an adventure in a fantasy realm, e.g. 
“fly to Tahiti” or “write an essay about Dungeons and Dragons.” And, like a river, it will not allow you to step into the same place twice. Here, for example, is my own attempt at an adventure as the wizard Iucounu. Notice how impressive it is until I try to retrace my steps12: You are Iucounu, a wizard living in the kingdom of Larion. You have a staff and a spellbook. You finish your long journey and finally arrive at the ruin you’ve been looking for. You look around and see that it’s indeed a ruined city. The walls are gone, but the buildings still stand tall as if they were built to last forever. The ruins themselves aren’t anything special; there is nothing here except rubble and dust. > enter nearest building You walk towards one of the buildings, which looks like it might be a guild hall or something similar. You open up the door and find yourself inside a large room with several tables filled with people sitting at them. Most of them seem to be mages studying some sort of book. > try a different building You head into another building which doesn’t look too interesting. You go through the front doors and then down a set of stairs. > go back to the guild hall You decide to check out the guild hall since you saw it from afar earlier. It seems like it would be a good place to get information on what’s going on in the rest of the city. > look around As soon as you step inside, you immediately feel very uncomfortable. There are so many people here that it feels like an invasion. They all stare at you and make faces at you. Some even begin to speak loudly about how “we” Hail to the Dungeon Master Because of scaling limitations, models like GPT-2 can consider no more than a few hundred words of previous text in constructing their next output. Google, however, just announced a new type of machine learning mechanism they call “reformer” that can open a much larger window on the past13. Could this, or some other breakthrough, breathe consistency into the phantasmagoria of AI Dungeon? Time will tell. Thus far, however, the effort to enfold the magic of D&D between the covers of a book, or to inscribe it into the electronic memory of a computer, has been a noble failure. Gamebooks and computer games are available at any time of day or night, ready to play. They will always give a consistent experience, and the better ones can provide hours and hours of enjoyment. They are never burned out, tired, or lazy. Their memory or imagination never falters. But without a human mind behind them, they cannot offer truly open-ended agency, at least without sacrificing all self-consistency. Economics plays a role, of course. There is no strict bound to the number of options that could be provided to the player of a computer RPG, given enough time and money – but in practical terms, the limit is quite sharp. Either your actions in the secondary world that the game invokes are strictly limited to the paths the creator has set out for you, or that world is nothing but a fever dream, a never-ending present without past or future. So, for now, we are stuck with the dungeon masters, with all their human foibles. Long may they live. Further Reading Shannon Appelcline, Designers & Dragons: The ’70s (2014) Jimmy Maher, The Digital Antiquarian (2011-present) Jon Peterson, Playing at the World (2012)

The Electronic Computers, Part 4: The Electronic Revolution

We have now recounted, in succession, each of the first three attempts to build a digital, electronic computer: The Atanasoff-Berry Computer (ABC) conceived by John Atanasoff, the British Colossus project headed by Tommy Flowers, and the ENIAC built at the University of Pennsylvania’s Moore School. All three projects were effectively independent creations. Though John Mauchly, the motive force behind ENIAC, knew of Atanasoff’s work, the design of the ENIAC owed nothing to the ABC. If there was any single seminal electronic computing device, it was the humble Wynn-Williams counter, the first device to use vacuum tubes for digital storage, which helped set Atanasoff, Flowers, and Mauchly alike onto the path to electronic computing. Only one of these three machines, however, played a role in what was to come next. The ABC never did useful work, and was largely forgotten by the few who ever knew of it. The two war machines both proved themselves able to outperform any other computer in raw speed, but the Colossus remained a secret even after the defeat of Germany and Japan. Only ENIAC became public knowledge, and so became the standard bearer for electronic computing as a whole. Now anyone who wished to build a computing engine from vacuum tubes could point to the Moore School’s triumph to justify themselves. The ingrained skepticism from the engineering establishment that greeted all such projects prior to 1945 had now vanished; the skeptics either changed their tune or held their tongue. The EDVAC Report A document issued in 1945, based on lessons learned from the ENIAC project, set the tone for the direction of computing in the post-war world. Called “First Draft of a Report on the EDVAC,”1 it provided the template for the architecture of the first computers that were programmable in the modern sense – that is to say, they executed a list of commands drawn from a high-speed memory. Although the exact provenance of its ideas was, and shall remain, disputed, it appeared under the name of the mathematician John (János) von Neumann. As befit the mind of a mathematician, it also presented the first attempt to abstract the design of a computer from the specifications for a particular machine; it attempted to distill an essential structure from its various possible accidental forms. Von Neumann, born in Hungary, came to ENIAC by way of Princeton, New Jersey, and Los Alamos, New Mexico. In 1929, as an accomplished young mathematician with notable contributions to set theory, quantum mechanics, and the theory of games, he left Europe to take a position at Princeton University. Four years later, the nearby Institute for Advanced Study (IAS) offered him a lifetime faculty post. With Nazism on the rise, von Neumann happily accepted the chance to remain indefinitely on the far side of the Atlantic – becoming, ex post facto, among the first Jewish intellectual refugees from Hitler’s Europe. After the war, he lamented that “I feel the opposite of a nostalgia for Europe, because every corner I knew reminds me of… a world which is gone, and the ruins of which are no solace,” remembering his “total disillusionment in human decency between 1933 and September 1938.”2 Alienated from the lost cosmopolitan Europe of his youth, von Neumann threw his intellect behind the military might of his adoptive home. For the next five years he criss-crossed the country incessantly to provide advice and consultation on a wide variety of weapons projects, while somehow also managing to co-author a seminal book on game theory. 
The most secret and momentous of his consulting positions was for the Manhattan Project – the effort to build an atomic weapon – whose research team resided at Los Alamos, New Mexico. Robert Oppenheimer recruited him in the summer of 1943 to help the project with mathematical modeling, and his calculations convinced the rest of the group to push forward with an implosion bomb, which would achieve a sustained chain reaction by using explosives to drive the fissile material inward, increasing its density. This, in turn, implied massive amounts of calculation to work out how to achieve a perfectly spherical implosion with the correct amount of pressure – any error would cause the chain reaction to falter and the bomb to fizzle. Von Neumann during his time at Los Alamos Los Alamos had a group of twenty human computers with desk calculators, but they could not keep up with the computational load. The scientists provided them with IBM punched-card equipment, but still they could not keep up. They demanded still better equipment from IBM, and got it in 1944, yet still they could not keep up. By this time, von Neumann had added yet another set of stops to his constant circuit of the country: scouring every possible site for computing equipment that might be of use to Los Alamos. He wrote to Warren Weaver, head of Applied Mathematics for the National Defense Research Committee (NDRC), and received several good leads. He went to Harvard to see the Mark I, but found it already fully booked with Navy work. He spoke to George Stibitz and looked into ordering a Bell relay computer for Los Alamos, but gave up after learning how long it would take to deliver it. He visited a group at Columbia University that had linked multiple IBM machines into a larger automated system, under the direction of Wallace Eckert (no relation to Presper), yet this seemed to offer no major improvement on the IBM setup that Los Alamos already had available. Weaver had, however, omitted one project from the list he gave to von Neumann: ENIAC. He certainly knew of it: in his capacity as director of the Applied Mathematics Panel, it was his business to monitor the progress of all computing projects in the country. Weaver and the NDRC certainly had doubts about the feasibility and timeline for ENIAC, yet it is rather shocking that he did not even mention its existence. Whatever the reason for the omission, because of it von Neumann only learned about ENIAC due to a chance encounter on a train platform. The story comes from Herman Goldstine, the liaison from the Aberdeen Proving Ground to the Moore School, where ENIAC was under construction. Goldstine bumped into von Neumann at the Aberdeen railway station in June 1944 – von Neumann was leaving another of his consulting gigs, as a member of the Scientific Advisory Committee to Aberdeen’s Ballistic Research Laboratory (BRL). Goldstine knew the great man by reputation, and struck up a conversation. Eager to impress, he couldn’t help mentioning the exciting new project he had underway up in Philadelphia. Von Neumann’s attitude transformed instantly from congenial colleague to steely-eyed examiner, as he grilled Goldstine on the details of his computer. He had found an intriguing new source of potential computer power for Los Alamos. Von Neumann first visited Presper Eckert, John Mauchly and the rest of the ENIAC team in September 1944. He immediately became enamored of the project, and added yet another consulting gig to his very full plate. Both parties had much to gain. 
It is easy to see how the promise of electronic computing speeds would have captivated von Neumann. ENIAC, or a machine like it, might burst all the computational limits that fettered the progress of the Manhattan Project, and so many other projects or potential projects.3 For the Moore School team, the blessing of the renowned von Neumann meant an end to all their credibility problems. Moreover, given his keen mind and extensive cross-country research, he could match anyone in the breadth and depth of his insight into automatic computing. It was thus that von Neumann became involved in Eckert and Mauchly’s plan to build a successor to ENIAC. Along with Herman Goldstine and another ENIAC mathematician, Arthur Burks, they began to sketch the parameters for a second-generation electronic computer, and it was the ideas of this group that von Neumann summarized in the “First Draft” report. The new machine would be more powerful, more streamlined in design, and above all would solve the biggest hindrance to the use of ENIAC – the many hours required to configure it for a new problem, during which that supremely powerful, extraordinarily expensive computing machine sat idle and impotent. The designers of recent electro-mechanical machines such as the Harvard Mark I and Bell relay computers had avoided this fate for their machines by providing the computer with instructions via punched holes in a loop of paper tape, which an operator could prepare while the computer solved some other problem. But taking input in this way would waste the speed advantage of electronics: no paper tape feed could provide instructions as fast as ENIAC’s tubes could consume them.4 The solution outlined in the “First Draft” was to move the storage of instructions from the “outside recording medium of the device” into its “memory” – the first time this word had appeared in relation to computer storage5. This idea was later dubbed the “stored-program” concept. But it immediately led to another difficulty, the same one that stymied Atanasoff in designing the ABC – vacuum tubes are expensive. The “First Draft” estimated that a computer capable of supporting a wide variety of computational tasks would need roughly 250,000 binary digits of memory for instructions and short-term data storage. A vacuum-tube memory of that size would cost millions of dollars, and would be terribly unreliable to boot. The resolution to the dilemma came from Eckert, who had worked on radar research in the early 1940s, as part of a contract between the Moore School and the “Rad Lab” at MIT, the primary center of radar research in the U.S. Specifically, Eckert worked on a radar system known as the Moving Target Indicator (MTI), which addressed the problem of “ground clutter”: all the noise on the radar display from buildings, hills, and other stationary objects that made it hard for the operator to discern the important information – the size, location, and velocity of moving formations of aircraft. The MTI solved the clutter problem using an instrument called an acoustic delay line. It transformed the electrical radar pulse into a sound wave, and then sent that wave through a tube of mercury6, such that the sound arrived at the other end and was transformed back into an electrical pulse just as the radar was sweeping the same point in the sky. Any signal arriving from the radar at the same time as from the mercury line was presumed to be a stationary object, and cancelled.
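The cancellation idea is perhaps easier to see restated in modern terms. What follows is a toy sketch only (the range bins and echo amplitudes are invented for illustration, and the real MTI did all of this with continuous analog signals, not digital samples): the delay line holds the previous sweep so that it can be subtracted from the current one; echoes that have not moved between pulses cancel, while anything that has shifted leaves a residue.

```python
# Toy illustration of two-pulse clutter cancellation: the "delay line" supplies
# the previous sweep, which is subtracted from the current one. Stationary
# objects return identical echoes on both sweeps and cancel out; a target that
# has moved between pulses does not. (Values are hypothetical.)

def cancel_clutter(previous_sweep, current_sweep):
    """Return the per-range-bin difference between two successive sweeps."""
    return [curr - prev for prev, curr in zip(previous_sweep, current_sweep)]

# Hypothetical echo amplitudes by range bin: a hill at bin 2, a building at
# bin 5, and an aircraft that moves from bin 7 to bin 8 between pulses.
sweep_1 = [0, 0, 9, 0, 0, 7, 0, 4, 0, 0]
sweep_2 = [0, 0, 9, 0, 0, 7, 0, 0, 4, 0]

print(cancel_clutter(sweep_1, sweep_2))
# [0, 0, 0, 0, 0, 0, 0, -4, 4, 0]  -- only the moving aircraft survives
```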
Eckert realized that the pulses of sound in the delay line could be treated as binary digits – with a sound representing 1, and its absence a 0. A single tube of mercury could hold hundreds of such digits, each passing through the line several times per millisecond, meaning that the computer need only wait a couple hundred microseconds to access a particular digit. It could access a sequential series of digits in the same tube much faster still, with each digit spaced out only a handful of microseconds apart.

Mercury delay lines for the British EDSAC computer

With the basic problems of how the machine would be structured resolved, von Neumann collected the group’s ideas in the 101-page “First Draft” report in the spring of 1945, and circulated it among the key stakeholders in the second-generation EDVAC project. Before long, though, it found its way into other hands. The mathematician Leslie Comrie, for instance, took a copy back to Britain after his visit to the Moore School in 1946, and shared it with colleagues. The spread of the report fostered resentment on the part of Eckert and Mauchly for two reasons: first, the bulk of the credit for the design flowed to the sole author on the draft: von Neumann7. Second, all the core ideas contained in the design were now effectively published, from the point of view of the patent office, undermining their plans to commercialize the electronic computer. The very grounds for Eckert and Mauchly’s umbrage, in turn, raised the hackles of the mathematicians: von Neumann, Goldstine, and Burks. To them, the report was important new knowledge that ought to have been disseminated as widely as possible in the spirit of academic discourse. Moreover, the government, and thus the American taxpayer, had funded the whole endeavor in the first place. The sheer venality of Eckert and Mauchly’s schemes to profit from the war effort irked them. Von Neumann wrote, “I would never have undertaken my consulting work at the University had I realized that I was essentially giving consulting services to a commercial group.”8 Each faction went its separate ways in 1946: Eckert and Mauchly set up their own computer company, on the basis of a seemingly more secure patent on the ENIAC technology. They at first called their enterprise the Electronic Control Company, but renamed it the Eckert-Mauchly Computer Corporation the following year. Von Neumann returned to the Institute for Advanced Study (IAS) to build an EDVAC-style computer there, and Goldstine and Burks joined him. To prevent a recurrence of the debacle with Eckert and Mauchly, they ensured that all the intellectual products of this new project would become public property.

Von Neumann in front of the IAS computer, completed in 1951.

An Aside on Alan Turing

Among those who got their hands on the EDVAC report through side channels was the British mathematician Alan Turing. Turing does not figure among the first to build or design an automatic computer, electronic or otherwise, and some authors have rather exaggerated his place in the history of computing machines.9 But we must credit him as among the first to imagine that a computer could do more than merely “compute” in the sense of processing large batches of numbers. His key insight was that all the kinds of information manipulated by human minds could be rendered as numbers, and so any intellectual process could be transformed into a computation.
Alan Turing in 1951

In late 1945, Turing published his own report, citing von Neumann’s, on a “Proposed Electronic Calculator” for Britain’s National Physical Laboratory (NPL). It delved much deeper than the “First Draft” into the details of how his proposed electronic computer would actually be built. The design reflected the mind of a logician. It would have no special hardware for higher-level functions which could be composed from lower-level primitives; that would be an ugly wart on the machine’s symmetry. Likewise Turing did not set aside any linear area of memory for the computer’s program: data and instructions could live intermingled in memory, for they were all simply numbers. An instruction only became an instruction when interpreted as such.10 Because Turing knew that numbers could represent any form of well-specified information, the list of problems he proposed for his calculator included not just the construction of artillery tables and the solution of simultaneous linear equations, but also the solving of a jig-saw puzzle or a chess endgame. Turing’s Automatic Computing Engine (ACE) was never built as originally proposed. Slow to get moving, it had to compete with other, more vigorous, British computing projects for the best talent. The project struggled on for several years before Turing lost interest. NPL completed a smaller machine with a somewhat different design, known as the Pilot ACE, in 1950, and several other early-1950s computers drew inspiration from the ACE architecture. But it had no wider influence and faded quickly into obscurity. None of this is to belittle Turing or his accomplishments, only to place them in the proper context. His importance to the history of computing derives not from his influence on the design of 1950s computers, but rather from the theoretical ground he prepared for the field of academic computer science, which emerged in the 1960s. His early papers in mathematical logic, which surveyed the boundaries between that which is computable and that which is not, became the fundamental texts of this new discipline.

The Slow Revolution

As news about ENIAC and the EDVAC report spread, the Moore School became a site of pilgrimage. Numerous visitors came to learn at the feet of the evident masters, especially from within the U.S. and Britain. In order to bring order to this stream of petitioners, the dean of the school organized an invitation-only summer school on automatic computing in 1946. The lecturers included such luminaries as Eckert, Mauchly, von Neumann, Burks, Goldstine, and Howard Aiken (designer of the Harvard Mark I electromechanical computer). Nearly everyone now wanted to build a machine on the template of the EDVAC report.11 The wide influence of ENIAC and EDVAC in the 1940s and 50s evinced itself in the very names that teams from around the world bestowed on their new computers. Even if we set aside UNIVAC and BINAC (built by Eckert and Mauchly’s new company) and EDVAC itself (finished by the Moore School after being orphaned by its parents), we still find AVIDAC, CSIRAC, EDSAC, FLAC, ILLIAC, JOHNNIAC, ORDVAC, SEAC, SILLIAC, SWAC, and WEIZAC. Many of these machines directly copied the freely published IAS design (with minor modifications), benefiting from von Neumann’s open policy on intellectual property. Yet the electronic revolution unfolded gradually, overturning the existing order piece by piece.
Not until 1948 did a single EDVAC-style machine come to life, and that only a tiny proof-of-concept, the Manchester “baby,” designed to prove out its new Williams tube memory system.12 In 1949, four more substantial machines followed: the full-scale Manchester Mark I, the EDSAC at Cambridge University, the CSIRAC in Sydney, Australia, and the American BINAC – though the last evidently never worked properly. A steady trickle of computers continued to appear over the next five years.13 Some writers have portrayed the ENIAC as drawing a curtain over the past and instantly ushering in an era of electronic computing. This has required painful-looking contortions in the face of the evidence. “The appearance of the all-electronic ENIAC made the Mark I obsolete almost immediately (although capable of performing successfully for fifteen years afterward),” wrote Katherine Fishman.14 Such a statement is so obviously self-contradictory one must imagine that Ms. Fishman’s left hand did not know what her right was doing. One might excuse this as the jottings of a mere journalist. Yet we can also find a pair of proper historians, again choosing the Mark I as their whipping boy, writing that “[n]ot only was the Harvard Mark I a technological dead end, it did not even do anything very useful in the fifteen years that it ran. It was used in a number of applications for the navy, and here the machine was sufficiently useful that the navy commissioned additional computing machines from Aiken’s laboratory.”15 Again the contradiction stares, nearly slaps, one in the face. In truth, relay computers had their merits, and continued to operate alongside their electronic cousins. Indeed, several new electro-mechanical computers were built after World War II, even into the early 1950s in the case of Japan. Relay machines were easier to design, build, and maintain, and did not require huge amounts of electricity and climate control (to dissipate the vast amount of heat put out by thousands of vacuum tubes). ENIAC used 150 kilowatts of electricity, 20 for its cooling system alone.16 The American military continued to be a major customer for computing power, and did not disdain “obsolete” electromechanical models. In the late 1940s, the Army had four relay computers and the Navy five. Aberdeen’s Ballistics Research Laboratory held the largest concentration of computing power in the world, operating ENIAC alongside Bell and IBM relay calculators and the old differential analyzer. A September 1949 report found that each had its place: ENIAC worked best on long but simple calculations; the Bell Model V calculators served best for complex calculations due to their effectively unlimited tape of instructions and their ability to handle floating point, while the IBM machines could process very large amounts of data stored in punched cards. Meanwhile certain operations such as cube roots were still easiest to solve by hand (with a combination of table look-ups and desk calculators), saving machine time.17 Rather than 1945, the year of ENIAC’s birth, 1954 makes a better year to mark the completion of the electronic revolution in computing – the year that the IBM 650 and 704 computers appeared. Though not the first commercial electronic computers, they were the first to be produced in the hundreds,18 and they established IBM’s dominance over the computer industry, a dominance that lasted for thirty years.
In Kuhnian19 terms, electronic computing was no longer the strange anomaly of 1940, existing only in the dreams of outsiders like Atanasoff and Mauchly; it had become normal science.

One of many IBM 650 machines, this one at Texas A&M University. Its magnetic drum memory (visible at bottom) made it relatively slow but also relatively inexpensive.

Leaving the Nest

By the mid-1950s, the design and construction of digital computing equipment had come unmoored from its origins in switches or amplifiers for analog systems. The computer designs of the 1930s and early 1940s drew heavily on ideas borrowed from physics and radar labs, and especially from telecommunications engineers and research departments. Now computing was becoming its own domain, and specialists in that domain developed their own ideas, vocabulary, and tools to solve their own problems. The computer in the modern sense had emerged, and our story of the switch thus draws near its close. But the world of telecommunications had one last, supreme surprise up its sleeve. The tube had bested the relay in speed by having no moving parts. The final switch of our story did one better by having no internal parts at all. An innocuous-looking lump of matter sprouting a few wires, it came from a new branch of electronics known as “solid-state.” For all their speed, vacuum tubes remained expensive, bulky, hot, and not terribly reliable. They could never have powered, say, a laptop. Von Neumann wrote in 1948 that “it is not likely that 10,000 (or perhaps a few times 10,000) switching organs will be exceeded as long as the present techniques and philosophy are employed.”20 The solid-state switch made it possible for computers to surpass this limit again and again, many times over; made it possible for computers to reach small businesses, schools, homes, appliances, and pockets; made possible the creation of the digital land of Faerie that now permeates our existence. To find its origins we must rewind the clock some fifty years, and go back to the exciting early days of the wireless.

Further Reading

David Anderson, “Was the Manchester Baby conceived at Bletchley Park?”, British Computer Society (June 4th, 2004)

William Aspray, John von Neumann and the Origins of Modern Computing (1990)

Martin Campbell-Kelly and William Aspray, Computer: A History of the Information Machine (1996)

Thomas Haigh et al., ENIAC in Action (2016)

John von Neumann, “First Draft of a Report on the EDVAC” (1945)

Alan Turing, “Proposed Electronic Calculator” (1945)

High-Pressure, Part I: The Western Steamboat

The next act of the steamboat lay in the west, on the waters of the Mississippi basin. The settler population of this vast region—Mark Twain wrote that “the area of its drainage-basin is as great as the combined areas of England, Wales, Scotland, Ireland, France, Spain, Portugal, Germany, Austria, Italy, and Turkey”—was already growing rapidly in the early 1800s, and inexpensive transport to and from its interior represented a tremendous economic opportunity.[1] Robert Livingston scored another of his political coups in 1811, when he secured monopoly rights for operating steamboats in the New Orleans Territory. (It did not hurt his cause that he himself had negotiated the Louisiana Purchase, nor that his brother Edward was New Orleans’ most prominent lawyer.) The Fulton-Livingston partnership built a workshop in Pittsburgh to build steamboats for the Mississippi trade. Pittsburgh’s central position at the confluence of the Monongahela and Allegheny made it a key commercial hub in the trans-Appalachian interior and a major boat-building center. Manufactures made there could be distributed up and down the rivers far more easily than those coming over the mountains from the coast, and so factories for making cloth, hats, nails, and other goods began to sprout up there as well.[2] The confluence of river-based commerce, boat-building, and workshop know-how made Pittsburgh the natural wellspring for western steamboating.

Figure 1: The Fulton-Livingston New Orleans. Note the shape of the hull, which resembles that of a typical ocean-going boat.

From Pittsburgh, the Fulton-Livingston boats could ride downstream to New Orleans without touching the ocean. The New Orleans, the first boat launched by the partners, went into regular service from New Orleans to Natchez (about 175 miles to the north) in 1812, but their designs—upscaled versions of their Hudson River boats—fared poorly in the shallow, turbulent waters of the Mississippi. They also suffered sheer bad luck: the New Orleans grounded fatally in 1814, and the aptly-named Vesuvius burnt to the waterline in 1816 and had to be rebuilt. The conquest of the Mississippi by steam power would fall to other men, and to a new technology: high-pressure steam.

Strong Steam

A typical Boulton & Watt condensing engine was designed to operate with steam below the pressure of the atmosphere (about fifteen pounds per square inch (psi)). But the possibility of creating much higher pressures by heating steam well above the boiling point had been known for well over a century. The use of so-called “strong steam” dated back at least to Denis Papin’s steam digester from the 1670s. It had even been used to do work, in pumping engines based on Thomas Savery’s design from the early 1700s, which used steam pressure to push water up a pipe. But engine-builders did not use it widely in piston engines until well into the nineteenth century. Part of the reason was the suppressive influence of the great James Watt. Watt knew that expanding high-pressure steam could drive a piston, and laid out plans for high-pressure engines as early as 1769, in a letter to a friend:

I intend in many cases to employ the expansive force of steam to press on the piston, or whatever is used instead of one, in the same manner as the weight of the atmosphere is now employed in common fire-engines.
In some cases I intend to use both the condenser and this force of steam, so that the powers of these engines will as much exceed those pressed only by the air, as the expansive power of the steam is greater than the weight of the atmosphere. In other cases, when plenty of cold water cannot be had, I intend to work the engines by the force of steam only, and to discharge it into the air by proper outlets after it has done its office.[3]

But he continued to rely on the vacuum created by his condenser, and never built an engine worked “by the force of steam only.” He went out of his way to ensure that no one else did either, deprecating the use of strong steam at every opportunity. There was one obvious reason why: high-pressure steam was dangerous. The problem was not the working machinery of the engine but the boiler, which was apt to explode, spewing shrapnel and superheated steam that could kill anyone nearby. Papin had added a safety valve to his digester for exactly this reason. Savery steam pumps were also notorious for their explosive tendencies. Some have imputed a baser motive for Watt’s intransigence: a desire to protect his own business from high-pressure competition. In truth, though, high-pressure boilers did remain dangerous, and would kill many people throughout the nineteenth century. Unfortunately, the best material for building a strong boiler was the most difficult from which to actually construct one. By the beginning of the nineteenth century copper, lead, wrought iron, and cast iron had all been tried as boiler materials, in various shapes and combinations. Copper and lead were soft; cast iron was hard, but brittle. Wrought iron clearly stood out as the toughest and most resilient option, but it could only be made in ingots or bars, which the prospective boilermaker would then have to flatten and form into small plates, many of which would have to be joined to make a complete boiler. Advances in two fields in the decades around 1800 resolved the difficulties of wrought iron. The first was metallurgical. In the late eighteenth century, Henry Cort invented the “puddling” process of melting and stirring iron to oxidize out the carbon, producing larger quantities of wrought iron that could be rolled out into plates of up to about five feet long and a foot wide.[4] These larger plates still had to be riveted together, a tedious and error-prone process that produced leaky joints. Everything from rope fibers to oatmeal was tried as a caulking material. To make reliable, steam-tight joints required advances in machine tooling. This was a cutting-edge field at the time (pun intended). For example, for most of history craftsmen cut or filed screws by hand. The resulting lack of consistency meant that many of the uses of screws that we take for granted were unknown: one could not cut 100 nuts and 100 bolts, for example, and then expect to thread any pair of them together. Only in the last quarter of the eighteenth century did inventors craft sufficiently precise screw-cutting lathes to make it possible to repeatedly produce screws with the same length and pitch. Careful use of tooling similarly made it possible to bore holes of consistent sizes in wrought iron plates, and then manufacture consistently-sized rivets to fit into them, without the need to hand-fit rivets to holes.[5] One could name a few outstanding early contributors to the improvement of machine tooling in the first decades of the nineteenth century: Arthur Woolf in Cornwall, or John Hall at the U.S.
Harper’s Ferry Armory. But the steady development of improvements in boilers and other steam engine parts also involved the collective action of thousands of handcraft workers. Accustomed to building liquor stills, clocks, or scientific instruments, they gradually developed the techniques and rules of thumb needed for precision metalworking for large machines.[6] These changes did not impress Watt, and he stood by his anti-high-pressure position until his death in 1819. Two men would lead the way in rebelling against his strictures. The first appeared in the United States, far from Watt’s zone of influence, and paved the way for the conquest of the Western waters.

Oliver Evans

Oliver Evans was born in Delaware in 1755. He first honed his mechanical skills as an apprentice wheelwright. Around 1783, he began constructing a flour mill with his brothers on Red Clay Creek in northern Delaware. Hezekiah Niles, a boy of six, lived nearby. Niles would become the editor of the most famous magazine in America, from which post he later had occasion to recount that “[m]y earliest recollections pointed him out to me as a person, in the language of the day, that ‘would never be worth any thing, because he was always spending his time on some contrivance or another…’”[7] Two great “contrivances” dominated Evans’ adult life. The challenges of the mill work at Red Clay Creek led to his first great idea: an automated flour mill. He eliminated most of the human labor from the mill by linking together the grain-processing steps with a series of water-powered machines (the most famous and delightfully named being the “hopper boy”). Though fascinating in its own right, for the purposes of our story the automated mill only matters in so far as it generated the wealth which allowed him to invest in his second great idea: an engine driven by high-pressure steam.

Figure 2: Evans’ automated flour mill.

In 1795, Evans published an account of his automatic mill entitled The Young Mill-Wright and Miller’s Guide. Something of his personality can be gleaned from the title of his 1805 sequel on the steam engine: The Abortion of the Young Steam Engineer’s Guide. A bill to extend the patent on his automatic flour mill failed to pass Congress in 1805, and so he published his Abortion as a dramatic swoon, a loud declaration that, in response to this rebuff, he would be taking his ball and going home:

His [i.e., Evans’] plans have thus proved abortive, all his fair prospects are blasted, and he must suppress a strong propensity for making new and useful inventions and improvements; although, as he believes, they might soon have been worth the labour of one hundred thousand men.[8]

Of course, despite these dour mutterings, he failed entirely to suppress his “strong propensity”; in fact, he was in the very midst of launching new steam engine ventures at this time. Like so many other early steam inventors, Evans’ interest in steam began with a dream of a self-propelled carriage.
The first tangible evidence that we have of his interest in steam power comes from patents he filed in 1787, which included mention of a “steam-carriage, so constructed to move by the power of steam and the pressure of the atmosphere, for the purpose of conveying burdens without the aid of animal force.” The mention of “the pressure of the atmosphere” is interesting—he may have still been thinking of a low-pressure Watt-style engine at this point.[9] By 1802, however, Evans had a true high-pressure engine of about five horsepower operating at his workshop at Ninth and Market in Philadelphia. He had established himself in that city in 1792, the better to promote his milling inventions and millwright services. He attracted crowds to his shop with his demonstration of the engine at work: driving a screw mill to pulverize plaster, or cutting slabs of marble with a saw. Bands of iron held reinforcing wooden slats against the outside of the boiler, like the rim of a cartwheel or the hoops of a barrel. This curious hallmark testified to Evans’ background as a millwright and wheelwright.[10] The boiler, of course, had to be as strong as possible to contain the superheated steam, and Evans’ later designs made improvements in this area. Rather than the “wagon” boiler favored by Watt (shaped like a Conestoga wagon or a stereotypical construction worker’s lunchbox), he used a cylinder. A spherical boiler being infeasible to make or use, this shape distributed the force of the steam pressure as evenly as practicable over the surface. In fact, Evans’ boiler consisted of two cylinders in an elongated donut shape, because rather than placing the furnace below the boiler, he placed it inside, to maximize the surface area of water exposed to the hot air. By the time of the Steam Engineer’s Guide, he no longer used copper braced with wood; he now recommended the “best” (i.e. wrought) iron “rolled in large sheets and strongly riveted together. …As cast iron is liable to crack with the heat, it is not to be trusted immediately in contact with the fire.”[11]

Figure 3: Evans’ 1812 design, which he called the Columbian Engine to honor the young United States on the outbreak of the War of 1812. Note the flue carrying heat through the center of the boiler, the riveted wrought iron plates of the boiler, and the dainty proportions of the cylinder, in comparison to that of a Newcomen or Watt engine. Pictured in the corner is the Orukter Amphibolos.

Evans was convinced of the superiority of his high-pressure design because of a rule of thumb that he had gleaned from the article “Steam” in the American edition of the Encyclopedia Britannica: “…whatever the present temperature, an increase of 30 degrees doubles the elasticity and the bulk of water vapor.”[12] From this Evans concluded that heating steam to twice the boiling point (from 210 degrees to 420) would increase its elastic force by 128 times (since a 210 degree increase in temperature would make seven doublings). This massive increase in power would require only twice the fuel (to double the heat of the steam). None of this was correct, but it would not be the first or last time that faulty science would produce useful technology.[13] Nonetheless, the high-pressure engine did have very real advantages. Because the power generated by an engine was proportional to the area of the piston times the pressure exerted on that piston, for any given horsepower a high-pressure engine could be made much smaller than its low-pressure equivalent.
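To set the arithmetic of this passage out explicitly (a back-of-envelope restatement, not anything Evans himself wrote in this form):

```latex
% Evans' faulty rule of thumb: every 30-degree rise supposedly doubles the
% steam's "elasticity," so heating from 210 F to 420 F would give
2^{(420-210)/30} = 2^{7} = 128 \ \text{times the pressure.}

% The genuine size advantage is simpler. Treating power as roughly
% proportional to mean piston pressure times piston area (piston speed and
% losses ignored), with area growing as the square of the bore diameter:
P \;\propto\; p \cdot A, \qquad A = \tfrac{\pi}{4}\, d^{2},
% a low-pressure cylinder three times the diameter has 3^2 = 9 times the
% area, so a high-pressure engine can match its power from a far smaller
% bore by running at several times the pressure.
```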
A high-pressure engine also did not require a condenser: it could vent the spent steam directly into the atmosphere. These factors made Evans’ engines smaller, lighter, simpler, and less expensive to build. A non-condensing high-pressure engine of twenty-four horsepower weighed half a ton and had a cylinder nine inches across. A traditional Boulton & Watt style engine of the same power had a cylinder three times as wide and weighed four times as much overall.[14] Such advantages in size and weight would count doubly for an engine used in a vehicle, i.e. an engine that had to haul itself around. In 1804 Evans sold an engine that was intended to drive a New Orleans steamboat, but it ended up in a sawmill instead. This event could serve as a metaphor for his relationship to steam transportation. He declared in his Steam Engineer’s Guide that:

The navigation of the river Mississippi, by steam engines, on the principles here laid down, has for many years been a favourite object with the author and among the fondest wishes of his heart. He has used many endeavours to produce a conviction of its practicability, and never had a doubt of the sufficiency of the power.[15]

But steam navigation never got much more than his fondest wishes. Unlike a Fitch or a Rumsey, Evans did not let the desire to make a steamboat dominate his dreams and waking hours alike. By 1805, he was a well-established man of middle years. If he had ever possessed the Tookish spirit required for riverboat adventures, he had since lost it. He had already given up on the idea of a steam carriage, after failing to sell the Lancaster Turnpike Company on the idea in 1801. His most grandiosely named project, the Orukter Amphibolos, may briefly have run on wheels en route to serving as a steam dredge in the Philadelphia harbor. If it functioned at all, though, it was by no means a practical vehicle, and it had no sequel. Evans’ attention had shifted to industrial power, where the clearest financial opportunity lay—an opportunity that could be seized without leaving Philadelphia. Despite Evans’ calculations (erroneous, as we have said), a non-condensing high-pressure engine was somewhat less fuel-efficient than an equivalent Watt engine, not more. But because of its size and simplicity, it could be built at half the cost, and transported more cheaply, too. In time, therefore, the Evans-style engine became very popular as a mill or factory engine in the capital- and transportation-poor (but fuel-rich) trans-Appalachian United States.[16] In 1806, Evans began construction on his “Mars Works” in Philadelphia, to serve the market for engines and other equipment. Evans engines sprouted up at sawmills, flour mills, paper factories, and other industrial enterprises across the West. Then, in 1811, he organized the Pittsburgh Steam Engine Company, operated by his twenty-three-year-old son George, to reduce transportation costs for engines to be erected west of the Alleghenies.[17] It was around that nexus of Pittsburgh that Evans’ inventions would find the people with the passion to put them to work, at last, on the rivers.

The Rise of the Western Steamboat

The mature Mississippi paddle steamer differed from its Eastern antecedents in two main respects. First, in its overall shape and layout: a roughly rectangular hull with a shallow draft, layer-cake decks, and machinery above the water, not under it. This design was better adapted to an environment where snags and shallows presented a much greater hazard than waves and high winds.
Second, in the use of a high-pressure engine, or engines, with a cylinder mounted horizontally along the deck. Many historical accounts attribute both of these essential developments to a keelboatman named Henry Miller Shreve. Economic historian Louis Hunter effectively demolished this legend in the 1940s, but more recent writers (for example Shreve’s 1984 biographer, Edith McCall) have continued to perpetuate it. In fact, no one can say with certainty where most of these features came from, because no one bothered to document their introduction. As Hunter wrote:

From the appearance of the first crude steam vessels on the western waters to the emergence of the fully evolved river steamboat a generation later, we know astonishingly little of the actual course of technological events and we can follow what took place only in its broad outlines. The development of the western steamboat proceeded largely outside the framework of the patent system and in a haze of anonymity.[18]

Some documents came to light in the 1990s, however, that have burned away some of the “haze” with respect to the introduction of high-pressure engines.[19] The papers of Daniel French reveal that the key events happened in a now-obscure place called Brownsville (originally known as Redstone), about forty miles up the Monongahela from that vital center of western commerce, Pittsburgh. Brownsville was the point where anyone heading west on the main trail over the Alleghenies—which later became part of the National Road—would first reach navigable waters in the Mississippi basin. Henry Shreve grew up not far from this spot. Born in 1785 to a father who had served as a Colonel in the Revolutionary War, he grew up on a farm near Brownsville on land leased from Washington: one of the general’s many western land-development schemes.[20] Henry fell in love with the river life, and by his early twenties had established himself with his own keelboat operating out of Pittsburgh. He made his early fortune off the fur trade boom in St. Louis, which took off after Lewis and Clark returned with reports of widespread beaver activity on the Missouri River.[21] In the fall of 1812, a newcomer named Daniel French arrived in Shreve’s neighborhood—a newcomer who already had experience building steam watercraft, powered by engines based on the designs of Oliver Evans. French was born in Connecticut in 1770, and started planning to build steamboats in his early 20s, perhaps inspired by the work of Samuel Morey, who operated upstream of him on the Connecticut River. But, discouraged from his plans by the local authorities, French turned his inventive energies elsewhere for a time. He met and worked with Evans in Washington, D.C., to lobby Congress to extend the length of patent grants, but did not return to steamboats until Fulton’s 1807 triumph re-energized him. At this point he adopted Evans’ high-pressure engine idea, but added his own innovation: an oscillating cylinder that pivoted on trunnions as the engine worked. This allowed the piston shaft to be attached to the stern wheel with a simple (and light) crank, without any flywheel or gearing. The small size of the high-pressure cylinder made it feasible to put the cylinder in motion. In 1810, a steam ferry he designed for a route from Jersey City to Manhattan successfully crossed and recrossed the North (Hudson) River at about six miles per hour.
Nonetheless, Fulton, who still held a New York state monopoly, got the contract from the ferry operators.[22] French moved to Philadelphia and tried again, constructing the steam ferry Rebecca to carry passengers across the Delaware. She evidently did not produce great profits, because a frustrated French moved west again in the fall of 1812, to establish a steam-engine-building business at Brownsville.[23] His experience with building high-pressure steamboats—simple, relatively low-cost, and powerful—had arrived at the place that would benefit most from those advantages, a place, moreover, where the Fulton-Livingston interests held no legal monopoly. News about the lucrative profits of the New Orleans on the Natchez run had begun to trickle back up the rivers. This was sufficient to convince the Brownsville notables—Shreve among them—to put up $11,000 to form the Monongahela and Ohio Steam Boat Company in 1813, with French as their engineer. French had their first boat, Enterprise, ready by the spring of 1814. Her exact characteristics are not documented, but based on the fragmentary evidence, she seems in effect to have been a motorized keelboat: 60-80’ long, about 30 tons, and equipped with a twenty-horsepower engine. The power train matched that of French’s 1810 steam ferry, trunnions and all.[24] The Enterprise spent the summer trading along the Ohio between Pittsburgh and Louisville. Then, in December, she headed south with a load of supplies to aid in the defense of New Orleans. For this important voyage into waters mostly unknown to the Brownsville circle, they called on the experienced keelboatman, Henry Shreve. Andrew Jackson had declared martial law, and kept Shreve and the Enterprise on military duty in New Orleans. With Jackson’s aid, Shreve dodged the legal snares laid for him by the Fulton-Livingston group to protect their New Orleans monopoly. Then in May, after the armistice, he brought the Enterprise on a 2,000-mile ascent back to Brownsville, the first steamboat ever to make such a journey. Shreve became an instant celebrity. He had contributed to a stunning defeat for the British at New Orleans and carried out an unprecedented voyage. Moreover, he had confounded the monopolists: their attempt to assert exclusive rights over the commons of the river was deeply unpopular west of the Appalachians. Shreve capitalized on his new-found fame to raise money for his own steamboat company in Wheeling, Virginia. The Ohio at Wheeling ran much deeper than the Monongahela at Brownsville, and Shreve would put this depth to use: he had ambitions to put a French engine into a far larger boat than the Enterprise. Spurring French to scale up his design was probably Shreve’s largest contribution to the evolution of the western steamboat. French dared not try to repeat his oscillating cylinder trick on the larger cylinder that would drive Shreve’s 100-horsepower, 400-ton two-decker. Instead, he fixed the cylinder horizontally to the hull, and then attached the piston rod to a connecting rod, or “pitman,” that drove the crankshaft of the stern paddle wheel. He thus transferred the oscillating motion from the piston to the pitman, while keeping the overall design simple and relatively low cost.[25] Shreve called his steamer Washington, after his father’s (and his own) hero. Her maiden voyage in 1817, however, was far from heroic.
Evans would have assured French that the high-pressure engine carried little risk: as he wrote in the Steam Engineer’s Guide, “we know how to construct [boilers] with a proportionate strength, to enable us to work with perfect safety.”[26] Yet on her first trip down the Ohio, with twenty-one passengers aboard, the Washington’s boiler exploded, killing seven passengers and three crew. The blast threw Shreve himself into the river, but he did not suffer serious harm.[27] Ironically, the only steamboat built by the Evans family, the Constitution (née Oliver Evans), suffered a similar fate in the same year, exploding and killing eleven on board. Despite Evans’ confidence in their safety, boiler accidents continued to bedevil steamboats for decades. Though the total number killed was not enormous—about 1,500 dead across all Western rivers up to 1848—each event provided an exceptionally grisly spectacle. Consider this lurid account of the explosion of the Constitution:

One man had been completely submerged in the boiling liquid which inundated the cabin, and in his removal to the deck, the skin had separated from the entire surface of his body. The unfortunate wretch was literally boiled alive, yet although his flesh parted from his bones, and his agonies were most intense, he survived and retained all his consciousness for several hours. Another passenger was found lying aft of the wheel with an arm and a leg blown off, and as no surgical aid could be rendered him, death from loss of blood soon ended his sufferings. Miss C. Butler, of Massachusetts, was so badly scalded, that, after lingering in unspeakable agony for three hours, death came to her relief.[28]

In response to continued public outcry for an end to such horrors, Congress eventually stepped in, passing acts to improve steamboat safety in 1838 and 1852. Meanwhile, Shreve was not deterred by the setback. The Washington itself did not suffer grievous damage, so he corrected a fault in the safety valves and tried again. Passengers were understandably reluctant to risk an encore performance, but after the Washington made national news in 1817 with a freight passage upriver from New Orleans in just twenty-five days, the public quickly forgot and forgave. A few days later, a judge in New Orleans refused to consider a suit by the Fulton-Livingston interests against Shreve, effectively nullifying their monopoly.[29] Now all comers knew that steamboats could ply the Mississippi successfully, and without risk of any legal action. The age of the western steamboat opened in earnest. By 1820, sixty-nine steamboats could be found on western rivers, and 187 a decade after that.[30] Builders took a variety of approaches to powering these boats: low-pressure engines, engines with vertical cylinders, engines with rocking beams or flywheels to drive the paddles.
Not until the 1830s did a dominant pattern take hold, but when it did, it was that of the Evans/French/Shreve lineage, as found on the Washington: a high-pressure engine with a horizontal cylinder driving the wheel through an oscillating connecting rod.[31]

Figure 4: A Tennessee river steamboat from the 1860s. The distinctive features include a flat-bottomed hull with very little freeboard, a superstructure to hold passengers and crew, and twin smokestacks. The western steamboat had achieved this basic form by the 1830s and maintained it into the twentieth century.

The Legacy of the Western Steamboat

The Western steamboat was a product of environmental factors that favored the adoption of a shallow-drafted boat with a relatively inefficient but simple and powerful engine: fast, shallow rivers; abundant wood for fuel along the shores of those rivers; and the geographic configuration of the United States after the Louisiana Purchase, with a high ridge of mountains separating the coast from a massive navigable inland watershed. But, Escher-like, the steamboat then looped back around to reshape the environment from which it had emerged. Just as steam-powered factories had, steam transport flattened out the cycles of nature, bulldozing the hills and valleys of time and space. Before the Washington’s journey, the shallow grade that distinguished upstream from downstream dominated the life of any traveler or trader on the Mississippi. Now goods and people could move easily upriver, in defiance of the dictates of gravity.[32] By the 1840s, steamboats were navigating well inland on other rivers of the West as well: up the Tombigbee, for example, over 200 miles inland to Columbus, Mississippi.[33] What steamboats alone could not do to turn the western waters into turnpike roads, Shreve and others would impose on them through brute force. Steamboats frequently sank or took major damage from snags or “sawyers”: partially submerged tree limbs or trunks that obstructed the waterways. In some places, vast masses of driftwood choked the entire river. Beyond Natchitoches, the Red River was obstructed for miles by an astonishing tangle of such logs known as the Great Raft.[34]

Figure 5: A portrait of Shreve of unknown date, likely the 1840s. The scene outside the window reveals one of his snagboats, a frequently used device in nineteenth-century portraits of inventors.

Not only commerce was at stake in clearing the waterways of such obstructions; steamboats would be vital to any future war in the West. As early as 1814, Andrew Jackson had put Shreve’s Enterprise to good use, ferrying supplies and troops around the Mississippi delta region.[35] With the encouragement of the Monroe administration, therefore, Congress stepped in with a bill in 1824 to fund the Army’s Corps of Engineers to improve the western rivers.
Shreve was named superintendent of this effort, and secured federal funds to build snagboats such as the Heliopolis, twin-hulled behemoths designed to drive a snag between its hulls and then winch it up onto the middle deck and saw it down to size. Heliopolis and its sister ships successfully cleared large stretches of the Ohio and Mississippi.[36] In 1833, Shreve embarked on the last great venture of his life: an assault on the Great Raft itself. It took six years and a flotilla of rafts, keelboats and steamboats to complete the job, including a new snagboat, Eradicator, built specially for the task.[37] The clearing of waterways, technical advancements in steamboat design, and other improvements (such as the establishment of fuel depots, so that time was not wasted stopping to gather wood), combined to drive travel times along the rivers down rapidly. In 1819, the James Ross completed the New Orleans to Louisville passage in sixteen-and-a-half days. In 1824 the President covered the same distance in ten-and-a-half days, and in 1833 the Tuscorora clocked a run of seven days, six hours. These ever-decreasing record times translated directly into ever-decreasing shipping rates. Early steamboats charged upstream rates equivalent to those levied by their keelboat competitors: about five dollars per hundred pounds carried from New Orleans to Louisville. By the early 1830s this had dropped to an average of about sixty cents per 100 pounds, and by the 1840s as low as fifteen cents.[38] By decreasing the cost of river trade, the steamboat cemented the economic preeminence of New Orleans. Cotton, sugar, and other agricultural goods (much of it produced by slave labor) flowed downriver to the port, then out to the wider world; manufactured goods and luxuries like coffee arrived from the ocean trade and were carried upriver; and human traffic, bought and sold at the massive New Orleans slave market, flowed in both directions.[39] In 1820 a steamboat arrived in New Orleans about every other day. By 1840 the city averaged over four arrivals a day; by 1850, nearly eight.[40] The population of the city burgeoned to over 100,000 by 1840, making it the third-largest in the country. Chicago, its big-shouldered days still ahead of it, remained a frontier outpost by comparison, with only 5,000 residents.

Figure 6: A Currier & Ives lithograph of the New Orleans levee. This represents a scene from the late nineteenth century, way past the prime of New Orleans’ economic dominance, but still shows a port bustling with steamboats.

But both New Orleans and the steamboat soon lost their dominance over the western economy. As Mark Twain wrote:

Mississippi steamboating was born about 1812; at the end of thirty years, it had grown to mighty proportions; and in less than thirty more, it was dead! A strangely short life for so majestic a creature.[41]

Several forces connived in the murder of the Mississippi steamboat, but a close cousin lurked among the conspirators: another form of transportation enabled by the harnessing of high-pressure steam. The story of the locomotive takes us back to Britain, and the dawn of the nineteenth century.

The Pursuit of Efficiency and the Science of Steam

On April 19th, 1866, Alfred Holt, a Liverpudlian engineer who had apprenticed on the Liverpool & Manchester railroad before taking up steamship design in the 1850s, launched a singular ship that he dubbed the Agamemnon. As the third son of a prosperous banker, cotton broker, and insurer, he had access to far more personal capital to launch this new enterprise than the typical engineer. This was a lucky thing for him, because the typical investor of the time considered his ambition—to enter the China tea trade on the basis of steam power—foolhardy. A typical oceangoing steamship used five pounds of coal per horsepower per hour and could not compete with sail over such long distances: it would either have to fill most of its potential cargo space with coal or make repeated, costly stops to refuel.[1]

A contemporary photograph of Holt’s SS Agamemnon.

Yet, in the end, Holt pulled off his gamble. He benefited from good timing (perhaps a mix of luck and foresight): the opening of the Suez Canal in 1869 gave steamships a tremendous leg up in trade between Europe and the Indian and Pacific Oceans. But in designing ships dainty enough in their coal consumption to pay their way to the Pacific, he also benefited from the late convergence of two complementary developments that had each begun in the early 1800s but did not intersect until the 1850s. First was a series of incremental, empirical improvements to steam engine design: after the massive leap forward from Newcomen to Watt, further increases in steam engine efficiency would be less dramatic. Simultaneously, a theory of heat gradually developed that could explain what made engines more or less efficient, and thus point engineers in the most fruitful direction.

Double-Cylinder Engines

Boulton & Watt erected most of its early pumping engines in Cornwall. Trevithick developed his high-pressure “puffer” there. So, it is only fitting that the last major architectural innovation in piston steam engine design—featuring an entirely new structural component—was Cornish, too. In that region, an ample supply of British engineering talent met an always-eager demand for efficient engines. The ever-deeper mines for extracting metal ore needed ever more pumping power, despite significantly higher coal prices than in the coal-rich North. Joseph Hornblower, born in the 1690s, was one of the first engineers to build Newcomen engines for the mines of Cornwall in the 1720s. Sixty years later, his grandson Jonathan built the first known double-cylinder engine (later called a compound engine). Cornwall’s homegrown natural philosopher, Davies Giddy (later Gilbert), served Hornblower in the same office he would later fill for Richard Trevithick: scientific advisor. In principle, the idea was quite simple: instead of immediately condensing the remaining steam after the expansion cycle of the piston, the still-warm steam was fed into another cylinder to let it do still more work. However, this added friction, complexity, and cost to the machine. In practice, therefore, Hornblower’s attempted improvement proved no more efficient than a traditional Watt engine.[2]

Hornblower double-cylinder engine from Robert Thurston, A History of the Growth of the Steam-Engine, p. 136.

A generation later, however, another Cornishman took up the idea and carried it further.
Arthur Woolf, like many eighteenth-century engineers, got his start as a millwright, but by 1797 was working for the firm of Jabez Carter Hornblower (brother to Jonathan), erecting a steam engine at a brewery in London. He continued to serve as engineer for the brewery for a decade afterward, and witnessed the operation of Trevithick’s steam carriage in the city in 1803. Woolf realized that he could combine the double-cylinder engine of his former employer’s brother with Trevithick’s truly high-pressure engines (operating at forty pounds per square inch or more). The higher-pressure steam, still quite hot after expanding in the first cylinder, would be able to do more work in the second cylinder rather than simply “puffing” out into the atmosphere. Both Watt and Trevithick had (from opposite points-of-view) seen low- and high-pressure steam as rivals, but in Woolf’s machine they complemented one another.[3] But, as Hornblower had already learned, the path did not always run straight and easy from idea to execution. Woolf led himself astray with an entirely unsound theoretical model for the inner workings of his engine: he believed that steam at twenty pounds per square inch (psi) would expand to twenty times its volume before equaling the pressure of the atmosphere, steam at thirty psi would expand thirty times, and so on ad infinitum. This turned out to be a substantially exaggerated expectation, and led him to begin with a drastically undersized high-pressure cylinder, which let off far too little steam to effectively work its low-pressure mate. Rather than leading him to doubt his theory, the failure of this engine led him into a wild goose chase for a non-existent leak in his pistons.[4] Woolf’s double-cylinder engine, unlike Hornblower’s, did at last succeed, after years of trial and error, in achieving better efficiency than a Watt engine. But because it was more expensive to build (and thus buy), and more complex to operate, it found favor only in markets without easy access to other, cheaper options. One such example was France, to which Woolf’s erstwhile partner Humphrey Edwards decamped in 1815: there he sold at least fifteen engines and licensed twenty-five more to a French mining company. Woolf meanwhile returned to Cornwall in 1811, where he found the advantages of his double-cylinder engine soon surpassed by the incremental improvements made by other local engineers to the Boulton and Watt design. He abandoned it after 1824 and built single-cylinder engines until 1833, when he retired to the island of Guernsey.[5] Meanwhile, steam engine builders carried on with tweaks to get yet one more increment of efficiency out of their engines. They extracted advantages from adjustments to the regulatory machinery of the engine: elements like “release mechanisms,” “dashpots,” and “wrist plates.” The Corliss engine, designed by George Corliss in 1849, became an icon of American industrial design after his company produced a gargantuan specimen to power the 1876 Centennial Exhibition in Philadelphia. Mighty as it was, however, it did not represent a great leap forward in steam engine architecture. Corliss’ design drew its relative advantages over prior engines from a clever combination of previous innovations in the valves that allowed steam to enter and leave the cylinder, and especially in the valve gear that controlled them.[6]

Corliss engine valve gear from H.W. Dickinson, A Short History of the Steam Engine, p. 140.
In the meantime, the double-cylinder engine, having failed to prove itself in the 1810s and 1820s, lay dormant. It would be restored to life decades later, by the engineers most desperate to eke as much power as possible out of every ounce of coal: the designers of ocean steamships. But to facilitate the consummation of that match, a solid theory of the steam engine was wanted, one that would dispel, once and for all, confusions like Woolf’s that continued to trip up engineers’ efforts at improvement.

Measuring Power

The lack of a sound theoretical basis for steam power is evident in the fitful history of cylinder “lagging,” or insulation. Steam engineers borrowed the term lag (a barrel stave) from coopers, because they often insulated early steam boilers with such timbers, held in place with metal straps (this is evident in images of early locomotives like Rocket, with their distinctive wooden cladding).

A contemporary lithograph of Robert Stephenson’s engine Northumbrian. Note the wooden lagging on the boiler.

As early as 1769, Watt had recognized the value of insulating not just the boiler, but also the working cylinder of the engine (emphasis mine):

My method of lessening the consumption of steam, and consequently fuel, in fire-engines, consists of the following principles:—First, That vessel in which the powers of steam are to be employed to work the engine, which is called the cylinder in common fire-engines, and which I call the steam-vessel, must, during the whole time the engine is at work, be kept as hot as the steam that enters it; first by enclosing it in a case of wood, or any other materials that transmit heat slowly; secondly, by surrounding it with steam or other heated bodies; and, thirdly, by suffering neither water nor any other substance colder than the steam to enter or touch it during that time.[7]

Yet, despite Watt’s imprimatur, steam engine builders lagged their cylinders sporadically throughout the first half of the nineteenth century; it was a matter of whim, not principle.[8] In this era, engineers tended to think of the steam engine as analogous to its predecessor, the water wheel. Steam replaced liquid water as the mechanical working fluid, but just as water drove the wheel by pushing on its vanes, in their minds steam performed work by expanding and pushing on the piston. A typical description of the time stated that “[t]he force of the steam-engine is derived from the property of water to expand itself, in an amazing degree, when heated above the temperature at which it becomes steam.”[9] Engineers knew that the cylinder ought to be kept hot to prevent condensation of the steam inside, but within this framework it was not obvious that it ought to be kept as hot as possible. Watt, emphasizing the contrast between the hot cylinder and the cool condenser, had drawn attention to the role of heat in the engine, but the introduction and success of high-pressure engines with no condenser, where the primary factor seemed to be the expansive force of steam, muddled matters once again. The gradual development of a new, more robust theory began with a practical problem: how to measure the amount of power an engine generates. This became a particularly pressing problem for Boulton & Watt in the late eighteenth century, as they expanded from the traditional business of pumping engines into the new market of driving cotton mills.
The traditional way of measuring the output of a steam engine, in terms of “duty” (the pounds of water lifted one foot per bushel of coal burned), had gradually been supplemented with the concept of “power,” typically expressed in horsepower: pounds lifted over a given distance, but over a given period of time rather than with a given amount of fuel. Thomas Savery had begun to grope towards the concept in his 1702 book on the virtues of his steam pump, The Miner’s Friend: I have only this to urge, that water, in its fall from any determinate height, has simply a force answerable and equal to the force that raises it. So that an engine which will raise as much water as two horses working together at one time in such a work can do, and for which there must be constantly kept ten or twelve horses for doing the same, then, I say, such an engine will do the work or labour of ten or twelve horses…[10] Note here that Savery proposes to measure the muscular equivalent of the engine not in terms of the output of just the pair of horses running the machinery, but in terms of the total stock of horses that a mine owner would require to maintain the same power over a long period of time. This model of horsepower in terms of economic equivalency did not stick, however, and by the late eighteenth century horsepower became fixed to Watt’s figure of 33,000 foot-pounds per minute. Yet this remained a measure of power best suited to pumping work: if a mine needed to raise 20,000 pounds of water per hour from a 200-foot-deep shaft, one could readily calculate the engine horsepower required. Cotton spinning machinery—which varied in size, function, and design—did not lend itself to such simple arithmetic. In order to properly size engines to mills, Boulton & Watt needed some way to measure the horsepower produced by an engine while driving various combinations of machinery. From the beginning, Watt had attached gauges to his engines to measure the pressure inside the engine, by connecting a small indicator cylinder to the main engine cylinder so that steam could flow between them. The level of pressure in the indicator could serve as a proxy for power output. But to actually capture the data was a maddening exercise, because the pressure varied constantly as the piston worked up and down. A means of capturing this continuous data came from a long-time Watt employee, John Southern. He had joined the company as a draftsman in 1782, and despite a predilection for music that the strait-laced Watt found suspicious, quickly became indispensable.[11] Southern’s indicator, as envisioned by Terrell Croft, Steam-Engine Principles and Practice, p. 40. In 1796, Southern devised a simple device to solve the power measurement problem. He attached a piece of paper above the indicator, rigged so that it would move back and forth as the main piston operated. Then he attached a pencil to the tip of the pressure gauge. As the pressure went up and down, so would the pencil, while the paper moved left and right beneath it with the cycle of the engine. The result, when running smoothly, would be a closed shape, which Southern called an indicator diagram, and the average pressure during the operation of the engine could be computed from the average distance between the top and bottom lines of that shape, which would in turn be proportional to the power.
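A small sketch of both calculations may make the arithmetic concrete. All figures here are invented for illustration; only the methods, Watt’s 33,000 foot-pounds per minute and the area of the indicator loop as the work done in one cycle, come from the text.

```python
# Sketch of the two power calculations described above. Figures are illustrative.

FT_LB_PER_MIN_PER_HP = 33_000  # Watt's definition of one horsepower

# 1. The easy case: a pumping engine raising 20,000 pounds of water per hour
#    from a 200-foot-deep shaft.
ft_lb_per_min = 20_000 * 200 / 60
print(f"pumping engine: {ft_lb_per_min / FT_LB_PER_MIN_PER_HP:.1f} hp")

# 2. The indicator-diagram case: sample (volume, pressure) points traced around
#    one cycle. With volume in cubic feet and pressure in pounds per square foot,
#    the enclosed area comes out in foot-pounds.
cycle = [
    (1.0, 2000.0), (2.0, 1800.0), (4.0, 1200.0), (6.0, 800.0),  # expansion stroke
    (6.0, 200.0), (4.0, 200.0), (2.0, 200.0), (1.0, 200.0),     # return stroke
]

def loop_area(points):
    """Area enclosed by the closed loop (shoelace formula) = work per cycle."""
    total = 0.0
    for i in range(len(points)):
        v1, p1 = points[i]
        v2, p2 = points[(i + 1) % len(points)]
        total += v1 * p2 - v2 * p1
    return abs(total) / 2.0

work_per_cycle = loop_area(cycle)   # foot-pounds per cycle
cycles_per_minute = 20              # illustrative engine speed
indicated_hp = work_per_cycle * cycles_per_minute / FT_LB_PER_MIN_PER_HP
print(f"indicated power: {indicated_hp:.1f} hp")
```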
By calibrating the diagram while an engine was pumping water, where the power output was well-defined, Boulton & Watt could then determine the power produced by the same engine while operating a given set of mill machinery.[12] An ideal indicator diagram from Terrell Croft, Steam-Engine Principles and Practice, p. 60. Thermodynamics Engineers now had a tool at hand for diagnosing the internals of a running engine. That tool, in turn, provided the seed for the birth of the science of thermodynamics, which began as the science of the steam engine. The first great leap in that direction was made by Sadi Carnot. Carnot’s story carries more than a whiff of the tragic. Though later honored as a founding father of thermodynamics, he achieved no recognition in his lifetime, and died of cholera as a still-young man in 1832. His father Lazare was an accomplished engineer and a major political figure in revolutionary France, but what we know of the son comes almost entirely from a fifteen-page biography sketched decades after the fact by his younger brother Hippolyte, which begins, pathetically, with the statement that “the life of Sadi Carnot was not marked by any notable event…”[13] Carnot as an École student in 1813. In fact, Carnot’s short life was remarkably eventful. He grew up in Napoleon’s court, attended the elite engineering school École Polytechnique at age 16, and was at the Chateau Vincennes during the 1814 assault on Paris that ended Napoleon’s first reign. He returned to Paris as a staff lieutenant in 1819, filling his free time with his passions: music, art, and scientific studies. There, in 1824, he produced his seminal work, Réflexions sur la puissance motrice du feu (Reflections on the Motive Power of Fire). In it he endeavored to explain how heat produces motion. I will allow him to elaborate in his own words: Every one knows that heat can produce motion. That it possesses vast motive-power no one can doubt, in these days when the steam-engine is everywhere so well known. To heat also are due the vast movements which take place on the earth. It causes the agitations of the atmosphere, the ascension of clouds, the fall of rain and of meteors, the currents of water which channel the surface of the globe, and of which man has thus far employed but a small portion.[14] As we have seen, the tendency of engineers to conceive of steam hydraulically, as a fluid that generated work through pressure much like water in a water wheel, had engendered some confusion about how to build and operate an engine most efficiently. Ironically, Carnot moved the understanding of the steam engine forward by taking the analogy of a steam engine to a water wheel even more seriously than his contemporaries. However, for him the key power-generating agent was not the pressure of steam, but the fall of heat. Just as a waterwheel required a head from which water descended by gravity to turn the wheel, so the steam engine required a reservoir of high heat, which then flowed down to a cold body and thereby did work. For Carnot this fall of heat in a steam engine was quite literal: it consisted of an imponderable fluid called caloric, which drained out from the hot body to the cool one: The production of motion in steam-engines is always accompanied by a circumstance on which we should fix our attention. This circumstance is the re-establishing of equilibrium in the caloric; that is, its passage from a body in which the temperature is more or less elevated, to another in which it is lower.
…The steam is here only a means of transporting the caloric.[15] This caloric theory of heat as a substance still predominated in Carnot’s day, despite subversives like Count Rumford, who advocated for a mechanical theory of heat, which understood heat purely as a form of motion. If the flow of heat from the hot to the cold body produced all the work in the steam engine, then making an efficient engine meant minimizing any spillage of heat that did no useful work. It also implied that to maximize the work produced by the engine, one must maximize the difference between the source of high temperature and the sink of low temperature—the height through which the caloric fluid falls. Carnot’s book was largely ignored. But his insights had their first chance to be rescued from obscurity shortly after his death. Émile Clapeyron, just a few years younger than Carnot, was an accomplished engineer who specialized in locomotives, and a fellow-graduate of the École Polytechnique. In 1834, he published a paper in the school’s journal showing that Carnot’s heat engine theory could be expressed in the language of calculus and seen graphically in the indicator diagram: the area inside the diagram (which could be expressed as an integral) corresponded to the work performed by the heat transfer in the engine. Clapeyron’s work revived Carnot’s abstractions, put them on a firmer mathematical basis, and publicized them to the community of engine builders. Yet once again, they reached a dead end. Steeped in the traditions of their craft, neither Clapeyron nor his peers seems to have understood the heat engine theory as having practical applications to real-life engineering.[16] Vindication for Carnot would have to wait another fifteen years, when a series of exchanges between William Thomson (later Lord Kelvin), Rudolf Clausius, and James Joule shortly before and after 1850 resolved various problems with the Carnot-Clapeyron heat engine, including reconciling it with the mechanical theory of heat: what flowed from the hot to the cold body was not a literal fluid but an abstraction called energy, which could take on many forms, but could only perform useful work over a fall in temperature. Through the medium of energy, a certain quantity of heat was directly equivalent to a certain amount of mechanical work.[17] The scientist who best synthesized this new science of heat for a wider engineering audience was Thomson’s colleague at the University of Glasgow, Macquorn Rankine. Perfecting the Marine Engine Rankine’s position was something of a novelty: he was only the second person to hold a chair of Civil Engineering at Glasgow, a position established by Queen Victoria in 1840. From the days of Watt and beyond, the University of Glasgow had been more practical-minded than the great Oxbridge schools of the South.
But the establishment of a faculty chair in engineering did not just indicate that the university supported more hardheaded tasks than absorbing classical learning; it also signaled a desire to elevate engineering into a more theoretical, scientific discipline.[18] A leonine Rankine. Rankine, embodying this new spirit and straddling the worlds of theory and practice, preached thermodynamics to the engineering world: his A Manual of the Steam Engine and Other Prime Movers (1859), a 500-page, densely mathematical treatise, explicated the new theory and its applicability to practical matters in great detail and popularized the term “thermodynamics.” However, he also knew how to reach a wider audience: in an 1854 address to the Liverpool meeting of the British Association for the Advancement of Science (BAAS) he concisely expressed the laws of thermodynamics in terms of ordinary English and simple arithmetic: “As the absolute temperature of receiving heat is to the absolute temperature of discharging heat, so is the whole heat received to the necessary loss of heat.” That is, the more precipitous the fall of temperature from the high (receiving) to the low (discharging) point of the engine cycle, the more efficient the engine could be.[19] (A rough worked example of this proportion appears at the end of this section.) Among those in Rankine’s circle of influence in the 1850s was an experienced builder of marine steam engines in Glasgow named John Elder, who became the first to incorporate a double-cylinder engine into a successful steamship. Elder had marine engines in his blood: his father David had joined Robert Napier’s engine building firm and began designing steamboat engines in 1821. In addition to family tradition and his natural talents, Elder had two other advantages in this undertaking. First, he had access to Glasgow’s “thermodynamic network” (as the historian Crosbie Smith put it); he had tutors in the new thermodynamic science and probably got specific advice from Rankine to introduce steam jacketing to prevent condensation in the cylinder. Second, he had an eager buyer.[20] An anonymous engraving of John Elder. The Pacific Steam Navigation Company (PSNC) of Liverpool had overextended itself in the South American Pacific-coast trade, where high-quality steam coal could arrive only by a 19,000-mile round-trip supplied by sail. Profit margins were slim to none, and the venture stayed in the black only by virtue of a government mail contract. This made the company willing to wait out teething problems in order to get a more efficient engine.
From the time Elder and his partner took out their engine patent in January 1853, it took four years before PSNC confirmed the superiority of their ship Valparaiso, which consumed 25% less coal than an equivalent single-cylinder model.[21] Elder’s success set the stage for Alfred Holt’s further vault forward in the 1860s. Among the latter’s achievements was to convince the Board of Trade that marine engines could operate safely at higher pressures, allowing a greater fall of temperature and thus more efficient use of fuel. This, in turn, set the stage for triple-expansion engines later in the century, which extracted still more work from the heat as it fell from boiler to condenser. This polyphonic fugue of machinery heralded the age of steam’s baroque period, which engendered the fantasias of steampunk a century later. By about 1890, a triple-expansion engine, running at 160 pounds per square inch, could consume one-and-a-half pounds of coal per horsepower per hour, less than a third of the going rate a few decades before, and about a fifth of what Watt’s engine had consumed.[22] Cutaway of an 1888 Austrian triple-expansion engine, in the Vienna Technical Museum [Sandstein / Creative Commons Attribution 3.0 Unported]. Yet even as it thrust the age of steam up towards its apex, thermodynamics pointed out the weak spot that would lead to its downfall. In his 1854 speech to the BAAS, Rankine had touted the advantages of the air engine, a device invented by the Scotsman Robert Stirling that used hot air as its working fluid. As Rankine pointed out, the laws of thermodynamics have nothing in particular to do with steam, but hold “true for all substances whatsoever in all conditions…” Air had a decided advantage over steam insofar as it could be driven to very high temperatures without creating very dangerous pressures: “For example, at the temperature of 650° Fahr. (measured from the ordinary zero,) a temperature up to which air engines have actually been worked with ease and safety, the pressure of steam is 2100 pounds upon the square inch; a pressure which plainly renders it impracticable to work steam engines with safety….”[23] The Stirling air engine did not, in the event, prove to be the slayer of steam. Its use never expanded beyond occasional low-power domestic applications. But it brought the first adumbration of the coming eclipse. Stirling air engine – harbinger of doom? [Paul U. Ehmer / CC-BY-SA-4.0]
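As promised above, a rough worked example of Rankine’s proportion. In modern terms it is the Carnot limit: at best, an engine converts a fraction 1 - T_cold/T_hot of the heat it receives into work, with the temperatures measured on an absolute scale. The figures below are my own illustrative choices, not numbers from the text; 185 °C is roughly the saturation temperature of steam at the 160-psi pressures mentioned above.

```python
# Rankine's Liverpool proportion, restated as the modern Carnot limit:
# at best, a heat engine converts a fraction (1 - T_cold / T_hot) of the heat
# it receives into work, with temperatures on an absolute scale.
# Temperatures below are illustrative, not taken from the text.

def best_possible_efficiency(t_hot_c, t_cold_c):
    """Carnot limit for heat received at t_hot_c and discharged at t_cold_c (degrees Celsius)."""
    return 1.0 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

condenser_c = 40  # a typical condenser temperature, roughly
for boiler_c in (120, 185):  # low-pressure marine boiler vs. steam at roughly 160 psi
    limit = best_possible_efficiency(boiler_c, condenser_c)
    print(f"steam at {boiler_c} C, condenser at {condenser_c} C: "
          f"at most {limit:.0%} of the heat can become work")
```

Real engines fell far short of these ceilings, but the trend is the point: the greater the fall of temperature, the higher the ceiling, which is why Holt’s higher boiler pressures and the later triple-expansion engines paid off.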

Twilight of the Age of Steam, Part 2: Petroleum and After

In the mid-nineteenth century, a new industry emerged, based on the refining of petroleum. The human use of petroleum is ancient, and may (for all we know) date well into pre-history. In its most common natural form, thick pools of bituminous tar, this sticky, potentially flammable substance found use as a caulk, an embalming fluid, a lubricant, a weapon of war, and a medicine. Another source of petroleum, less widely exploited but also known for centuries, was oil seeps, where liquid petroleum emerged from the ground. Oil Creek, Pennsylvania, was named after the substance that tended to pool along its edge, and its skimmings were sold as a cure-all at least as far back as the eighteenth century. But until the middle of the nineteenth century, petroleum was never used extensively as a fuel source for heat or light.[1] Petroleum: A Product in Search of Solutions Why did it take so long for this substance, the load-bearing keystone of the industrial society of the twentieth and twenty-first centuries, to be tapped for its energy? There were several obstacles. The first was chemical: it was not obvious how to extract a useful fuel from raw petroleum, and the chemical knowledge and techniques needed to guide this distillation did not emerge until the nineteenth century. The second obstacle was industrial: to refine petroleum in a laboratory was one thing; to do it at the scale needed to create a national or global market was another. The final obstacle was logistical: tar (or bitumen, or asphalt) was heavy (it had to be mined, not pumped), and difficult to process. Oil appeared at the surface in fairly small quantities, and there was no obvious reason to believe, given geological knowledge at the time, that it represented leakage from a large, liquid reservoir below, nor were there obvious means to access such a reservoir if it existed. By the mid-nineteenth century the tools were at hand in North America and Europe to overcome all of these obstacles, given enough effort. Europeans had begun constructing large-scale chemical manufacturing plants, primarily for the making of soda, earlier in the century. Around 1860, the chemical industry began a massive expansion into synthetic organic compounds, primarily dyes extracted from the tarry residue left over after cooking coal to make illuminating gas. Chemical knowledge advanced rapidly over this same period, in part because of its increasing commercial value. Meanwhile, salt miners had created the boring machinery needed for drilling deep into the earth, which they used to extract subsurface brine.[2] But a reason was still needed to put in the effort to apply these tools. The reason, as it turned out, was a simple enough one: the ancient incantation of “let there be light.” As we have already seen, the nineteenth century had brought forth an ever-growing demand for illumination. Modernity, with its contempt for the rhythms of nature, had taken a firm hold in the culture of Europe and North America. Town gas had brought on-demand light to urban public spaces and townhouses, but even the poor and those living far from the bustle of cities no longer accepted that the setting of the sun must mean the end of the day’s work and leisure, nor was the feeble light of a candle deemed good enough any longer. Whale oil was the premium-grade illuminating fluid of the time. It gave off a clean, bright light, but it cost as much as $2.50 a gallon in the United States.
With the remaining whales disappearing from every corner of the oceans under the harpoon thrusts of a hundred Ahabs, prices could only be expected to rise. Lamps burning animal fat or vegetable oil could serve if nothing else was available, but the most popular cheap substitute, at only 50 cents per gallon, was camphene, made from tree resin and alcohol. Volatile camphene, however, burned all too readily, bringing with it the risk of deadly fires or even explosions. It also had a rather unpleasant odor.[3] Typical nineteenth-century oil lamps. The glass reservoir above the base held the oil while the chimney kept the flame tall and protected from drafts. [National Parks Service, Gateway Arch] Abraham Gesner, a Canadian doctor and amateur geologist and chemist, was the first to figure out how to turn petroleum into a useful source of light. In 1849, he distilled a coal oil from bitumen that came from a huge lake of tar in Trinidad. He dubbed the oil kerosene: “keros” from the Greek for wax (because of its wax-like solid form) and “-ene” to invoke the familiar camphene. Within a decade, he had set up a plant in New York making five thousand gallons of his illuminating oil a day, in competition with dozens of other companies making similar products.[4] But tar-based illuminants were always limited by the difficulty of extracting and transporting the heavy raw material from its limited, often remote, origin sites. A group of northeastern businessmen led by George Bissell, a New York lawyer and man of all parts, became convinced that liquid rock oil would solve this problem and provide an alternative to whale oil of equal quality at a much lower price. So it was that Oil Creek became the epicenter for an eastern Gold Rush. Bissell and his colleagues convinced Yale chemist Benjamin Silliman to have a crack at distilling the oozings of Oil Creek into an effective lighting source, a feat he accomplished in 1855. It took a further four years for the group to hit paydirt at Oil Creek, drilling down seventy feet through bedrock with the same techniques used to bore for salt water. Then, finally, the oil boom was on.[5] The Drake well near Oil Creek in 1859. From Ida Tarbell, History of the Standard Oil Company. The illuminant that Silliman had extracted from raw oil also acquired the name kerosene, due to its similarity to Gesner’s tar extract. It provided a cheaper, cleaner, and brighter alternative to whale oil. During the 1860s, an entire industry emerged to extract, transport, and burn this kerosene, and in the 1870s it came under the increasing domination of John Rockefeller’s Standard Oil. Then came electric light: a potential threat to the illuminating oil business, just as it was to the illuminating gas business. As long as it remained confined to dense town centers, electricity was more of a problem for gas than kerosene, given the latter’s advantage in portability. But the advent of long-distance electrical transmission around the turn of the century changed matters. It would take huge capital investments over many years to bring electricity to every town in the richer parts of the world, but it was only a matter of time.
The oil magnates were on the lookout for a new kind of buyer for their product, before electric light relegated it to a niche fuel, useful only to the most rural and remote customers.[6] Petroleum and Internal Combustion: Symbiosis Conveniently, while the extractors and refiners of petroleum were looking for new buyers for their product, makers of combustion engines were looking to petroleum for a new source of fuel. Illuminating gas made for a convenient fuel supply in towns with the infrastructure for piping gas already in place, but as long as their engines depended on it, they could not make sales to more rural workshops. What’s more, a combustion engine, requiring neither firebox, nor boiler, nor water tank, had great potential as a lightweight motor for a moving vehicle: it could finally make practical the dream of the self-propelled carriage, a dream dreamed since the time of Nicolas Cugnot and even earlier, but which no one had been able to realize using bulky steam engines. Attempts to use more portable liquid fuels began in the 1870s: reliable, familiar kerosene was one possibility. But even more attractive was a volatile newcomer, gasoline: a light distillate of petroleum which as yet had little commercial value. A liquid hydrocarbon fuel had to be mixed with air before being put into the cylinder, a process called carburetion, and lighter gasoline vaporized and carbureted more readily than heavier fuels. Several effective gasoline carburetors were invented in the mid-1880s at Deutz (home of the Otto engines), Karl Benz’s Benz & Cie, and the new workshop of Daimler and Maybach, who left Deutz together in 1882 to pursue the design of small, portable engines.[7] Daimler and Maybach wanted to create a general-purpose light engine that could be used equally well in a workshop or a mobile vehicle. To do this, they needed to generate more horsepower with less weight than a typical stationary engine, and the easiest way to do that was to turn the engine faster, generating far more rotations per minute (a typical Otto engine ran at 100 rpm or less). This, in turn, required a new ignition mechanism: the typical engine of the time used an ignition flame, re-lit on each cycle of the engine from a permanent pilot light, then snuffed out again. This mechanical process was too slow for the speeds Daimler and Maybach wanted, so their Standuhr engine (so-called because it resembled an upright pendulum clock) instead used a hot tube of metal that protruded into the cylinder for ignition. A flame outside the cylinder kept the tube at the right temperature to ignite the fuel-air mixture at the desired point in the compression stroke (the more compressed the mixture, the more easily it ignited). With a hot tube, surface carburetor, and water cooling, the Standuhr reached 650 rpm. One could be found puttering about Daimler’s property in September 1886, mounted to a carriage, providing, aptly, about one horsepower.[8] The Daimler Standuhr engine at the Mercedes-Benz Museum [morio / CC BY-SA 3.0 DEED]. Benz, on the other hand, set out to make a vehicle designed from start to finish as a motor car, with a bicycle frame as the basis for the chassis, and a new engine custom-designed for the vehicle. He used electric rather than hot-tube ignition: a safer approach than the hot tube and with the potential for more precise control, but finicky with the battery technology available at the time.
He did not achieve the same engine speeds as Daimler, but he, too, made a working vehicle.[9] Over the following years, Daimler, Benz, and other inventors (primarily in Germany, France, Britain, and the U.S.) steadily refined the combustion automobile, borrowing features from both the Daimler and Benz traditions and making many other improvements (including ones that made electrical ignition much more reliable), to produce something recognizable as the template for the modern automobile by about 1900. This new market for petroleum as a vehicle fuel rather than a light source was small, but promising, and poised to grow (dare I say it) explosively in the following decades. The 1901 Daimler “Mercedes”, typically taken as the common ancestor of the modern automobile. Diesel Gasoline engines created a new market for light, self-powered vehicles—automobiles and later aircraft—but did not supplant steam engines in the role of pulling heavy loads on steamships and locomotives. It was another genus of combustion engines that would snuff out the nimbus of steam that had shrouded ports and train stations throughout the nineteenth century. The fire piston, a simple but ingenious fire-starting device, was used in Southeast Asia and the surrounding islands for centuries—no one knows exactly how long. It consists of a hollow wooden cylinder into which a matching piston can be snugly fitted. The piston contains a small niche at the end to hold flammable tinder. When the piston is forced rapidly into the cylinder, the compression heats the air enough to ignite the tinder. Fire pistons were found across Europe in the mid-nineteenth century, and were often used, like Volta’s electric pistol, in scientific demonstrations. The principle of their operation provided the foundation for this new genus of combustion engine, the kind we now know by the name of its inventor, Rudolf Diesel.[10] A collection of fire pistons from various parts of Asia. From Henry Balfour, “The Fire-Piston” in Anthropological Essays Presented to Edward Burnett Tylor in Honour of his 75th Birthday (Oxford: Clarendon Press, 1907), plate II. Diesel had a peripatetic childhood: born in 1858 in Paris to parents from Augsburg in Bavaria, briefly exiled to London due to the anti-German feelings sparked by the Franco-Prussian War, he returned with his family to Paris, then moved in with cousins in Augsburg in 1873, where he enrolled in an industrial high school. It was likely at Augsburg that Diesel witnessed a demonstration of a fire piston with glass walls, exposing the magic moment of ignition. Did the sight of this spark kindle the concept for a new kind of engine deep in the recesses of young Diesel’s mind? It’s unlikely: he did not create his famous engine for another twenty years. But he remembered the event well enough, and considered it important enough, to relate it to his children years later.[11] After completing his schooling at Augsburg, Diesel moved on to study engineering at Munich’s Polytechnic School (founded by the “Mad” king Ludwig II, and now known as the Technical University of Munich).
There he was disgusted to learn how thermodynamically inefficient were the steam engines that powered industrial society; he became fascinated by Carnot and wondered how to replicate a perfect Carnot heat engine, to convert the heat from coal directly into work “without intermediaries.” He also met an important mentor, Carl von Linde, a mechanical engineering professor with a sideline in refrigeration machines.[12] After college, with the aid of his mentor, Diesel entered the refrigeration business; he worked in France throughout the 1880s, selling refrigeration equipment and patenting his own ice-making machines, while toiling away at his solution to the heat engine puzzle. Another victim of the seductive reasoning that the problem with the steam engine was steam, Diesel was building an engine that would use ammonia for its working fluid instead. He presented his creation at the Exposition Universelle of 1889, in the shadow of Eiffel’s grand new tower. No great acclaim followed; this was not the Diesel engine that we know today. The exposition also marked the end of his life in France; with anti-German sentiment on the rise again, Diesel moved to Berlin in 1890.[13] At the same exhibition Diesel could have seen Benz’ three-wheeled Patent-Motorwagen and Daimler and Maybach’s Standuhr engine. Perhaps it is no coincidence, then, that around this time he abandoned his ammonia engine and developed the idea for a new kind of internal combustion engine, one that would achieve new heights of efficiency through “the extreme compression of ordinary air.” How would it work? His goal was to achieve an ideal Carnot cycle by maintaining constant temperature in the chamber throughout combustion. His conceptual four-stroke engine would compress air to an astonishing 250 atmospheres, driving the temperature up to 900 degrees Celsius. The fuel could not be premixed with the air in the chamber, as in the gasoline engine, because it would then ignite too early. Instead, he would inject the fuel just as the piston reached dead center on its compression stroke. The fuel would then ignite immediately like the kindling in a fire piston, driving the piston back and giving up all of its heat to expanding the air back to its original volume, with no heat wasted to the cylinder walls. Diesel filed for a patent on this idea in 1892.[14] The problem was the 250 atmospheres. This was, to his contemporaries, an absurd figure; as one Diesel biographer points out, “[a]s far as was known in 1892, pressures of that magnitude had only occurred in volcanoes and bombs.” Diesel expected his good friend Heinrich Buz, head of the Bavarian manufacturing firm Maschinenfabrik Augsburg, to help him construct his engine, but Buz would have nothing to do with it until Diesel agreed to reduce his pressure requirement to about forty atmospheres, at which point Buz agreed to build an experimental engine.[15] It took Diesel and his colleagues at Maschinenfabrik Augsburg two years to get this experimental engine to run for a full minute straight, and a further three years (until early 1897) to get a really usable engine. The greatest engineering challenge was the fuel injector, which had to add just the right amount of fuel at just the right time within a window of less than one hundredth of a second, into a cylinder pushing back on the injector at far higher than atmospheric pressure.
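The fire-piston principle behind all of this can be put in rough numbers. The sketch below is my own illustration, using the textbook adiabatic relation for air rather than Diesel’s own calculations; real cylinders lose heat to their walls, so actual temperatures run lower than these ideal figures.

```python
# Why squeezing air ignites fuel: rapid (adiabatic) compression raises the
# temperature as T2 = T1 * (P2/P1)^((gamma - 1) / gamma).
# Textbook values for air, not Diesel's own figures.

GAMMA = 1.4  # ratio of specific heats for air

def adiabatic_temperature(t_start_k, pressure_ratio):
    """Temperature of an ideal gas after adiabatic compression by the given pressure ratio."""
    return t_start_k * pressure_ratio ** ((GAMMA - 1) / GAMMA)

t_start = 293.0  # about 20 degrees Celsius
for ratio in (10, 40, 250):  # modest compression, Buz's forty atmospheres, Diesel's original 250
    t_end = adiabatic_temperature(t_start, ratio)
    print(f"compressed {ratio:3d}-fold in pressure: roughly {t_end - 273:.0f} C")

# Kerosene and heavier oils ignite at a few hundred degrees Celsius, so even
# the reduced forty atmospheres left an ample margin for compression ignition.
```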
Gasoline would not do as a fuel for diesel engines because it would begin combustion too early: its volatility, an asset for easy mixing with air in an Otto engine, became a liability. Kerosene and other heavier oils worked well; vegetable oils and even coal dust were considered as alternatives.[16] Diesel’s 1894 engine, which ran for one minute straight [Tila Monto / CC BY-SA 3.0 DEED]. The 1897 engine was not the ideal heat engine that Diesel had first dreamed of, but at 26% efficiency (including both heat and mechanical losses), it surpassed any of its competitors: a typical Otto gasoline engine sat at around 15%. Because of the need to withstand high pressures in the chamber and for a separate pump to supply the fuel injector, it was heavy and bulky compared to Otto-style engines, but its enormous compression ratios allowed it to produce high power output at low rpm, a valuable feature for overcoming inertia when starting heavy machinery or pulling a large load from a standstill. Over the long run Diesel engines would also prove more durable and reliable than their gasoline counterparts.[17] Diesel and Buz now had something much closer to a marketable engine, and they soon had licensing agreements for construction of engines in Scotland, France, and Germany – including an agreement with Deutz, the leaders of the German internal combustion market. Adolphus Busch, the St. Louis beer magnate, acquired the exclusive rights to make Diesels in the United States for one million German marks, while Emmanuel Nobel (nephew of Alfred) acquired similar rights in “all the Russias” for 800,000 marks.[18] The German cruiser Deutschland (sometimes called a pocket battleship), commissioned in 1933, was the first large warship powered by diesels. Because their high compression ratios created high torque at low rpm, diesel engines became the engine of choice for heavy-duty applications: ships, tractors, electrical power (Kiev used diesel for its streetcar system) and eventually trains. Each application brought its own challenges, which required new inventive creativity to overcome: the size and weight of the early diesel engines, for example, as well as the inability to put them in reverse, made them unsuitable for shipboard use, while diesel-electric locomotives (which used electric motors powered by diesel-generated electricity) overcame the complex mechanical transmission problems of bringing direct diesel power to all the wheels of the locomotive. It would fall to other men than Diesel to overcome these obstacles, allowing the diesel to continue to scale up in power and find new uses over the decades. In 1911, the Swiss manufacturing firm Winterthur built a diesel with a cylinder one meter across that generated about 2,000 horsepower. The following year, the ten-thousand-ton diesel-powered Danish freighter Selandia debuted to great success, “…carrying cargo faster, farther, and cleaner than steam-powered freighters, on less fuel and without any stops for bunkering.” In 1934, the Pioneer Zephyr achieved a new record time on the Denver-to-Chicago run under diesel-electric power, at an average speed of seventy-seven miles per hour.[19] Diesel at Thomas Edison’s West Orange, New Jersey lab in 1912, looking rather stiff. [U.S. National Park Service] The later life of Diesel the man was less happy than that of his engine. A proud inventor, he was stung by the withering critiques of engineers who emphasized how little he had contributed to the creation of the diesel engine in its many useful forms.
Profligate with money, he seemed compelled to live as if his means were inexhaustible rather than merely substantial. A pacifist, he was disturbed by the growing bellicosity of the European powers in the opening years of the twentieth century. Among his friends, Diesel could count Charles Parsons of steam turbine fame. While they dined together at Diesel’s home in June 1913, “…56,000 kilowatt Parsons turbines were being built into the five British battleships of the Queen Elizabeth class…and pairs of ultralight 1250-kilowatt Diesel engines were being installed in the Unterseebooten U-19 to U-23, the first Diesel-powered submarines of the German Navy.” Several months later, at age 55, a broke and broken Diesel hurled himself mid-sea from the deck of the steamer Dresden.[20] Gas Turbines By the end of the first third of the twentieth century, things looked increasingly grim for the age of steam. Upstaged by gasoline engines in the new markets for cars, trucks, and aircraft, supplanted at the low end of the power spectrum by easy-to-use combustion engines, and with its remaining share of rail and sea power steadily eroding, steam had one last, unconquered bastion. The generation of electric power remained dominated by a mix of coal-fired steam and hydroelectricity. Up through the 1930s, they more or less split global electrical production between them, with coal tending to take the larger share.[21] But combustion held one last insult to inflict upon steam, albeit a long-delayed one. A side effect of petroleum extraction operations was the emission of large quantities of hydrocarbon gas, called natural gas in contrast to the longstanding synthetic coal gas industry. In a sense it was the lightest petroleum distillate of all, pre-separated in the bowels of the earth and released from the borehole like an enormous, long-held flatulation. Like other forms of flatulence, natural gas was mostly treated by the petroleum industry as an undesirable embarrassment. But by the 1880s, pipeline technology had become good enough to bring gas from wells to nearby cities, where it could provide light and heat with a cleaner burn and greater energy density than coal gas. During the early U.S. petroleum boom in Appalachia and the Midwest, natural gas came to Pittsburgh from the Haymaker well, about twenty miles east of the city. Originally drilled in search of oil, the Haymaker had been left abandoned and burning for four years before being tamed and tapped. Natural gas came to Chicago from the wells to the southeast that gave Gas City, Indiana its name. Then, after coming, the gas went again, as these northern gas fields ran dry.[22] The situation changed radically when the second American oil and gas boom took off in the Southwest in the early twentieth century. Prospectors found trillions of cubic feet of natural gas beneath their feet in Louisiana and Texas. This created a huge imbalance in supply and demand between southern producers, “flaring or simply venting into the atmosphere hundreds of millions of cubic feet per day of natural gas,” and northern cities eager for a clean, cheap source of heat.[23] Huge investments of capital in the 1920s and 30s would finally rectify that imbalance. A group of investors, including Samuel Insull, best known as the czar of Chicago’s electrical system, pooled their resources to build a massive pipeline from Texas to Chicago.
Nearly a thousand miles long and two feet in diameter, it could carry gas at a rate of 210 million cubic feet per day, ten times the capacity of the defunct lines that had once supplied Chicago from the Indiana gas fields. The newly developed technique of electrical resistance welding made the line feasible by fusing the sections of pipe together without any seam, allowing it to carry the gas under high pressure without leakage.[24] The “Big Inch” pipeline, originally built to bring oil from Texas to the northeastern U.S. during World War II, was later converted to natural gas. That was all very well for chilly Midwesterners, but it was nothing to concern steam. Some areas of the Southwest burned gas instead of oil or coal in their electric plants—why not, when it was so abundant—but that was just another way to make steam.[25] But, as we know, gas could be burned inside an engine, too, and a new, very efficient and powerful kind of internal combustion engine was just about to come onto the scene. Given that waterwheels had to make room for water turbines, and steam pistons for steam turbines, the appearance of internal combustion turbines (typically known as gas turbines, though they may burn liquid fuels) has the air of inevitability. The idea had a long, difficult gestation, however, in large part because of the continuous high temperatures to which the moving parts were exposed, which could be withstood only by new heat-tolerant alloys. The basic idea is to burn fuel continuously within the engine and use the hot gas that results, rather than steam, to spin the turbine blades and provide power. Some of the turbine’s work is also drawn off to drive a compressor that continuously pushes fresh air into the combustion chamber. A few stationary power turbines were built in the early part of the twentieth century, primarily in Switzerland, but gas turbine technology got a massive boost from the intensified development of aeroengines before and during World War II. In the 1930s, both Frank Whittle in Britain and Hans-Joachim von Ohain in Germany designed experimental jet engines (which exhausted the hot gas directly for thrust instead of driving a shaft) for combat aircraft, and a German jet engine saw combat on the Messerschmitt Me 262 fighter. American engineers, meanwhile, gained experience in high-performance gas compressors by designing superchargers to enable piston-driven aircraft such as the P-47 and P-51 fighters to operate efficiently at high altitude.[26] The experimental gas turbine built for Neuchâtel, Switzerland, on exhibit in 1939. It took still more decades for stationary turbine technology to develop to the point where electricity generated by gas turbines became commonplace. Demand for gas power plants has grown over the decades since, fueled by environmental concerns (gas burns much more cleanly than coal and produces less carbon dioxide) and the need for excess capacity to respond to demand spikes (gas plants can be turned on quickly since they don’t need to build up a head of steam). The share of global electricity production claimed by gas rose from 13% to 24% between 1971 and 2019.[27] The Twilight of Steam Steam is still with us: roughly half of the electricity generated worldwide is still mediated by steam, whether the heat comes from coal, nuclear fission, or oil.[28] Most modern natural gas plants also use steam in a secondary role: they use the still-hot gas turbine exhaust to raise steam for a second turbine, in order to maximize the overall efficiency of the plant.
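The arithmetic behind that combined-cycle arrangement is simple enough. A back-of-the-envelope sketch (the efficiency figures are illustrative, and it idealizes by assuming the steam cycle captures all of the heat the gas turbine rejects):

```python
# Sketch: why a combined-cycle plant beats either engine alone. Idealized:
# assumes all heat rejected by the gas turbine reaches the steam cycle.
# Efficiency figures are illustrative, not from the text.

def combined_cycle_efficiency(gas_eff, steam_eff):
    """Fraction of fuel energy converted to work when a steam cycle runs on the gas turbine's exhaust heat."""
    return gas_eff + (1.0 - gas_eff) * steam_eff

gas_eff = 0.38    # a modern gas turbine on its own, roughly
steam_eff = 0.35  # a steam bottoming cycle, roughly
print(f"gas turbine alone: {gas_eff:.0%}")
print(f"combined cycle:    {combined_cycle_efficiency(gas_eff, steam_eff):.0%}")
```

Under those assumptions the pair converts about 60% of the fuel’s energy into work, which is roughly where the best real combined-cycle plants sit today.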
A few warships still prowl the world’s oceans under a head of steam generated by nuclear fission, which allows them to stay on station for months without refueling. But the meaning and influence of steam in our economy, society, and culture have greatly diminished. No twenty-first century Edward Ellis will write of a “Steam Man of the Prairies” (1868) marching across the American West, nor a twenty-first century Verne of a Steam House (1880) puffing around India, nor a twenty-first century Wells of steam-powered “Land Ironclads” (1903) crawling over the battlefields of Europe. Visions of a steam-powered tomorrow now exist only at a second level of remove, not as something imagined in our future, but as something imagined in a future of the past, within the kitschy sub-genre of steampunk. Steam did not decline all at once; a gradual process saw it shunted from foreground to background. Around the turn of the twentieth century, many writers wrote of the time in which they lived as an age of steam, though some believed it was in the process of being supplanted by an age of electricity.[29] As late as 1938, Henry Dickinson’s A Short History of the Steam Engine gives off no overtones of valediction or mourning. One leaves the book, after a discussion of contemporary improvements in turbines and boilers, with the sense that steam power is still on an upward trajectory. Not long after that, an American G.I. headed to Operation Torch in North Africa in the fall of 1942 would have experienced a hybrid world. He might have ridden a train pulled by a steam locomotive to his port of departure and boarded a Liberty ship powered by a triple-expansion steam engine. But he would have crossed an ocean prowled by diesel-powered U-boats, and would have mounted a two-and-a-half-ton truck to move forward with his unit, or taken up a crew station on an M4 Sherman tank, both vehicles equipped with gasoline engines. This is to be expected. The diffusion of new technologies is always gradual and, to some degree, incomplete. At this same time, the German army of the blitzkrieg could boast a spearhead of gleaming motorized divisions, but relied on old-fashioned horse-drawn wagons and carts for the rest.[30] By the 1960s, however, the age of steam had taken up decisive residence in the past: a historical era on which one looked back fondly, rather than a description of a present reality.[31] Google’s Ngram viewer traces its decline from vital concept to historical reminiscence quite vividly. What can we say, in closing, about the age of steam? What was it, exactly, and what did it mean? Each of the aspects of the age of steam that we have visited—the pumping engine, the factory prime mover, the steamboat and steamship, the locomotive, electric power—shares a common theme, and that theme is modernity, and specifically the unhitching of mankind from the wagon of Earth’s natural, cyclical processes: day and night, summer and winter, wind and calm, sun and rain. We cannot, of course, lay all of modernity at the feet of steam. The industrial revolution began under water power, and would probably have continued to develop in a quite similar direction for some time without steam. All of the political, social, and cultural features that we associate with the modern world (nationalism, individualism, secularization, and so on) have no obvious connection to steam power, and can be traced to antecedents dating centuries before steam power took off.
But steam power changed humanity’s relationship to the world in a profound sense. Almost all of the useful energy humans encounter comes, one way or another, from the light of the sun: the plants it feeds, the rain that it lifts into the sky, the wind that it drives across the plains. We must wait patiently for this bounty to grow, to fall, to blow. The age of steam refused to wait, borrowing instead from the banked solar energy of the past, stored beneath the earth in the form of coal, to get power on demand, at will. The petroleum age of internal combustion that succeeded steam was simply an extension and intensification of this process; we should perhaps speak of a single age, an age of fossil fuels. Jevons’ paradox, formulated by the economist William Jevons in 1865, captures the impatient, spendthrift spirit of the age. Concerned about the future exhaustion of Britain’s reserves of coal, he noted that the obvious solution of making more efficient machinery would have the opposite of the intended effect: the less coal steam engines consumed for a given amount of work, the more useful coal became, and so the more of it the British nation pulled out of the earth to burn: “It is wholly a confusion of ideas to suppose that the economical use of fuel is equivalent to a diminished consumption. The very contrary is the truth.”[32] Now, awakening to the full implications of borrowing from the past with no intent to repay, and thus releasing all of Earth’s long-buried carbon into the open, humanity has embarked on a new quest: to supply our needs by more completely and effectively capturing the solar energy of today, the renewable energy that will come again tomorrow and the next day as long as the sun continues to shine. We want, we hope, to somehow give up our dependence on borrowing from the past, without giving up all the hard-won conveniences of the steam age. Wish us luck.

One System, Universal Service?

The Internet was born in a distinctly American telecommunications environment — the United States treated telegraph and telephone providers very differently from the rest of the world — and there is good reason to believe that this environment played a formative role in its development, shaping the character of the Internet to come. Let us, then, take a good look at how this came to be. To do so, we must go back to the birth of the American telegraph. The American Anomaly In 1843, Samuel Morse and his allies convinced Congress to spend $30,000 to build a telegraph line from Washington D.C. to Baltimore. They believed it was only the first link in a government-sponsored chain of telegraph lines that would span the continent. In a letter to the House of Representatives, Morse proposed that the government buy the full rights to his telegraph patents, and then charter private companies to build out the individual pieces of the network, while retaining certain government lines for official communications. In this manner, Morse wrote, “it would not be long ere the whole surface of this country would be channelled for those nerves which are to diffuse, with the speed of thought, a knowledge of all that is occurring throughout the land, making, in fact, one neighborhood of the whole country.”1 It seemed to him that such a vital communications system naturally fell within the public interest, and thus the sphere of government. Facilitating communications among the several states via a postal service was one of the few powers of the federal government specifically enumerated in the Constitution. His motives were not altogether informed by public feeling, though. Government control provided Morse and his backers with a clear endgame for their venture – a single windfall payout from the government. In 1845, Cave Johnson, Postmaster General under James Polk, indicated his backing for a public telegraph system such as Morse had proposed: “The use of an instrument so powerful for good or evil cannot, with safety to the people, be left in the hands of private individuals,” he wrote.2 But that was as far as it went. The rest of Polk’s Democratic administration wanted no part in a public telegraph, nor did the Democratic Congress. The party frowned upon Whig schemes for government spending on “internal improvements,” which it considered an invitation to favoritism, venality, and corruption. Since the government would not act, Morse collaborator Amos Kendall began developing a telegraph network with private backers instead. But the Morse patent did not suffice to supply its owners with a monopoly on telegraphy. Within a decade, dozens of competitors had sprung up, either licensing an alternative telegraph technology (primarily Royal House’s printing telegraph) or simply wildcatting, on highly questionable legal footing. Lawsuits flew back and forth, paper fortunes rose and fell, and ailing companies collapsed or were purchased with watered stock by eager rivals. Out of this scrum, by the late 1860s, rose one dominant player: Western Union. Fearful whispers of “monopoly” began to spread. The telegraph had already become essential to several aspects of American life: finance, railroads, and the press. Never before had a single private organization loomed so large in the life of the nation. The proposal for government control of the telegraph gained new life.
Over the decade following the Civil War, the postal committees within Congress mooted a variety of plans for somehow bringing the telegraph within the orbit of the Post Office. Three basic variations arose: 1) the Post Office could sponsor one or more new competitors to Western Union, granting them special access to post offices and postal rights-of-way in return for honoring certain limits on their rates; 2) the Post Office could run its own telegraph service in competition with Western Union and other private enterprises; or 3) the government could simply nationalize the entire telegraph system, placing it entirely under the control of the Post Office. These plans for some kind of postal telegraph found a handful of staunch supporters within Congress, such as Alexander Ramsey, chair of the Senate Post Office Committee. But much of the energy behind the campaign came from outside lobbyists, most notably Gardiner Hubbard, who had experience with public utilities as the organizer of the city water and gas lighting systems in Cambridge, Massachusetts.3 Hubbard and his allies argued that a public system would provide the same kind of generally useful distribution of intelligence then provided by paper mail, driving down rates. Surely, they argued, it would serve the public more ably than Western Union’s system, which primarily targeted business elites. Western Union, of course, countered that the prices for a telegram were determined by its costs, and that a public system with artificially low rates would suffer financial ruin and serve no one. In any case, the postal telegraph never had sufficient backing to face trial in battle on the floor of either chamber of Congress. Instead, every proposed bill suffered the inglorious indignity of quiet suffocation in committee. The specter of monopoly never had sufficient power to overcome the fear of government overreach and abuse. The Democrats secured control of Congress once more in 1874, the spirit of national reconstruction in the immediate aftermath of the Civil War dimmed, and so did the already feeble prospects for a postal telegraph. The idea of putting the telegraph (and later telephone) under state control was revived from time to time over the succeeding decades, but other than a brief (and nominal) government takeover of the telephone as a wartime exigency in 1918, nothing came of it. This hands-off approach to the telegraph and telephone was a global anomaly. In France the telegraph was nationalized before it was electrified. In 1837, when a private company tried to set up an optical telegraph (using signal towers) alongside the existing state-controlled system, the French parliament promulgated a law that forbade the development of telegraphs not authorized by the government. Britain, on the other hand, allowed private telegraphy to develop unimpeded for several decades. But public discontent at the resulting duopoly led to the imposition of state control in 1868. Throughout Europe, governments put telegraphy and telephony under the authority of the national post office, just as Hubbard and his allies had proposed. Outside Europe and North America, much of the world was still under the control of colonial powers, and thus had no say in the development and regulation of telegraphy. Where independent governments existed, however, they generally built state-run telegraph systems on the European model (supplemented by international undersea telegraph cable companies backed by Anglo-American capital). These systems generally lacked the capital to expand at anything like the rate of their American and European counterparts.
Brazil’s state telegraph company, for example, under the auspices of the Ministry of Agriculture, Commerce and Public Works, had only 1,300 miles of line in 1869, whereas the U.S., with a similar area and only four times as many people, had 80,000 miles by 1866.4 The New Deal Why did the United States follow such an idiosyncratic path? One might give some weight to the American “spoils system” of party patronage, which lasted until the last years of the nineteenth century. The entire government bureaucracy, down to local postmasters, consisted of political appointments which could be used to reward loyal allies. Both parties were loath to create large new sources of patronage power for their opponents, and such would certainly be the case if the telegraph came under the control of the federal government. But the simplest explanation is the traditional American suspicion of a powerful central government – the same basic reason why the structure of American healthcare, education, and other public services appears equally anomalous compared to that of the country’s peers. Given the increasing importance of electrical communication to national life and national security, however, the United States proved unable to preserve a completely hands-off stance towards its development. Instead, over the first few decades of the twentieth century, it developed a hybrid system in which private telecommunications was checked by two forces: on one side, a bureaucratic body to continually monitor the rates of the communications companies to ensure that they did not use a monopoly position to extract excessive profits; on the other, the blunt threat that misbehavior would mean dismemberment by anti-trust law. These two forces could act at cross-purposes, as we will see: the theory of rate regulation accepted monopoly power as a given, even “natural” in some circumstances, where duplication of service would be wasteful. The regulators generally tried to mitigate the negative consequences of monopoly by controlling prices. Antitrust prosecution, however, sought to destroy monopoly at its root, and force a competitive market into existence. The concept of rate regulation originated with the railroads, and was embodied, at the federal level, in the Interstate Commerce Commission (ICC), created by act of Congress in 1887. The primary impetus for the bill came from small businesses and independent farmers. They generally had no choice in which railroad they used to get their products to market, and claimed that the railroad companies exploited this fact to squeeze them for every penny they were worth, while simultaneously granting sweetheart deals to big corporations. The five-member ICC was given authority to oversee the services and rates of the railroads and curb any such abuses of monopoly power, in particular forbidding the railroads from giving special rates to favored companies.5 The 1910 Mann-Elkins Act extended the authority of the ICC over telegraph and telephone. However, the ICC, focused on transportation issues, never took much interest in this new responsibility, and more or less ignored it. Simultaneously, however, the federal government developed an entirely different tool for dealing with monopoly power: trust-busting. The Sherman Act of 1890 empowered the Attorney General to pursue in court any business “combination” found to be in “restraint of trade” – that is, impeding competition through monopoly power.
The law was invoked to dissolve several major corporations in the succeeding two decades, including the Supreme Court’s 1911 decision to dissect Standard Oil into 34 constituent pieces.6

The Standard Oil “octopus”, as depicted in 1904 before its dissection

By this time, the telephone, and its dominant provider, AT&T, had eclipsed the telegraph and Western Union in import and power. So much so that in 1909 AT&T was able to acquire a controlling stake in Western Union. Theodore Vail became joint president of both companies, and began the process of weaving them into a coherent whole. Vail firmly believed that the public interest would best be served by a benevolent telecommunications monopoly, and promoted a new company slogan in that spirit: “One Policy, One System, Universal Service.” Bell was ripe for the attention of the trust-busters.

Theodore Vail, circa 1918

The accession to power of the Woodrow Wilson administration in 1913 provided an opportune moment for its cabinet of progressives to wield the truncheon of antitrust action threateningly. Postmaster General Sidney Burleson favored full postalization of the telephone, along the European model, but, as usual, this idea got little traction. Instead, Attorney General George Wickersham let it be known that he considered AT&T’s continued acquisition of independent telephone companies to be in violation of the Sherman Act. Rather than go to court, Vail and his deputy, Nathan Kingsbury, brokered an agreement known to history as the “Kingsbury Commitment”, whereby AT&T agreed to three provisions:

1. To stop acquiring independent companies,
2. To divest its interests in Western Union,
3. To allow independent telephone companies to connect into the AT&T long distance network.

But after this moment of danger, the threat of anti-trust went quiescent for decades. The gentle star of rate regulation ascended, with its assumption of a natural monopoly in communications. By the early 1920s, the first condition was relaxed, and AT&T resumed the process of ingesting small independent telephone companies into its operating network. This attitude was enshrined in the 1934 act that formed the Federal Communications Commission (FCC), which replaced the ICC as the regulator of wireline communication rates. By this time the Bell System controlled 90% or more of America’s telephone business, by any measure: 84 million of the 88 million miles of wire, 2.1 billion of the 2.3 billion monthly telephone calls, 990 million of the billion dollars in annual revenue.7 Yet the FCC’s primary stated purpose was not to revive competition, but “to make available so far as possible, to all the people of the United States, a rapid, efficient, nation-wide, and world-wide wire and radio communications service with adequate facilities at reasonable charges.”8 If a single entity could best provide such service, so be it.

Over the middle decades of the twentieth century, state and federal telecommunications regulators developed a multilayered cross-subsidy scheme to facilitate the development of a universal communications service. Regulatory commissions set rates based on the presumed value that each customer derived from the network, not from the cost of providing service to that customer. Thus business users (who relied on the telephone to conduct their affairs) paid higher prices than residential customers (for whom it was a social convenience).
Customers in large urban markets with easy access to many other users paid higher prices than those in smaller towns, despite the greater efficiency of large exchanges. Long-distance users paid an outsize share of the cost of telephone capital, even as technology relentlessly drove down the cost of interstate transmission, and the wages of local exchange operators increased. This complex arrangement of belts and cogs to transmit costs from one place to another worked quite smoothly… as long as there existed one monolithic provider within which the machinery could operate.

The New Technology

We are trained to see monopoly as a deadening force, which produces indolence and lassitude as a matter of course. We expect a monopoly to jealously guard its position and the status quo, not to serve as an engine of technological, economic, and cultural transformation. Yet it is hard to reconcile this view with AT&T in its heyday, which generated innovation after innovation, anticipating and facilitating the arrival of every new advance in communication. In 1922, for example, AT&T set up a commercial radio broadcasting station on its building in downtown Manhattan, just a year and a half after the first such large station, Westinghouse’s KDKA, went on the air. The following year it used its long-distance network to re-transmit an address by President Warren Harding to local radio stations across the country. A few years later AT&T also gained a toehold in the film industry, after Bell Labs engineers developed a machine for coordinating motion pictures and recorded sound. Warner Brothers studio used this “Vitaphone” to produce the first Hollywood picture with a synchronized music track (Don Juan), followed by the first “talkie” (The Jazz Singer).

The Vitaphone

Walter Gifford, who became President of AT&T in 1925, decided to withdraw AT&T from ancillary ventures such as radio broadcasting and film, in part to avoid anti-trust scrutiny. Though the Justice Department had not threatened any action since the Kingsbury Commitment, it would not do to attract undue attention with actions that might be interpreted as an attempt to abuse the telephone monopoly for unfair advantage in other markets. So instead of transmitting its own broadcasts, AT&T became the primary carrier of signals for the Radio Corporation of America and other radio networks, relaying programs from their studios in New York and other major cities to affiliate stations across the country. Meanwhile, radiotelephony service spanned the Atlantic in 1927, inaugurated with a banal query by Gifford to his counterpart at the British Post Office: “How’s the weather over in London?” Not exactly “What hath God wrought”, but it nonetheless marked an important milestone, one which made intercontinental conversations possible decades before the undersea telephone cable, though at great expense and with low fidelity.

But the most important developments, for the purposes of our story, were in high-capacity long-distance transmission. AT&T always wanted to attract more traffic into its long-distance network, which was its primary competitive advantage vis-a-vis the few remaining independents, and highly profitable to boot. And the easiest way to attract more business was to develop new technology to reduce transmission costs – generally by cramming more conversations into a single wire or cable. But, as we have already seen, demand for long-distance communication was expanding beyond traditional, person-to-person telegraph and telephone messages.
The radio networks needed their own channels, and television, with far greater capacity requirements, loomed just over the horizon. The most promising way to meet these new demands lay in the coaxial cable, consisting of concentric (i.e. co-axial, sharing an axis) metal cylinders. The properties of such a conductor, known then as a “concentric main,” were studied as far back as the nineteenth century by the giants of classical physics: Maxwell, Heaviside, Rayleigh, Kelvin, and Thomson. As a transmission line, it had great theoretical advantages, since it could carry wide signal bands and was entirely shielded by its own structure from cross-coupling or interference with outside signals. When television began to develop in the late 1920s, no existing technology could effectively handle the megahertz or more of bandwidth needed to carry a high-quality broadcast. So Bell Labs engineers set out to turn the theoretical advantages of the cable into a working long-distance, wideband transmission line, including all the necessary ancillary equipment for generating, amplifying, receiving, and otherwise processing signals. In 1936 AT&T established a field trial, with FCC authorization, over 100 miles of cable from Manhattan to Philadelphia. After first testing the system with twenty-seven voice circuits, engineers successfully transmitted moving pictures by the end of 1937.

At the same time, however, another challenger was emerging as a potential high-bandwidth, long-distance communications medium: the radio relay. Radiotelephony, of the sort used in the 1927 transatlantic link, used a pair of radio broadcast signals to create a two-way voice channel in the shortwave band. Tying up two entire radio receivers and transmitters and an entire frequency band for a single phone conversation was not economical for overland use. If a way could be found to multiplex many conversations on the same radio beam, however, the financial tables might be turned. Though each individual station would be somewhat expensive, one hundred or so might suffice to relay a signal across the entire United States. Two frequency bands contended for use in such a system: ultra-high-frequency (UHF) radio (with wavelengths measured in inches or tens of inches) and microwave (measured in centimeters). The higher frequencies of microwave tempted with their greater potential bandwidth, but also posed a greater technical challenge. In the 1930s, responsible opinion at AT&T leaned towards the safer UHF option.

But microwave technology leapfrogged forward during the Second World War, due to the extensive use of those frequencies in radar equipment. Bell Labs proved the feasibility of microwave relay radio with the AN/TRC-6, a truck-borne system capable of carrying eight telephone circuits to and from another antenna within line-of-sight.9 This allowed military headquarters units to quickly re-establish voice communications as they relocated, without the need to wait for the laying of wire (and without the risk of a line being cut by accident or military action).

Deployed AN/TRC-6 microwave relay, from “A Multichannel Microwave Radio Relay System,” IEEE Transactions of Electrical Engineering, December 1946

After the war, Harold T. Friis, a Danish-born lifer at Bell Labs, led the development of microwave relay communications. A 220-mile trial route from New York to Boston opened at the end of 1945.
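A rough back-of-the-envelope calculation (my own illustrative figures, not drawn from Bell System documents) shows why the relay towers described below could be spaced roughly thirty miles apart. Under standard atmospheric refraction, the distance to the radio horizon of an antenna at height $h$ is approximately

$$d \approx \sqrt{2\,k R_e h}, \qquad k \approx \tfrac{4}{3},$$

where $R_e$ is the radius of the earth; with $h$ in feet and $d$ in miles this works out to roughly $d \approx 1.4\sqrt{h}$. Two antennas, each about 120 feet above the surrounding terrain, can therefore see one another across roughly $2 \times 1.4\sqrt{120} \approx 31$ miles – which is why modest towers sited on high ground sufficed for hops of that length.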
Beams leapt 30 miles at a time between towers on high ground – using the same basic relay principle as the optical telegraph, or, for that matter, a chain of beacon fires. Upriver to the Hudson Highlands, across a series of hills in Connecticut, over to Asnebumskit Mountain in western Massachusetts, and then down into Boston Harbor. AT&T was not the only company interested in microwave relay, nor the only company to have gained wartime experience in the technology for manipulating microwave signals. Philco, General Electric, Raytheon, and the television broadcasters all built or planned their own experimental systems in the immediate post-war years. In fact, Philco beat AT&T to the punch, lighting up its Washington to Philadelphia line in the spring of 1945.

AT&T microwave relay station at Creston, Wyoming, 1951, part of the first transcontinental line

For over thirty years, AT&T had avoided any real challenge to its business model by either the anti-trust or the regulatory forces within the federal government. In large part it was protected by the assumption that its services formed a natural monopoly – the assumption that to have multiple, competing, disjoint systems laying wires across the country and into every neighborhood would be terribly inefficient. The microwave was the first serious chink in that armor, allowing multiple parties to provide long-distance service without prodigious waste. Microwave transmission drastically lowered the barrier-to-entry for would-be competitors. Since the technology required only a series of stations every thirty miles or so, there was no need to acquire thousands of miles of right-of-way to build a useful system, nor to build and maintain thousands of miles of wire or cable. Moreover, the capacity of microwave dwarfed that of traditional wire pairs, with each relay able to carry thousands of telephone conversations, or several TV broadcasts. The competitive advantage of AT&T’s existing wired long-distance network thus dwindled to relative insignificance.

However, the FCC shielded AT&T from the implications of this for many years, with two decisions it handed down in the 1940s and 50s. First, it refused to issue anything but temporary, experimental licenses to non-common-carrier providers. (A common carrier being anyone offering service to the public at regulated rates, as opposed to, for example, a private internal network for a single enterprise.) Thus to even enter the field risked revocation of one’s license at any time. The commissioners were very concerned about unleashing the same sort of problems that had faced broadcasting twenty years earlier, and had led to the creation of the FCC in the first place: a cacophony of interference as a wide variety of transmitters polluted a limited frequency commons. The second decision concerned interconnection. Recall that the Kingsbury Commitment required AT&T to allow competing local telephone companies to connect into its long-distance network. Did the same requirement apply to competing private microwave relay systems? The FCC ruled that it only applied in areas where no adequate common-carrier system existed. Thus any competitor building a regional or local network also faced the prospect of sudden disconnection from the rest of the country as soon as AT&T decided to enter their geographical area. The only alternative to preserve connectivity was to build an entire competing national network of one’s own, a daunting prospect to undertake under an experimental license.
By the end of the 1950s, therefore, there was still only one major player in long-distance telecommunications – AT&T. Its microwave network was carrying 6,000 telephone circuits on each route, and reached into every state in the continental United States.10

The AT&T microwave relay network in 1960

The other shoe was not long in dropping. But the first landmark challenge to AT&T’s complete, end-to-end control of the telecommunications network came from an altogether different angle.

Further Reading

Gerald W. Brock, The Telecommunications Industry: The Dynamics of Market Structure (1981)

John Brooks, Telephone: The First Hundred Years (1976)

M. D. Fagen, ed., History of Engineering and Science in the Bell System: Transmission Technology (1985)

Joshua D. Wolff, Western Union and the Creation of the American Corporate Order (2013)

The Relay

In 1837, American scientist and teacher Joseph Henry took his first tour of Europe. During his visit to London, he made a point of visiting a man he greatly admired, the mathematician Charles Babbage. Accompanying Henry were his friend Alexander Bache, and his new acquaintance and fellow experimenter in telegraphy, Charles Wheatstone. Babbage told his visitors of his upcoming appointment to demonstrate a calculating machine to a member of Parliament, but was even more excited to show them his plans for another machine, “which will far transcend the powers of the first…” Henry recorded the outlines of Babbage’s plan in his diary:1

[t]his machine is divided into two parts one of which Mr B calls the store house and the other the mill. The store house is filled with wheels on which numbers are drawn. These are drawn out occasionally by levers and brought into the mill where the processes required are performed. This machine will when finished tabulate any formula of an algebraic kind.

The historian cannot help but feel a chill at this kind of coincidental intersection of human lives. Here two threads in the history of computing crossed, one nearing its end, the other only beginning. For though Babbage’s machine is often treated as a starting point for the history of the modern general-purpose computer, the connection between the two is tenuous at best. His machine (such as it was, for it was never built) was rather the culmination of the dream of mechanical computation. This dream was stimulated by the increasingly intricate clockwork devices built by craftsmen from the late medieval period onward, and first fully articulated by Leibniz. But no general-purpose computer was ever built on a purely mechanical basis – it was simply too complex a task.2

The electromagnetic relay3, on the other hand, conceived by Henry and others, could be composed with relative ease into computation circuits of previously unfathomable complexity – though this would take decades to come to fruition, and was neither foreseen nor dreamed of by Henry and his contemporaries. It was the progenitor of the myriads of transistors that make possible the digital meta-world that now overlays so much of our everyday experience. Relays filled the guts of early programmable computing machines, which ruled for a brief interval before being overtaken by their purely electronic cousins.

The relay was invented several times, independently, in the 1830s. It was protean in its conception (its five inventors had at least three different purposes in mind) and, as we shall see, in its use. But it is convenient to think of it as a dual-purpose device. It could be used as a switch, to control another electrical device (including, significantly, another relay); or as an amplifier, to turn a weak signal into a strong one.

Switch

Joseph Henry combined in one person a deep knowledge of natural philosophy, mechanical aptitude, and an interest in the problem of the electric telegraph. Perhaps only Wheatstone shared this combination of qualities in the 1830s. By 1831, Henry had built a 1.5-mile circuit that could ring an alarm bell, using a more powerful electromagnet than anyone else yet possessed. Had he continued to pursue telegraphy with the kind of tenacity shown by Morse, his might be the name celebrated in American textbooks.
But Henry, a teacher at the Albany Academy and then the College of New Jersey (now Princeton University), built and improved electrical equipment with research, instruction, and scientific demonstration in mind. He showed no interest in turning his pedagogic instrument into a communications system. Around 1835, he devised a particularly clever demonstration, using two circuits. Recall that Henry had deduced that electricity had two dimensions – intensity and quantity (what we call voltage/tension, and current, respectively). He created circuits with intensity batteries and magnets to project electromagnetic force over long distances, and circuits with quantity batteries and magnets to generate large electromagnetic forces. His new apparatus combined the two. The powerful quantity electromagnet held suspended hundreds of pounds of weight. The intensity magnet, at the end of a long circuit, was used to raise a small metal wire: a switch. Connecting the intensity circuit caused the magnet to raise the wire, opening the switch, breaking the quantity circuit. The quantity electromagnet then suddenly dropped its load, with a resounding crash.4

This relay – for that is what the intensity magnet and its wire had become in this configuration – was a means to make a point about the transformation of electrical into mechanical energy, and how a small force can trigger a larger one. Gently dipping a wire into acid to close a circuit caused the tiny movement of the little switch, which in turn entailed a catastrophe of falling metal, enough to crush a person foolish enough to stand beneath it. For Henry the relay was a tool for the demonstration of scientific principles. It was, in effect, an electric lever.

Henry’s quantity magnet, suspending a load of heavy weights

Henry was probably the first to connect two circuits in this way – to use electromagnetism from one circuit to control another. The second to do so, so far as we know, were William Cooke and Charles Wheatstone, though with an entirely different purpose in mind. In March 1836, soon after seeing a telegraphic demonstration in Heidelberg using a galvanic needle for signaling, Cooke was inspired by a music box. Cooke believed that using needles for signaling in a real telegraph would require several needles, and therefore several circuits, to identify a particular letter. Cooke wanted instead to use an electromagnet to activate a mechanism, which could then be as complex as needed to indicate the desired letter. The machine he had in mind would resemble a music box, with a barrel surrounded by a number of protruding pins. On one side of the barrel was a dial inscribed with the letters of the alphabet. One such device would sit at each end of the telegraph. A wound spring could make each barrel turn, but most of the time a detent would hold them in place. When the telegraph key was depressed, it closed the circuit, activating electromagnets to release both detents, allowing both machines to turn. Once the desired letter was showing on each dial the key was released and the detents dropped, stopping the motion of the barrels. Without knowing it, Cooke had recreated the chronometric model behind Ronalds’ telegraph of two decades before, and the early Chappe telegraph experiments (though those used sound rather than electricity to synchronize the dials). Cooke realized that a similar mechanism could be used to solve another longstanding problem in telegraphy – how to notify the receiver that a message was incoming.
A second circuit with another electromagnet and detent could be used to activate a mechanical alarm bell: close the circuit, pull up the detent, and the alarm rings. In March of 1837, Cooke began collaborating with Wheatstone on telegraphy, and around that time one or both of the partners conceived of the secondary circuit. Rather than having an independent circuit for the alarm (and the miles of wire that entailed), why not simply use the primary telegraphic circuit to also control the alarm circuit?

Cooke and Wheatstone secondary alarm circuit

By this time Cooke and Wheatstone were back to needle-based designs, and it was straightforward to attach a small piece of wire to the back of the needle so that, when its tip was pulled up by electromagnetism, its tail would close a second circuit. This circuit would set off the alarm. After a decent interval (allowing the receiver to rouse himself from his nap, disconnect the alarm, and prepare pencil and paper), the needle could then be used to signal a message in the usual fashion.5

Twice on two continents, in the span of two years, and for two different reasons, someone had realized that an electromagnet could be used as a switch to control another circuit. But there was another way of thinking about the relationship between the two circuits.

Amplifier

By the fall of 1837 Samuel Morse had confidence that his idea for an electric telegraph could be made to work. Using Henry’s intensity battery and magnet, by way of Morse’s New York University colleague, Leonard Gale, he had sent messages as far as a third of a mile. However, to prove to Congress that his telegraph could span the continent, he needed to do a great deal more than that. It was clear that, however powerful the battery, at some point the circuit would become too long to provide a legible signal at the far end. But Morse realized that even with its potency greatly diminished by distance, an electromagnet could open and close another circuit powered by a separate battery, which could then send the signal on. This process could be repeated as many times as necessary to span arbitrary distances. Hence the name relay for these intermediary magnets – like a relay horse, they take the electrical message from their fatigued partner and carry it forward with renewed vigor.6 Whether this idea was influenced by Henry’s work is impossible to say, but what was certainly new with Morse was the purpose to which he put the relay. He saw the relay not as a switch but as an amplifier, which could turn a weak signal into a strong one.

One of Morse’s relay schemes: a long circuit connected to a relay is used to control a short local circuit, which controls the output device

Across the Atlantic and around the same time, Edward Davy, a London pharmacist, had the same notion. Davy probably began tinkering with telegraphy sometime in 1835. By early 1837, he was doing regular experiments with a one-mile circuit in Regent’s Park, in the northwest of London. Soon after Cooke and Wheatstone’s March 1837 meeting, Davy caught wind of the competition, and began to think more seriously about building a practical system. He had noticed that the strength of the deflection of a galvanometer’s needle diminished notably with the length of the wire. As he wrote many years later:7

It then occurred to me that the smallest motion (to a hair’s breadth) of the needle would suffice to bring in contact two metallic surfaces so as to establish a new circuit, dependent on a local battery; and so on ad infinitum.
Davy called his idea for making a weak signal strong again the “Electrical Renewer.” However, he would never bring this nor any of his telegraphic ideas to fruition. He was granted his own telegraph patent in 1838, independent of Cooke and Wheatstone’s. But he sailed to Australia in 1839 to escape a failed marriage, leaving the field in Britain clear for his rivals. Their telegraph company bought up his idle patent several years later.8

The Relay in the World

We tend to pay a great deal of attention to systems in the history of technology, while rather neglecting their components. We chronicle the history of the telegraph, the telephone, the electric light, and bathe their creators (or those whom we deem retrospectively as such) in the warm rays of our approbation. Yet these systems are made possible only by the combination, recombination, or slight modification of existing elements which have quietly grown in the shade.9 The relay is just such an element. From its ancestral forms, it quickly evolved and diversified as telegraph networks began to grow in earnest in the 1840s and 50s. It then found its way into electrical systems of all kinds over the following century.

The earliest change was the use of a stiff metal armature, like that in the telegraph sounder, to close the circuit. A spring pulled the armature away from the circuit when the electromagnet was off. This was a much more reliable and durable mechanism than the bits of wire or pins used in Henry, Cooke, and Wheatstone’s experiments. Default-closed models were also devised, as complements to the original default-open design.

Diagram of a typical late-nineteenth century relay. The spring (T) holds the armature (B) away from the contact (C). When the electromagnet (M) activates, it overcomes the strength of the spring, and closes the circuit between the wire W and the contact (C).

Relays were only occasionally used as amplifiers or “renewers” in the early decades of the telegraph, since a single circuit could span 100 miles or more without them. They were very useful, though, for joining a low-current long-distance line to a high-current local circuit that could be used to power other machinery, like the Morse register.10 Dozens of U.S. patents from the latter half of the nineteenth century describe new forms of relay or new applications for them. The differential relay, which split the coil so that the electromagnetic effect was canceled in one direction but reinforced in the other, enabled a form of duplex telegraphy: two signals passing in opposite directions over a single wire. Thomas Edison used the polarized (or polar) relay to make his quadruplex, which could send four simultaneous signals over a single wire: two in each direction.11 In the polarized relay, the armature itself was a permanent magnet, such that it responded to the direction or polarity of the current, rather than its strength. Permanent magnets also allowed for the creation of latches, relays which would stay open or closed, whichever way they were last set.

The functioning of a polarized relay. When the electromagnet activates, the magnetized armature is attracted to one or the other of its poles, depending on the direction of the current.

In addition to their role in new telegraphic equipment, relays also became essential components of railway signaling systems. When electrical power networks began to appear at the end of the century, they found uses in those circuits, too – especially as fault-protection devices.
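The two roles the relay played – switch and renewer – can be captured in a toy model. The sketch below is purely my own illustration (the threshold and loss figures are invented for the example, not drawn from any historical apparatus): a relay closes a local circuit at full battery strength whenever its coil signal is strong enough to pull in the armature, which is all it takes to let a feeble current command a powerful one, or to rebuild a fading signal at every station along a line.

```python
# A toy model of the electromechanical relay (illustrative only; the numbers
# are invented for the example, not taken from any historical apparatus).

PULL_IN = 0.5        # minimum coil signal needed to pull in the armature (assumed)
SEGMENT_LOSS = 0.4   # fraction of signal strength lost per segment of line (assumed)

def relay(coil_signal: float, local_battery: float) -> float:
    """Close the local circuit at full battery strength if the coil signal
    can pull in the armature; otherwise the local circuit stays open."""
    return local_battery if coil_signal >= PULL_IN else 0.0

# Switch (Henry's "electric lever"): a feeble current on a long intensity
# circuit releases the full force of a powerful quantity circuit.
print(relay(0.6, 100.0))                     # -> 100.0

# Renewer (Morse's and Davy's insight): chain relays along the line, each
# backed by its own local battery, so the signal is rebuilt at every stage.
bare_wire = relayed = 1.0
for station in range(10):
    bare_wire *= (1 - SEGMENT_LOSS)                     # fades with every segment
    relayed = relay(relayed * (1 - SEGMENT_LOSS), 1.0)  # restored to full strength

print(round(bare_wire, 3), relayed)          # -> 0.006 1.0
```

In this little model a signal sent down a bare wire dwindles to nothing after a few segments, while the relayed signal arrives at full strength no matter how many stations it passes through.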
Yet even these networks, vast and complex as they may seem, did not demand of the relay more than it could give. The telegraph and railroad touched every town, but not every building. They had tens of thousands of endpoints, but not millions. Electrical power systems, for their part, did not care where in particular their current ended up – they bathed a neighborhood-wide circuit in electricity, and each home or business could pick up what it needed from the wire. The telephone was another story. Because the telephone had to make a point-to-point connection from any arbitrary home or office to any other, it would need control circuits on a scale never before seen. Moreover, the imprint of the human voice wavering across the telephone wire was a rich signal (far richer than Morse code), but a feeble one. So long-distance telephony would also demand new, better amplifiers; and it would turn out that these new amplifiers could be switches, too. It was the telephone networks, more than any other system, that would drive the evolution of the switch.

Further Reading

Historical work on the electromechanical relay is scanty. I have not yet found any source wholly dedicated to the topic. A brief technical account can be found in James B. Calvert, “The Electromagnetic Telegraph”, which is also an excellent source on the technical history of the telegraph in general. Franklin Leonard Pope’s Modern Practice of the Electric Telegraph (1891) is a cornucopia of information on late-nineteenth century telegraphic equipment in the United States.

Twilight of the Steam Age, Part 1: Internal Combustion

Here in the early decades of the twenty-first century, steam turbines can still be found (though they are almost never seen), but steam piston engines are archaic relics. Nearly every moving machine that we see—cars, trucks, lawnmowers, the aircraft in the sky and the boats in the water—derives its power directly from the combustion of a fuel (such as gasoline) inside of a cylinder: internal combustion, unlike the external combustion that produces steam from fuel burned outside the boiler. The internal-combustion engine requires its own chapter in the story of the age of steam, for two reasons. The more obvious is the role that internal combustion played in the demise of steam: the internal-combustion engine could be accused with some fairness of the crime of slaying the steam engine. The other, less obvious, reason is that internal combustion developed in reaction to, and under the shadow of, steam. Through the nineteenth century, internal combustion remained an upstart, seeking a place for itself in a world where steam had become the default choice for anyone in need of mechanical power.[1]

Early Internal Combustion: A Motive Without Means

The story of the internal combustion engine is an enormously complex one ranging over a century and more; it branches, then converges, then branches again as engineers repurposed and remixed design ideas to solve new problems and fill new niches.[2] One could certainly trace it back at least as far as Huygens’ seventeenth-century gunpowder engine; the modern internal combustion engine could be seen as the revenge of the gunpowder engine, much in the same way that the modern water turbine was the revenge of the horizontal water wheel. To tell the complete story in any detail would require another volume at least as large as this one, so I will provide only a montage of the most significant moments and trends as they relate to our larger story. A thorough history of the internal combustion engine (which I will shorten from now on to “combustion engine” to spare the reader from either an eight-syllable mouthful or the infelicitous initialism “ICE”) would include a laundry list of inventors and inventions, dating back into the eighteenth century or even beyond, almost all of whom have faded into obscurity along with their machines. This is a pattern that should by now be familiar: the steam engine, the steam boat, the locomotive; in every case, a decades-long history can be found of inventors striving after the same vision without quite producing anything that they could convince others to use.

In the case of the combustion engine, the probable motive for most of these early inventors was simplicity and ease of use: the steam engine was a large, complicated machine of many parts that required careful tending. The boiler, in particular, had an unnerving propensity to explode at high pressures. To be able to dispense with the firebox and boiler, and simply burn fuel inside the working cylinder, presented a very attractive prospect.[3] Moreover, boilers were not just large and potentially dangerous, they also had a kind of momentum. You couldn’t simply switch a steam engine on and begin using it; you had to build up a head of steam first, and as long as you maintained that head of steam, you were burning fuel. So, steam engines demanded continuous use: that worked out fine for factory owners who wanted to keep their expensive capital running day and night, but small craftsmen and workshops that needed power on an ad hoc basis were less happy with the age of steam.
By the early nineteenth century, another reason for looking elsewhere than steam for a source of power had appeared: the advent of gas lighting. Gas, quite evidently, burned easily, and by mid-century could be drawn at will from city gas systems in every major urban center of Europe and the United States. What if you could dispense not only with firebox and boiler, but also with the coal storage house, and the transport costs for hauling the stuff to your place of business, and the labor costs for stoking the engine?

Volta’s electric pistol, developed in the 1770s and exhibited all over Europe, may have played a role in the early development of combustion engines similar to the one the vacuum pump played for the steam engine: a dramatic demonstration that inspired new inventive ideas. The pistol used a spark across two electrodes to detonate a charge of hydrogen gas in a stoppered glass vessel, shooting the stopper across the room. An imaginative mind might have realized that if the vessel could be rapidly refilled with gas and the spark repeated, one would have a machine that generates a continuous series of mechanical impulses, much like a steam engine. Isaac de Rivaz, a Swiss inventor who dreamed of replacing horses with self-propelled vehicles, used exactly Volta’s spark mechanism to drive the motor of his gas-powered carriage.[4]

An uncorked Volta pistol [Museo Galileo]

The concept and the advantages were clear enough; making internal combustion work was another matter. It required mastering a variety of new techniques: properly mixing fuel and air before or while adding them to the combustion chamber, igniting the mixture inside the chamber, and, most difficult of all, timing the ignition to the cycle of the engine so that each explosion would occur at exactly the right point in the motion of the piston. Unlike a steam engine, which would still function to some extent even with leaky valves, poor timing, and other deficiencies, an ill-tuned combustion engine was useless. The existing body of engineering experience provided no guide in these matters, and so everyone was groping in the dark—quite literally, in the sense that would-be inventors could not witness the combustion, hidden within the cylinder, that they were striving to control.[5]

Thermodynamics and Internal Combustion

The rise of thermodynamic science at mid-century, and its gradual percolation through the engineering community, intensified interest in internal combustion as it exposed the weaknesses of steam. The expansive elasticity of steam, which had been thought a key to the success of the steam engine in its guise as a pressure engine, became a liability when it was considered as a heat engine. As Carnot had observed, maximizing the power of the engine required making the working fluid as hot as possible (and then cooling it as far as possible), but transforming water into steam at high temperatures created immense pressures, beyond the capacity of even iron and steel to contain. Thermodynamics made it possible to measure steam engines against the efficiency of an ideal heat engine, and it came up severely wanting. To some degree, this led engineers who had incompletely absorbed the lessons of Carnot and Rankine on a wild goose chase. Believing that a large amount of heat was “wasted” to evaporate the water, they diverted themselves into dead ends building engines that operated on other fluids without so much latent heat, such as ether, alcohol, carbon disulphide, and ammonia.
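To put rough numbers on the Carnot bound invoked above (the figures here are my own illustrative round values, not drawn from the book’s sources): the best possible efficiency of any heat engine depends only on the temperatures between which it works,

$$\eta_{\max} = 1 - \frac{T_{\text{cold}}}{T_{\text{hot}}}.$$

A boiler held to a modest 350 °F (about 450 K), rejecting heat to surroundings at about 300 K, can convert at best roughly a third of its heat into work; a working fluid heated to 1,500 K or more raises that ceiling to 80 percent or better. Real engines of either kind fell far short of these ideal limits, but the gap between the ceilings is what made a hotter working fluid so thermodynamically tempting.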
As engine historian Lynwood Bryant sympathized, “to a man in a commonsense world of pressures and volume,” such substitutions appeared sensible because “with a given expenditure of heat he can reach higher pressures with ammonia than with steam.” It required a mind more thoroughly steeped in the new abstractions of energy to grasp that the latent heat was not truly wasted, as energy could not be destroyed; all of that energy still existed in the hot steam, which made it a more energy-dense working fluid than most of its would-be competitors.[6]

What, then, of the air engine, whose advantages Rankine had touted back in the 1850s? Ordinary atmospheric air did not build up to the same immense pressures when heated as steam, so why not simply substitute air for steam as the working fluid of an external combustion engine? Some inventors tried this too, but the approach failed, for two reasons: first, the poor conductivity of air meant that huge conduction surfaces were required to transfer heat from the fuel source to the air, which made air engines larger, heavier, and more expensive to build than an equivalent steam engine. Second, extremely high temperatures still could not be reached because of the mechanical limits of the iron that had to conduct the heat from the fuel to the air. When pushed beyond about 1,300 degrees Fahrenheit, the iron would weaken and become unusable.[7]

John Ericsson’s hot air engine. Ericsson, still thinking of heat as a kind of caloric fluid, believed that his “regenerator” would recapture heat from the exhaust of one engine cycle for re-use in the next.

Air could be used in another way, however, as the perspicacious Carnot had observed back in 1824 (emphasis mine):

Vapors of water can be formed only through the intervention of a boiler, while atmospheric air could be heated directly by combustion carried on within its own mass. Considerable loss could thus be prevented, not only in the quantity of heat, but also in its temperature. This advantage belongs exclusively to atmospheric air. Other gases do not possess it.[8]

That is to say, the fact that air contains oxygen means that it can be mixed directly with fuel and then burned inside the engine, without any intervening loss of heat. This was the thermodynamic promise that drove an intensified search for a workable combustion engine in the second half of the nineteenth century. Combustion can generate temperatures of 2,700 degrees Fahrenheit inside the cylinder, allowing for a greater drop in temperature and thus greater efficiency, and these hot gases are created exactly where they are needed; there’s no need to transport them through conduits and valves that inevitably lose heat to the environment. In this circumstance, the poor conductivity of air becomes an advantage: relatively little of this heat is conducted into the surrounding metal, and that metal, not needed to transmit heat to another part of the engine, can be kept cool enough to prevent mechanical failure.[9]

German Engineering

Infused with new purpose by the science of thermodynamics, and bolstered by the ever-improving precision of machine tools, the combustion engine finally made the leap from experiment to industry in the 1860s. It began as a classic “disruptive innovation,” as described by Clayton Christensen, starting with small, inexpensive engines at the bottom of the market.
It could not yet compete with large-scale industrial steam power in textile or flour mills; instead it found customers in small workshops and industries with modest power needs. The combustion engine’s closest competitors were derivatives of the 1807 Maudslay table engine, a small steam engine of as little as 1.5 horsepower that could sit (as the name suggests) on a table.[10] A combustion engine could start and stop more quickly without continuing to burn fuel, draw fuel straight from the town gas line, and be built even smaller, delivering one half or one third of a horsepower.

Most of the early developments in combustion engine technology took place in continental Europe. A promising effort by a pair of Tuscans, Eugenio Barsanti and Felice Matteucci, was cut short by the death of one of the principals. The honor of the first (modestly) successful commercial combustion engine went instead to Jean Joseph Étienne Lenoir, born in Luxembourg (later part of Belgium) but working in Paris. His 1860 gas engine was the most conservative design possible, borrowing the form of a double-acting steam engine, but with burning gas to push the piston and a water jacket to keep the cylinder cool. It ran poorly under load with a great deal of loud banging, its electric ignition system required constant attention, and it did not achieve the gains in fuel efficiency that Lenoir expected (and, indeed, had promised). Nonetheless, the small size of the engine and the availability of on-tap fuel were enough to attract some customers. Lenoir sold five hundred engines, or so, almost all of three horsepower or less.[11]

A Lenoir gas engine. The gas intake valve is visible at top center and the equipment to generate electric sparks for ignition at bottom center.

Shortly thereafter, Nicolaus Otto, a traveling tea salesman from the Duchy of Nassau in western Germany, learned about, and became obsessed with improving upon, Lenoir’s engine. For several decades to come, the most famous names in internal combustion would all be German ones: Otto, Diesel, Maybach, Benz, Daimler. The reasons that the steam engine first appeared in Britain can be traced with some confidence to a handful of geographic and economic factors. The German affinity for internal combustion is harder to explain. One factor may have been the later take-off of industrial growth in Germany, based less on textiles and more on chemicals, mining, and metallurgy. Small-scale craftsmen, inheritors of the ancient traditions of the guilds, running workshops with a handful of employees, remained a major economic presence in German manufacturing throughout the nineteenth century.[12] Such businesses constituted exactly the market to which combustion engines were most suited. This explanation is not entirely satisfactory, however. Britain, and other countries, still had small workshops and tradesmen aplenty who could benefit from a compact, convenient engine.[13] But the combustion engine also had an ideological dimension in Germany, and this may be where the key lies. Struggles over the protection of the traditional rights of tradesmen remained vigorous in mid-century German states, and many traditionalists perceived capital-intensive business interests as a novel and predatory force.
The conservatives in several states, after crushing the liberal revolutions of 1848-1849, introduced industrial regulations to resurrect the traditional rights of craftsmen.[14] In this context, the combustion engine carried special meaning as a weapon of the weak; a means for the little guy to fight back against big business. This is particularly evident in the writings of Franz Reuleaux, an academic mechanical engineer from Prussia. In 1875, he wrote a treatise on kinematics which includes, under the heading “The Meaning of the Machine for Society,” an extensive essay that expounds on the evils of industrial society in terms redolent of Marx, but proposes to cure those ills with a return to traditional values, not through a proletarian revolution.

Reuleaux in 1877, when he was in his late 40s.

Reuleaux laments the dominance of centralized capital, before which the small craftsman lies prostrate, replaced by the grim and alienating monotony of factory work. This is not, he argues, because the productive machinery itself is so expensive, but because “[o]nly capital is able to build and operate the powerful steam engines around which is grouped the remainder of the establishment.” His solution lies in new prime movers like the combustion engine:

To combat most of the evil, engineers must provide cheap, small engines, or in other words, small engines with low running costs. If we give a power supply to the small master as cheaply as the great powerful steam-engines can be obtained by capital, and we thus support this important class of society, we shall strengthen it where it happily still exists and we shall re-create it where it has disappeared. …Air and gas engines… can be used almost everywhere and are being steadily perfected. These little engines are the true prime movers of the people [Volk]; they can be obtained at reasonable prices and are very inexpensive to operate.[15]

Breaking the bonds tying the workman to capital would allow the former to seize the means of production and restore the traditional moral order of craft work: a hierarchical but harmonious household of family, apprentices, and assistants guided by the hand of the tradesman.[16] Tellingly, the English translation of this same section, provided by a British academic engineer, Alexander Kennedy, is a bowdlerization rather than a true translation. Kennedy provides only four anodyne pages on the machine and society rather than seventeen, stripped of all the ideological fire of the original. He evidently did not feel that this plea for a conservative industrial democracy held any interest for his British readers.[17]

Otto’s Engines

Whatever the true reasons for Germany’s outsize contributions to internal combustion, they began with Otto. Brimming with ideas and entrepreneurial energy, he threw himself into the work of improving Lenoir’s engine. However, as a clerk and salesman with no formal technical education, he made little effective progress. He needed the guidance of someone with a strong engineering background and good business sense. He found it in Eugen Langen, who had studied at a polytechnic institute in Karlsruhe, and worked his way up to a partnership in his father’s sugar refining business while running a side hustle making equipment for gas producers. Restless for a new venture, in 1864 he somehow came across Otto’s work and decided that it had promise (the exact circumstances of how the men met are unknown, though both worked in Cologne).[18]

Otto in 1868, in his mid-30s.
A portrait of Langen from much later: the 1890s, when he was about 60. He was actually born about nine months after Otto.

Even with the addition of Langen’s talents and money, and the further advice of Reuleaux (an old school chum of Langen’s), it took a further three years for Otto and Langen’s partnership to produce a commercially useful engine, which debuted publicly at the 1867 Paris Exhibition. With the help of Reuleaux, who served on the judging board, it won the grand prize by dint of its efficiency: it used half the gas to do the same work as competing engines. This first Otto and Langen machine was an atmospheric engine, a direct descendant of the Newcomen engine, and indeed Huygens’ gunpowder engine. The cylinder was set vertically, and the explosion of the burning gas drove the piston up freely (that is, without any connection to the drive mechanism). The power stroke came as the piston descended, pushed down by the weight of the atmosphere into the cylinder just evacuated by the explosion. The complexities—the attributes that made it a viable prime mover, and not just a demonstration piece like Huygens’ machine—came in the timing of regular ignitions, the valve controls, and the intricate design of the clutch that engaged on the down stroke to turn the drive shaft.[19]

The Otto and Langen atmospheric engine.

In 1872, flush with orders for the Otto and Langen and with new capital from Langen’s brothers and other interested businessmen, the company reorganized as Gasmotoren-Fabrik Deutz (after the Cologne suburb where their new headquarters was located). Langen hired Gottlieb Daimler to get the factory running efficiently, and Daimler brought with him the young Wilhelm Maybach, hired as a design engineer.[20] With this new company, new team, additional advice and encouragement from Reuleaux, and four more years of work, Otto produced in 1876 the machine that immortalized his name: the so-called “Silent Otto.” Though not tremendously more efficient than its atmospheric predecessor, a Silent Otto weighed 1/3 as much and had a cylinder volume 1/15 the size for the same horsepower. This made it possible to scale to larger sizes: because of the large cylinder required to draw power from the atmosphere, the Otto and Langen could not practically grow much over one or two horsepower. Between their own factory and British and American licensees, Deutz sold tens of thousands of engines of this type by 1890.[21]

As usual, practice outran theory: Otto had made a great advance, but without a clear idea as to why. He believed that he had created a stratified charge which, by gradually increasing the mix of fuel in the air over the length of the cylinder, cushioned the blow of the explosion, allowing the engine to run much more smoothly and quietly than the clanging Otto and Langen. However, this was not an accurate model of what actually happened during an explosion in the cylinder. The real key to the success of the Silent Otto lay in its four-stroke cycle, in which the first stroke draws the fuel-air mixture into the cylinder, the second stroke compresses it, the third delivers power as the mixture explodes, and the fourth pushes the exhaust gases out of the cylinder. Compressing the gas inside the cylinder before ignition made for a far more powerful and efficient explosion, and it was much easier to achieve this with a four-stroke cycle than with fewer.[22]

The four-stroke cycle, as illustrated in John B. Rathbun, Gas, Gasoline and Oil Engines (1919), 28.
The four-stroke cycle went completely against the grain of what most of Otto’s contemporaries (including Daimler) were trying to do: they hoped to recapitulate the history of the Watt engine by turning a single-acting atmospheric engine into a double-acting engine that derived power from every stroke. But a combustion engine is not a steam engine: explosive combustion is very powerful and very fast, and at hundreds of cycles per minute one power stroke out of every four is sufficient to drive many kinds of machinery.[23] Otto’s engines provided power to many small workshops and tradesmen, but did not turn the tide away from capital and towards industrial democracy, as Reuleaux had hoped. His assertion that the economics of large-scale factory work rested only on the need to share a large prime mover was simply wrong; an error born of wishful thinking, perhaps. An industrial democracy of mechanized craft households did not replace factory work, but many businesses found a use for a more compact, user-friendly engine: from bakeries and printers to sawmills and soda water makers.[24] Combustion engines also moved up the market as engines of tens of horsepower became possible, horning in on more and more of the steam engine’s traditional territory.

Microcomputers – The First Wave: Responding to Altair

[This post is part of “A Bicycle for the Mind.” The complete series can be found here.]

Don Tarbell: A Life in Personal Computing

In August 1968, Stephen Gray, sole proprietor of the Amateur Computer Society (ACS), published a letter in the society newsletter from an enthusiast in Huntsville, Alabama named Don Tarbell. To help other would-be owners of home-built computers, Tarbell offered a mounting board for integrated circuits for sale for $8 from his own hobby-entrepreneur company, Advanced Digital Design. Tarbell worked for Sperry Rand on projects for NASA’s Marshall Space Flight Center, but had gotten hooked on computers through coursework at the University of Alabama at Huntsville, and found the ACS through a contact at IBM.[1]

Over the ensuing years, integrated circuits became far cheaper and easier to come by, and building a real home computer on one’s own thus far more feasible (though still a daunting challenge, demanding a wide range of hardware and software skills). In June 1972, Tarbell had mastered enough of those skills to report to the ACS Newsletter that he (at last) had a working computer system, with an 8-bit processor built from integrated circuits, four thousand bytes of memory, a text editor and a calculator program, a Teletype for input and output, and an eight-track-tape interface for long-term storage. Not long after this report to ACS, Tarbell decamped from Alabama and moved to the Los Angeles area to work for Hughes Aircraft.[2]

Don Tarbell with his home-built computer system [Kilobaud: The Small Computer Magazine (May 1977), 132].

Three years after that, in 1975, the arrival of the Altair 8800 kit announced that anyone with the skills to assemble electronics could have the power of a minicomputer in their own home, and thousands heeded the call. A group of 150 of these personal computer hobbyists met in the commons of the apartment complex where Tarbell lived. They had come on Father’s Day for the inaugural meeting of the Southern California Computer Society (SCCS). Half of the participants already owned Altairs. Tarbell took on the position of secretary for the new society, and served on the board of directors. Within a few months, SCCS began producing its own magazine with a full editorial staff, a far more sophisticated operation than the old hand-typed ACS Newsletter; Tarbell eventually became one of its associate editors.[3]

But an Altair kit by itself was far from a complete computer system like the one Tarbell had built back in 1972. It had a piddling 256 bytes of memory, and no devices for reading or writing data other than lights and switches. Dozens of hobbyists founded their own companies to sell other computer buffs the additional equipment that would answer the deficiencies of their newly-purchased Altairs. Don Tarbell was one of them. Among the major problems was the inability to permanently store or load programs and data. Once you shut off the computer, everything you had entered into it was lost. A standard Teletype terminal came equipped with a paper tape punch and reader, but even a heavily used Teletype could cost $1000. In February 1976, Tarbell offered a much simpler and cheaper solution, the Tarbell cassette interface, a board that would slot into the Altair case and connect the computer to an ordinary cassette recorder, writing or reading data to or from the magnetic tape.
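Some back-of-the-envelope arithmetic (the tape speed and the Teletype comparison are my own assumed figures; the recording density is the one quoted for Tarbell’s controller in the next paragraph) suggests what an ordinary cassette recorder could do as mass storage:

```python
# Rough capacity and speed of a cassette as program storage.
# Assumptions (mine, for illustration): standard compact-cassette speed of
# 1.875 inches per second, and the 2,200 bits-per-inch density quoted for
# Tarbell's controller.
BITS_PER_INCH = 2200
INCHES_PER_SECOND = 1.875
SIDE_MINUTES = 30                 # one side of a C-60 cassette

bits_per_second = BITS_PER_INCH * INCHES_PER_SECOND     # ~4,100 bit/s
bytes_per_second = bits_per_second / 8                  # ~516 bytes/s
side_kb = bytes_per_second * SIDE_MINUTES * 60 / 1024   # ~900 KB per side

print(f"{bytes_per_second:.0f} bytes/s, about {side_kb:.0f} KB per side")
print(f"8 KB BASIC loads in ~{8192 / bytes_per_second:.0f} s "
      f"vs ~{8192 / 10 / 60:.0f} min from a 10-character-per-second Teletype")
```

Framing and error-checking overhead would eat into those raw numbers in practice, but the order of magnitude is what mattered to a hobbyist weighing a cassette deck against a $1000 Teletype.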
Not only was a cassette machine much cheaper than a Teletype; cassettes were also more durable than paper, could store more data (up to 2200 bits per inch with Tarbell’s controller), and could be rewritten many times. Tarbell’s board sold for $150 assembled, $100 for a kit. He later branched out into floppy disk controllers and an interpreter for the BASIC computer language, and became a minor celebrity of the growing microcomputer scene.[4]

Tarbell’s story offers a microcosm of the transition of personal computers, over the course of the 1970s, from an obscure niche hobby to a national industry. Like Hugo Gernsback in radio half a century before, home-computer tinkerers found new roles for themselves in a growing hobby business as community-builders, publishers, and small-scale manufacturers. Like Tarbell, the first wave of these entrepreneurs responded directly to the Altair, offering supplemental hardware to offset its weaknesses or offering a more reliable or more capable hobby computer.

The First Wave: Responding to Altair

The Micro Instrumentation and Telemetry Systems (MITS) Altair came with a lot of potential, but it lay mostly unrealized in the basic kit MITS shipped out. This was partly intentional: the Altair sold on the basis of its exceptionally low price (less than $500), and it simply couldn’t remain so cheap if it had all the features of a full-fledged minicomputer system. Other deficiencies arose by accident, out of the amateurish nature of MITS. The good timing and negotiating skills of Ed Roberts, the company’s owner, had put him at the spearhead of the hobby computer revolution, but no one at his company had exceptional talent in electronics or product design. The Altair took hours to assemble, and the assembled machines often didn’t work. Follow-up accessories came out slowly as MITS technicians struggled to get them working. Tarbell’s cassette interface succeeded because it performed faster and more reliably than MITS’ equivalent. The most urgent need of the hobbyist, other than easier input and output, was additional memory beyond the scanty 256 bytes included with the base kit: far from enough to run a meaningful program, like a BASIC interpreter. In the spring of 1975, MITS started shipping a 4096-byte (4K) board designed by Roberts, but these boards simply didn’t work.[5]

Unsurprisingly, other hobby-entrepreneurs began to step up quickly to fill the gaps. Several of them came from the most famous of the Altair-inspired hobby communities, the Homebrew Computer Club, which met in Silicon Valley and attracted attendees from around the Bay Area. Processor Technology was founded in Berkeley by Homebrew regular and electronics enthusiast Bob Marsh and his reclusive partner, Gary Ingram. In the spring of 1975, they began offering a 4K memory board for the Altair that actually worked. Later, the company came out with its own tape controller and a display board that would make the Altair into a TV Typewriter, which it called the VDM-1.[6]

MITS’ 4K memory board compared to Processor Technology’s. Even without knowing anything about hardware design, it’s easy to see how sloppy the former is compared to the latter. [s100computers.com]

Only one “authorized” Altair board maker existed, Cromemco, also located in the Bay Area. Cromemco founders Harry Garland and Roger Melen met as Ph.D. students in electrical engineering at Stanford (and named their company after their dormitory: Crothers Memorial).
They contributed articles to Popular Electronics regularly, and found out about the Altair while visiting the magazine’s offices in New York. They originally intended to build an interface board for the Altair that could read data from their “Cyclops” digital camera design. Despite the early partnership, no Cromemco board saw the light of day until 1976. Their slow start notwithstanding, Garland and Melen created two products of significance to MITS’ business and to the future of personal computing: the “Dazzler” graphics board and the “Bytesaver” read-only-memory (ROM) board. Unlike the TV Typewriter or the VDM-1, which could display only text, the Dazzler could paint arbitrary pixels onto the screen from an eight-color palette (though only at a resolution of 64 x 64, or up to 128 x 128 in monochrome mode). Less sexy but equally significant, the Bytesaver stored a program that would be immediately loaded into the Altair’s memory on power-up; prior to that, an Altair could do nothing until basic control instructions were keyed in manually to bootstrap it (instructing it, for example, to load another program from paper tape).[7] A 1976 ad for the Cromemco Dazzler [Byte (April 1976), 7] Roberts bristled at the competition from rival card makers. But more aggravating still were the rival computer makers cranking out Altair knock-offs. In 1974, Robert Suding and Dick Bemis had launched the Digital Group out of Denver to support the Micro-8. After the Altair came out, they decided to make their own, superior computer; Suding happily quit his steady but dull job at IBM to serve as the Woz to Bemis’ Jobs, avant la lettre. Digital Group computers came complete with an eight-kilobyte memory board, a cassette tape controller, and a ROM chip that could boot a program directly from tape. They also had a processor board independent of the backplane into which expansion cards slotted, which meant you could upgrade your processor without replacing any of your other boards. In short, they offered a computer hobbyist’s dream. The catch came in the form of poor quality control and very long waits for delivery, after paying cash up front.[8] Other would-be Altair-killers entered the market from around the country in 1975. Mike Wise, of Bountiful, Utah, created the Sphere, the first hobby computer with an integrated keyboard and display—although production was so limited that, decades later, vintage computer collectors would doubt whether any were actually built. The SWTPC 6800 came out of San Antonio, built by the same Southwest Technical Products Corporation that had sold parts for Don Lancaster’s TV Typewriter. A pair of Purdue graduate students in West Lafayette, Indiana wrote software for the SWTPC under the moniker of Technical Systems Consultants. A few hundred miles to the east, Ohio Scientific of Hudson, Ohio released a Microcomputer Trainer Board that put it, too, on the hobbyist map.[9] The SWTPC 6800. The bluntly rectangular cabinet design with the computer’s name prominent on the faceplate is typical of this era of microcomputers. [Michael Holley] But the real onslaught came in 1976. By that time hobbyists with entrepreneurial ambition had had time to fully absorb the lessons of the Altair, to hone their own skills at computer building, and to adopt new chips like the MOS Technology 6502 or Zilog Z80. The most significant releases of the year were the Apple Computer, MOS Technology KIM-1, IMSAI 8080, Processor Technology Sol-20, and, in the unkindest cut for Roberts, the Z-1 from former ally Cromemco. 
Most of these computer makers solved the upgrade problem in a more blunt fashion than the Digital Group’s sophisticated swappable boards: they simply copied the card interface protocol (known as the “bus”) of the Altair. Already own an Altair? Buy a Z-1 or Sol-20 and you could put all of the expansion cards for your old computer into the new. Cromemco founder Roger Melen encouraged the community to disassociate this interface from MITS by calling it the S100 bus, not the Altair bus—another twist of the knife.[10] Almost all of these businesses (excepting IMSAI, of whom more shortly) continued to exclusively target electronic hobbyists as their customers. The Z-1 looked just like an upmarket Altair, with a front panel now adorned with slightly nicer switches and lights. The Apple Computer and KIM-1 offered no frills at all, just a bare green printed circuit board festooned with chips and other components. Processor Technology’s Sol-20, inflected with Lee Felsenstein’s vision of a “Tom Swift” terminal for the masses, sported a handsome blue case with integrated keyboard and walnut side panels. This represented substantial progress in usability compared to the company’s first memory boards (which came only as a kit the buyer had to assemble), but the Sol-20 was still marketed via Popular Electronics as a piece of hobby equipment.[11] Software Entrepreneurs In early 1975, a computer hobbyist who wanted a minicomputer-like system of their own had only one low-price option: buy an Altair; then build, or wait for, or scrounge, the additional components that would make it into a functional system. Eighteen months later, abundance had replaced scarcity in the computer hobby hardware market, with many makes, models, and accessories to choose from. But what about software? A working computer consisted of metal, semi-conductor, and plastic, but also a certain quantity of “thought-stuff,” program text that would tell the computer what, exactly, to compute. A large proportion of the hobby community had a minicomputer background. They were accustomed to writing some software themselves and getting the rest (compilers, debuggers, math libraries, games, and more) from fellow users, often through organized community exchanges like the DEC user group program library. So, they expected to get microcomputer programs in the same way, through free exchange with fellow hobbyists. Even in the mainframe world, software was rarely sold independently of a hardware system prior to the 1970s.[12] It came as a shock, then, when, immediately on the heels of Altair, the first software entrepreneurs appeared. Paul Allen and Bill Gates—especially Gates—were roughly a decade younger than most of the early hardware entrepreneurs, at just 22 and 19, respectively. Compare to Ed Roberts of MITS at 33; Lee Felsenstein of Processor Technology, 29; Harry Garland of Cromemco, 28; Chuck Peddle of MOS Technology and Robert Suding of the Digital Group, both 37. These two young men from Seattle had caught the computer bug at the keyboard of their private school’s time-sharing terminal; they had finagled some computer time at a Seattle time-sharing company in exchange for finding bugs, but had no serious work experience that would have immersed them in the practices of the minicomputer world. For all their youth, though, Gates and Allen brimmed with ambition, and when they saw the Altair on the cover of Popular Electronics, they saw a business opportunity. 
Of course, everyone knew that a computer would need software to be useful, but it was not obvious that anyone would pay for that software. Gates and Allen, having not yet grown accustomed to getting software for free, had an easier time imagining that people would. They also knew that the first program any self-respecting hobbyist would want to get their hands on was a BASIC interpreter, so that they could run the huge existing library of BASIC software (especially games) and begin writing programs of their own. Gates and Allen in 1981. [MOHAI, King County News Photograph Collection, 2007.45.001.30.02, photo by Chuck Hallas] Like Cromemco, Gates and Allen started out as partners with MITS—within days of seeing the Altair cover, they contacted Ed Roberts promising a BASIC interpreter. They delivered in March, despite having no Altair, nor even an 8080 processor—they developed the program on a simulator written by Allen for the DEC PDP-10 at Harvard, where Gates was enrolled as a sophomore. In another debt to DEC, Gates based the syntax on Digital’s popular BASIC-PLUS. Allen moved to Albuquerque soon after, to head a new software division at MITS. Gates eventually followed to nurture their independent software venture, Micro-Soft, though he did not completely abandon Harvard until 1977.[13] Many hobbyists balked at the culture shock of paying for software, and freely exchanged paper tapes of Altair BASIC in defiance of Micro-Soft and MITS, prompting Gates’ famous “Open Letter to Hobbyists,” in February 1976. There he made the case that software writers deserved compensation for their work just as much as hardware builders did, which drew a flurry of amici curiae from various corners of the hobby (with far more weighing in for the defendants than the plaintiff). But, though this controversy is famous for its retrospective echoes of later debates over free software, Gates and Allen rendered the issue irrelevant almost immediately, by switching to a different business model. They began licensing BASIC to computer manufacturers at a flat fee, instead of a royalty on each copy sold. MITS paid $31,200, for example, for the BASIC for a new Altair model using the Motorola 6800 processor. The licensee could choose to charge for the software or not—Micro-Soft didn’t care—but they typically didn’t. This approach bypassed the cultural conflict altogether; BASIC interpreters and other systems software became a bullet point in a list of advertised features for a given piece of hardware rather than a separate item in the catalog.[14] Having a BASIC would let you run programs on your computer; but the other linchpin for an easy-to-use microcomputer system was a program to manage your other programs and data. As faster and denser magnetic storage supplanted paper tape, computer users needed a way to quickly and easily move files between memory and their cassettes or floppy disks. By far the most popular tool for this purpose was CP/M, for Control Program for Microcomputers. CP/M was the creation of Gary Kildall, who got his hands on his first microcomputer directly from the source: Intel. Kildall grew up in Seattle and studied computer science at the University of Washington, where he had a brief run-in with Gates and Allen—then teenagers working, in exchange for free computer time, at the Computer Center Corporation, a company part-owned by one of his professors. 
Drafted into the army, Kildall used his connections at the University and his father’s position as a merchant marine instructor to get posted instead to naval officer training, and then a position as a math and computer science teacher at the Naval Postgraduate School in Monterey. After completing his obligations to the Navy in 1972, he stayed on as a civilian instructor.[15] Gary Kildall with his wife Dorothy, in 1978. [Computer History Museum] That same year, Kildall learned about the Intel 4004, and, like so many other computer enthusiasts, became enchanted with the idea of a computer of his own. The most obvious route was to get his hands on Intel’s development kit for the 4004, the SIM4-01, intended to be used by customers to write software for the new chip. So Kildall began talking to people at Intel, then consulting for the company, and, in exchange for the software he wrote, managed to acquire microprocessor development kits for the 4004, and then later the 8008 and 8080 processors.[16] The most significant piece of software Kildall provided to Intel was PL/M, Programming Language for Microprocessors, which allowed developers to express code in a higher-level syntax that would then be compiled down to the 4004 (or 8008, or 8080) machine language. But you could not write PL/M on a microcomputer—it lacked the necessary mass storage interface and software tools; clients were expected to write programs on a minicomputer and then burn the final result onto a ROM chip that would power whatever microprocessor application they had in mind (a traffic light controller, for example, or a cash register). What Kildall dreamed of was to “self-host” PL/M: that is, to author PL/M programs on the same computer on which they would run. By 1974 he had assembled everything he needed—an Intellec 8/80 development kit (for the 8080), a used disk drive and Teletype, a disk controller board built by a friend—except for a program that could load and store the PL/M compiler, the code to be compiled, and the output of the compilation. It was for this reason, to complete his own personal quest, that he wrote CP/M.[17] Only after the fact did he think about selling it, just in time to catch the rising wave of hobby computers. Though Kildall later offered direct sales to users, he began with the same flat-fee license model that Micro-Soft had adopted: Kildall sold the software to Omron, a smart terminal maker, and then to IMSAI for their 8080 computer, each at a fee of $25,000. He incorporated his software business as Intergalactic Digital Research (later just Digital Research) in Pacific Grove, just west of Monterey. Gates visited in 1977 to float the idea of a California merger of the two (relative) giants of microcomputer software, but he and Allen decided to relocate to Seattle instead, leaving behind an intriguing what-if.[18] A CP/M command line interaction via a Tarbell disk controller, showing all the files on disk “A”. [Computer History Museum] CP/M soon became the de-facto standard operating system for personal computers. Having an operating system made writing application software far easier, because basic routines like reading data from disk could be delegated to system calls instead of being rewritten from scratch every time. 
CP/M in particular stood out for its quality in an often-slapdash hobby industry, and could easily be adapted to new platforms because of Kildall’s innovation of a Basic Input Output System (BIOS), which acted as a translation layer between the operating system and the hardware. But what bootstrapped its initial popularity was the IMSAI deal, which attached Digital Research to the rising star in what up to that point had been Altair’s market to lose.[19] Getting Serious? There was one company thinking different about the microcomputer market in 1975: IMSAI, headquartered in San Leandro, California, intended to sell business machines. It had the right name for it, an acronym stuffed wall-to-wall with managerial blather: Information Management Sciences Associates, Inc. William (Bill) Millard was an IBM sales rep, then worked for the city of San Francisco setting up computer systems, and founded IMS Associates to sell his services to companies who needed similar IT help. Bill Millard circa 1983. Provenance unknown. Despite the anodyne name he gave to his company, Millard, too, felt the influence of the ideologies of personal liberation that seemed to rise from San Francisco Bay like a fog. But unlike a Lee Felsenstein or a Bob Albrecht, he thought mainly of liberating himself, not others: he was a devotee of Erhard Seminars Training, or est, a self-help seminar which promised paying customers access to an understanding of the world-changing power of their will in just two weekends; according to Erhard, “If you keep saying it / the way it really is / eventually your word / is law in the universe.”[20] Neither Millard nor either of his technical employees (part-time programmer Bruce Van Natta and physicist-cum-electrical engineer Joseph Killian) had any prior interest or experience in home computers; they stumbled into the business almost by accident. Their primary contract, to build a computer networking hub for car dealerships based on a DEC computer, had begun spiraling towards failure. Casting about for some solution, they latched onto the news of Altair’s success: here was an inexpensive alternative to the DEC. When MITS refused to deliver on their timetable, they decided, in late summer of 1975, to clone the Altair instead. And, to get cash flow going to pay their expenses and loans, they would sell their clone direct to consumers as well, while working to complete the big contract. When orders from hobbyists began to pour in, they abandoned the automotive scheme altogether to go all-in on their Altair clone.[21] The IMSAI 8080. It closely resembles the Altair, but with cleaner design and higher quality front-panel components. [Morn] The IMSAI 8080 began shipping in December 1975, at a kit price of $439. Millard cultivated an est culture at the company; employees with the “training” were favored, and total commitment to the work was expected. Some employees considered Millard a “genius or a prophet”; spouses and children of employees showed up after school to help assemble computers. By April, they were doing hundreds of thousands of dollars per month in sales. IMSAI was board-compatible with MITS but made improvements that stood out to the connoisseur: a more efficient internal layout, a cleaner and more professional exterior, and a seriously beefed-up power supply that could support a case fully loaded with expansion boards. 
These advantages appealed enough to buyers to make it Altair’s top competitor in 1976.[22] But what most set IMSAI apart in 1976 was the fact that it was not led by hobby entrepreneurs, but by a businessman who wanted to build business machines. An advertisement in the May 1976 issue of BYTE magazine described the IMSAI as a “rugged, reliable, industrial computer with high commercial-type performance,” as opposed to “Altair’s hobbyist kit” (the IMSAI was of course also sold as a kit), along with obscure allusions to expensive IMSAI business products (Hypercube and Intelligent Disk) that never materialized. This was an odd pretense to put on while advertising in BYTE—a publication featuring articles such as “More to Blinking Lights than Meets the Eye” and “Save Money Using Mini Wire Wrap.” This is not to say that IMSAI (or its contemporaries) had no commercial customers or applications. Alan Cooper, known later for creating Visual Basic, wrote a basic accounting program for the IMSAI in 1976 called General Ledger. But such applications remained a rarity among the mass of buyers, most of whom were simply computer-curious.[23] In 1977, IMSAI began advertising a “megabyte micro,” another fantasy. Such a powerful and expensive machine could sell in the higher end of the minicomputer market, but not to IMSAI’s actual buyers, hobbyists who were buying kits for less than a thousand dollars out of retail storefronts. IMSAI tried again to attract serious business customers with its second major product, the all-in-one VDP-80, which began shipping in late 1977 with an integrated keyboard, display, and dual disk drives, but it was plagued with quality defects, and lacked any application software for its would-be business customers to use.[24] Those customers did arrive in large numbers in good time, but only after a second wave of all-in-one computers appeared, aimed at the mass market, and after the emergence of useful application software to run on them.

Interactive Computing: A Counterculture

In 1974, Ted Nelson self-published a very unusual book. Nelson lectured on sociology at the University of Illinois at Chicago to pay the bills, but his true calling was as a technological revolutionary. In the 1960s, he had dreamed up a computer-based writing system which would preserve links among different documents. He called the concept “hypertext” and the system to realize it (always half-completed and just over the horizon) “Project Xanadu.” He had become convinced in the process that his fellow radicals had computers all wrong, and he wrote his book to explain why. Among the activist youth of the 1960s counterculture, the computer had a wholly negative image as a bureaucratic monster, the most advanced technology yet for allowing the strong to dominate the weak. Nelson agreed that computers were mostly used in a brutal way, but offered an alternative vision for what the computer could be: an instrument of liberation. His book was really two books bound together, each with its own front cover—Computer Lib and Dream Machines—allowing the book to be read from either side until the two texts met in the middle. Computer Lib explained what computers are and why it is important for everyone to understand them, and Dream Machines explained what they could be, when fully liberated from the tyranny of the “priesthood” that currently controlled not only the machines themselves, but all knowledge about them. “I have an axe to grind,” Nelson wrote, I want to see computers useful to individuals, and the sooner the better, without necessary complication or human servility being required. …THIS BOOK IS FOR PERSONAL FREEDOM AND AGAINST RESTRICTION AND COERCION. … A chant you can take to the streets: COMPUTER POWER TO THE PEOPLE! DOWN WITH CYBERCRUD![1] If the debt Nelson’s cri de coeur owed to the 1960s counterculture wasn’t clear enough, Nelson made it explicit by listing his “Counterculture Credentials” as a writer, showman, “Onetime seventh-grade dropout,” “Attendee of the Great Woodstock Festival,” and more, including his astrological sign.[2] The front covers of Ted Nelson’s “intertwingled” book, Computer Lib / Dream Machines. Nelson’s manifesto is the most powerful piece of evidence of one popular way to tell the story of the rise of the personal computer: as an outgrowth of the 1960s counterculture. Surely more than geographical coincidence accounts for the fact that Apple Computer was born on the shores of the same bay where, not long before, Berkeley radicals had protested and Haight-Ashbury deadheads had partied? The common through line of personal liberation is clear, and Nelson was not the only countercultural figure who wanted to bring computer power to the people. Lee Felsenstein, a Berkeley engineering drop-out (and then eventual graduate) with much stronger credentials in radical politics than Nelson, invested much of his time in the 1970s on projects to make computers more accessible such as Community Memory, which offered a digital bulletin board via public computer terminals set up at several locations in the Bay Area. In Menlo Park, likewise, anyone off the street could come in and use a computer at Bob Albrecht’s People’s Computer Company. Both Felsenstein and Albrecht had clear and direct ties to the early personal computer industry, Felsenstein as a hardware designer and Albrecht as a publisher. 
The two most seminal early accounts of the personal computer’s history, Steven Levy’s Hackers: Heroes of the Computer Revolution, and Paul Freiberger and Michael Swaine’s Fire in the Valley: The Making of the Personal Computer, both argued that the personal computer came into existence because of people like Felsenstein and Albrecht (whom Levy called the long-haired, West Coast “hardware hackers”), and their emphasis on personal liberation through technology. John Markoff extended this argument to book length with What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer. Stewart Brand put it succinctly in a 1995 article in Time magazine: “We Owe it All to the Hippies.”[3] This story is appealing, but not quite right. The influence of countercultural figures in promoting personal computing was neither necessary, nor sufficient, to explain the sudden explosion of interest in the personal computer caused by the Altair. Not necessary, because the Altair existed primarily because of two people who had nothing to do with the radical left or hippie idealism: the Albuquerque Air Force veteran and electronics lover Ed Roberts, and the New York hobby magazine editor Les Solomon. Not sufficient, because it addresses only supply, not demand: why, when personal computers did become available, were there many thousands of takers out there looking to buy the personal liberation that men like Nelson and Albrecht were selling? These people were not, for the most part, hippies or radicals either. The countercultural narrative seems plausible when one zooms in on the activities happening around the San Francisco Bay, but the personal computer was a national phenomenon; orders for Altairs poured in to Albuquerque from across the country. Where did all of these computer lovers come from? Getting Hooked In the 1950s, researchers at a laboratory affiliated with MIT synthesized an electronic concoction that, in the decades to come, would transform the world. The surprising byproduct of work on an air defense system, it proved to be highly addictive, at least to those of a certain personality type: inquisitive and creative, but also fascinated by logic and mathematics. The electronic computer, as originally conceived in the 1940s, emulated a room full of human computers. You provided it with a set of instructions for performing a complex series of calculations—a simulation of an atomic explosion, say, or the proper angle and explosive charge required to get an artillery piece to hit a target at a given distance—and then came back later to pick up the result. A “batch-processing” culture of computing developed around this model, where computer users brought a computer program and data to the computer’s operators in the form of punched cards. These operators collected the cards into batches, fed them to the computer for processing, and later extracted the results on a new set of punched cards. The user then picked up the results and either walked away happy or (more often) noticed an error, scrutinized their program for bugs, made adjustments, and tried again. By the early 1960s, this batch-processing culture had become strongly associated with IBM, which had parlayed its position as the leader in mechanical data-processing equipment into dominance of electronic computing as well. 
However, the military faced many problems that could not be pre-calculated and that required an instantaneous decision, calling for a “real-time” computer that could provide an answer to one question after another, with seconds or less between each response. The first fusion of real-time problem solving with the electronic computer came in the form of a flight simulator project at MIT under the leadership of electrical engineer Jay Forrester, which, through a series of twists and turns and the stimulus of the Cold War, evolved into an air defense project with the backronym of Semi-Automatic Ground Environment (SAGE). Housed at Lincoln Laboratory, a government facility about fifteen miles to the northwest of MIT, SAGE became a mammoth project that spawned an entirely new form of computing as an accidental side effect. An operator interacting with a SAGE terminal with a light gun. The SAGE system demanded a series of powerful computers (to be constructed by IBM), two for each of the air defense centers to be built across North America (one acted as a back-up in case the other failed). Each would serve multiple cathode-ray screen terminals showing an image of incoming radar blips, which the operator could select to learn more information and possibly marshal air defense assets against them. At first, the project leads assumed these computer centers would use vacuum tubes, the standard logic component for almost all computers throughout the 1950s. But the invention of the transistor offered the opportunity to make a smaller and more reliable solid-state computer. So, in 1955-56, Wesley Clark and Ken Olsen oversaw the design and construction of a small, experimental transistor-based computer, TX-0, as a proof-of-concept for a future SAGE computer. Another, larger test machine called TX-2 followed in 1957-58.[4] The most historically significant feature of these computers, however, was the fact that, after being completed, they had no purpose. Having proved that they could be built, their continued existence was superfluous to the SAGE project, so these very expensive prototypes became Clark’s private domain, to be used more or less as he saw fit. Most computers operated in batch-processing mode because it was the most efficient way to use a very expensive piece of capital equipment, keeping it constantly fed with work to do. But Clark didn’t particularly care about that. Lincoln Lab computers had a tradition of hands-on use, going all the way back to the original flight simulator design, which was intended for real-time interaction with a pilot, and Clark believed that real-time access to a computer assistant could be a powerful means for advancing scientific research.[5] The TX-0 at MIT, in a photograph likely taken in the late 1950s. And so, a number of people at MIT and Lincoln Lab got to have the experience of simply sitting down and conversing directly with the TX-0 or TX-2 computer. Many of them got hooked on this interactive mode of computing. The process of instant feedback from the computer when trying out a program, which could then be immediately adjusted and tried again, felt very much like playing a game or solving a puzzle. Unlike the batch-processing mode of computing that was standard by the late 1950s, in interactive computing the speed at which you got a response from the computer was limited primarily by the speed at which you could think and type. When a user got into the flow, hours could disappear like minutes. J.C.R. 
Licklider was a psychologist employed to help with SAGE’s interface with its human operators. The experience of interacting with the TX-0 at Lincoln Lab struck him with the force of revelation. He thereafter became an evangelist for the power of interactive computers to multiply human intellectual power via what he called “man-computer symbiosis”: Men will set the goals and supply the motivations, of course, at least in the early years. They will formulate hypotheses. They will ask questions. They will think of mechanisms, procedures, and models. … The equipment will answer questions. It will simulate the mechanisms and models, carry out the procedures, and display the results to the operator. It will transform data, plot graphs … In general, it will carry out the routinizable, clerical operations that fill the intervals between decisions.[6] Ivan Sutherland was another convert: he developed a drafting program called Sketchpad on the TX-2 at Lincoln Lab for his MIT doctoral thesis and later moved to the University of Utah, where he became the founding father of the field of computer graphics. Lincoln also shipped the TX-0, entirely surplus to its needs after the arrival of TX-2, to the MIT Research Laboratory of Electronics (RLE), where it became the foundation—the temple, the dispensary—for a new “hacker” subculture of computer addicts, who would finagle every spare minute they could on the machine, roaming the halls of the RLE well past midnight. The hackers compared the experience of being in total control of a computer to “getting in behind the throttle of a plane,” “playing a musical instrument,” or even “having sex for the first time”: hyperbole, perhaps, similar to Arnold Schwarzenegger’s famous claim about the pleasures of pumping iron.[7] It is worth pausing to note here the extreme maleness of this group: not a single woman is mentioned among the MIT hackers in Steven Levy’s eponymous book on the topic. This is unsurprising since very few women attended MIT; until 1960 they were technically allowed but not encouraged to enroll. But this severe imbalance of the sexes did not change much with time. Almost all the people who got hooked on computers as interactive computing spread beyond MIT were also men. It was certainly not the case that the computing profession as a whole was overwhelmingly male circa 1960: at that time women probably occupied a third or more of all programming jobs. But at the time, almost all of those jobs involved neatly coiffed business people running data processing workloads in large corporate or government offices, not disheveled hackers clacking away at a console into the wee hours. For whatever reason, men showed a much greater predilection than women to get lost in the rational yet malleable corridors of the digital world, to enjoy using computers for the sake of using computers. This fact likely produced the eventual transformation of computer science into an overwhelmingly male field, a development we may revisit later in this story. But for now, back to the topic at hand.[8] Minicomputers: The DIY Computer While Clark was exploring the potential of computers as a scientific instrument, his engineering partner, Ken Olsen, saw the market potential for selling small computers like the TX-0. Having worked closely with IBM on the SAGE contract, he came away unimpressed with their bureaucratic inefficiency. 
He thought he could do better, and, with help from one of the first venture capital firms and Harlan Anderson, another Lincoln alum, he went into business. Warned by the head of the firm to avoid the term “computer,” which would frighten investors with the prospects of an expensive uphill struggle against established players like IBM, Olsen called his company Digital Equipment Corporation, or DEC.[9] In 1957, Olsen set up shop in an old textile mill on the Assabet River about a half-hour west of Lincoln Lab. There the company remained until the early 1990s, at the end of Olsen’s tenure and the beginning of the company’s terminal decline. Olsen, an abstemious, church-going Scandinavian, stayed in suburban Massachusetts for nearly all of his adult life; he and his wife lived out their last years with a daughter in Indiana. It is hard to imagine someone who less embodies the free-wheeling sixties counterculture than Ken Olsen. But his business became the vanguard for and symbol of a computer counterculture, one that would raise a black flag of rebellion against the oppressive regime of IBM-ism and spread the joy of interactive computing far beyond MIT, sprinkling computer addicts across the country. DEC began selling its first computer, the PDP-1 (for Programmed Data Processor), in 1959. Its design bore a fair resemblance to that of the TX-0, and proved similarly addictive to young hackers when one was donated to MIT in 1961. A whole series of other models followed, but the most ground-breaking was the PDP-8, released in 1965: a computer about the size of a blue USPS collection box for just $18,000. Not long after, someone (certainly not the straitlaced Olsen) began calling this kind of small computer a minicomputer, by analogy to the newly-popular miniskirt. A DEC ad campaign described PDP-8 computers as “approachable, variable, easy to talk to, personal machines.” A 1966 advertisement depicting various PDP-8 models juxtaposed with cuddly teddy bears. [Datamation, October 1966] Up to that point, the small, relatively inexpensive computers that did exist typically stored their short-term memory on the magnetized surface of a spinning mechanical drum. This put a hard ceiling on how fast they could calculate. But the PDP-8 used fast magnetic core memory, bringing high-speed electronic computing within reach of even quite small science and engineering firms, departments and labs. PDP-8s were also deployed as control systems on factory floors, and even placed on a tractor. They sold in large numbers, for a computer—50,000, all told, over a fifteen-year lifespan—and became hugely influential, spawning a whole industry of competing minicomputer makers, and later inspiring the design for Intel’s 4004 microprocessor.[10] In the early 1960s, IBM, under Thomas Watson, Jr., established itself as the dominant manufacturer of mainframe computers in the United States (and therefore, in effect, the world). Its commissioned sales force cultivated deep relationships with customers, which lasted well beyond the closing of the deal. IBM users leased their machines on a monthly basis, and in return they got access to an extensive support and service network, a wide array of peripheral devices (many of which derived from IBM’s pre-existing business as a maker of punched-card processing machinery), system software, and even application software for common business needs like payroll and inventory tracking. 
IBM expected their mainframe customers to have a dedicated data processing staff, independent from the actual end users of the computer, people responsible for managing the computer’s hardware and software and their firm’s ongoing relationship with IBM.[11] DEC culture dispensed with all of that; it became a counter-culture, representing everything that IBM was not. Olsen expected end users to take full ownership of their machine in every sense. The typical buyer was expected to be an engineer or scientist: an expert on their own needs, who could customize the system for their application, write their own software, and administer the machine themselves. IBM had technical staff with the interest and skills needed to build interactive systems. Andy Kinslow, for example, led a time-sharing project (more on time-sharing shortly) at IBM in the mid-1960s; he wanted to give engineers like himself that hands-on-the-console experience that the MIT hackers had fallen in love with. But the eventual product, TSS/360, had serious technical limitations at launch in 1967, and was basically ignored by IBM afterwards.[12] This came down to culture: IBM’s product development and marketing were driven by the needs of their core data-processing customers, who wanted more powerful batch-processing systems with better software and peripheral support, not by the interests of techies and academics who wanted hands-on computer systems and didn’t mind getting their hands dirty. And so, the latter bought from DEC and other smaller outfits. As an employee of Scientific Data Systems (another successful computer startup of the 1960s) put it: There was, of course, heavy spending on scientific research throughout the sixties, and researchers weren’t like the businessmen getting out the payroll. They wanted a computer, they were enchanted with what we had, they loved it like Ferrari or a woman. They were very forgiving. If the computer was temperamental you’d forgive it, the way you forgive a beautiful woman.[13] DEC customers included federally-funded laboratories, engineering firms, technical divisions of major corporate conglomerates, and, of course, universities. They worked predominantly on real-time projects in which a computer interacted directly with human users or some kind of industrial or scientific equipment: doing on-demand engineering calculations for a chemical manufacturer, controlling tracing machinery for physics data analysis, administering experiments for psychological research, and more.[14] They shared knowledge and software through a community organization called DECUS, the Digital Equipment Computer Users’ Society. IBM users had founded a similar organization, SHARE, in 1955, but it had a different culture from the start, one that derived from the data-processing orientation of IBM. SHARE’s structure assumed that each participating organization had a computing center, distinct from its other operational functions, and it was the head of that computing center who would participate in SHARE and collaborate with other sites on building systems software (operating systems, assemblers, and the like). The end users of computers, who worked outside the computing center, could not participate in SHARE at all, in the beginning. 
At most DEC sites, no such distinction between users and operators existed.[15] My father, a researcher specializing in computerized medical records, was part of the DEC culture, and co-authored at least one paper for DECUS: CJ McDonald and B Bhargava, “Ambulatory Care Information Systems Written in BASIC-Plus,” DECUS Proceedings (Fall 1973). Here he is pictured at top left, in 1973, in the terminal room for his research institute’s PDP-11. [Regenstrief Institute] DECUS, like SHARE, maintained an extensive program library: routines for reading and writing to peripheral devices, assembling and compiling human-readable code into machine language, debugging running programs, calculating math functions not supported by hardware (e.g., trigonometric functions, logarithms, and exponents), and more. Maintaining the library required procedures for reviewing and distributing software: in 1963, for example, users contributed fifty programs, most of which were reviewed by at least two other users, and seventeen of which were certified by the DECUS Programming Committee.[16] Aflame with the possibilities of interactive computing to revolutionize their fields of expertise, from education to clinical medicine, DEC devotees sometimes let their reach exceed their grasp: at one DECUS meeting, Air Force doctor Joseph Mundie reminded “the computer enthusiasts,” with gentle understatement, “that even the PDP computer had a few shortcomings when making medical diagnoses.”[17] Though none achieved the market share of DEC, a number of competing minicomputer makers also flourished in the late 1960s in the wake of the PDP-8. They included start-ups like Data General (founded by defectors from DEC, just up the Assabet River in Hudson, Massachusetts), but also established electronics firms like Honeywell, Hewlett-Packard, and Texas Instruments. Many thousands of units were sold, exposing many more thousands of scientists and engineers to the thrill of getting their hands dirty on a computer in their own lab or office. Even among the technical elite at MIT, administrators had considered the hackers’ playful antics with the TX-0 and PDP-1 in the late 1950s and early 1960s a grotesque “misappropriation of valuable machine time.” But department heads acquiring a small ten- or twenty-thousand-dollar computer had much less reason to worry about wastage of spare cycles, and even if they did, most lacked a dedicated operational staff to oversee the machine and ensure its efficient use. Users were left to decide for themselves how to use the computer, and they generally favored their own convenience: hands on, interactive, at the terminal. But even while minis were allowing thousands of ordinary scientists and engineers to enjoy the thrill of having an entire computer at their disposal, another technological development began spreading a simulacrum of that experience among an even wider audience.[18] Time-Sharing: Spreading The Love As we have already seen, a number of people got hooked on interactive computing in and around MIT by 1960, well before the PDP-8 and other cheaper computers became available. Electronic computers could perform millions of operations per second, but in interactive mode, all of that capacity sat unused while the human at the console was thinking and typing. 
Most administrators—those with the responsibility for allocating limited organizational budgets—recoiled at the idea of allowing a six- or seven-figure machine to sit around idle, wasting that potential processing power, just to make the work of engineers and scientists a bit more convenient. But what if it wasn’t wasted? If you attached four, or forty, or four hundred, terminals to the same computer, it could process the input from one user while waiting for the input from the others, or even process offline batch jobs in the interim between interactive requests. From the point-of-view of a given terminal user, as long as the computer was not overloaded with work, it would still feel as if they had interactive access to their own private machine. The strongest early proponent of this idea of time-sharing a computer was John McCarthy, a mathematician and a pioneer in artificial intelligence who came from Dartmouth College to MIT primarily to get closer access to a computer (Dartmouth had no computer of its own at the time). Unsatisfied with the long turnaround that batch-processing imposed on his exploratory programming, he proposed time-sharing as a way of squaring interactive computing with the other demands on MIT’s Computation Center.[19] McCarthy’s campaigning eventually spurred an MIT group led by Fernando “Corby” Corbató to develop the Compatible Time-Sharing System (CTSS)—so-called because it could operate concurrently with the existing batch-processing operations on the Computation Center’s IBM computer. McCarthy also directed the construction of a rudimentary time-sharing system on a PDP-1 at Bolt, Beranek, and Newman, a consulting firm with close ties to MIT. This proved that a less powerful computer than an IBM mainframe could also support time-sharing (albeit on a smaller scale), and indeed even PDP-8s would later host their own time-sharing systems: a PDP-8 could support up to twenty-four separate terminals, if configured with sufficient memory.[20] The most important next steps taken to extend the reach of time-sharing specifically, and interactive computing generally, occurred at McCarthy’s former employer, Dartmouth. John Kemeny, head of the Dartmouth math department, enlisted Thomas Kurtz, a fellow mathematician and liaison to MIT’s Computation Center, to build a computing center of their own at Dartmouth. But they would do it in a very different style. Kemeny was one of several brilliant Hungarian Jews who fled to the U.S. to avoid Nazi persecution. Though of a younger generation than his more famous counterparts such as John von Neumann, Eugene Wigner, and Edward Teller, he stood out enough as a mathematician to be hired onto the Manhattan Project as a mere Princeton undergraduate in 1943. His partner, Kurtz, came from the Chicago suburbs, but also passed through Princeton’s elite math department, as a graduate student. He began doing numerical analysis on computers right out of college in the early 1950s, and his loyalties lay more with the nascent field of computer science than with traditional mathematics. Kurtz (left) and Kemeny (right), inspecting a GE flyer for a promotional shot. The pair started in the early 1960s with a small drum-based Librascope LGP-30 computer, operated in a hands-on, interactive mode. By this time both men were convinced that computers had acquired a civilizational import that would only grow. 
Having now seen undergraduates write successful programs in LGP-30 assembly, they also became convinced that understanding and programming computers should be a required component of a liberal education. This kind of expansive thinking about the future of computing was not unusual at the time: other academics were writing about the impact of computers on libraries, education, commerce, privacy, politics, and law. As early as 1961, John McCarthy was giving speeches about how time-sharing would lead to an all-encompassing computer utility that would offer a wide variety of electronic services served up from computers to home and office terminals via the medium of the telephone network.[21] Kurtz proposed that a new, more powerful computer be brought to Dartmouth, one that would be time-shared (at the suggestion of McCarthy), with terminals directly accessible to all undergraduates: the computer equivalent of an open-stack library. Kemeny applied his political skills (which would eventually bring him the presidency of the university) to sway Dartmouth’s leaders while Kurtz secured grants from the NSF to cover the costs of a new machine. General Electric, which was trying to elbow its way into IBM’s market, agreed to a 60% discount on the two computers Kemeny and Kurtz wanted: a GE-225 mainframe for executing user programs and a Datanet-30 (designed as a message-switching computer for communication networks) for exchanging data between the GE-225 and the user terminals. They called the combined system the Dartmouth Time-Sharing System (DTSS). It did not only benefit Dartmouth students: the university became a regional time-sharing hub through which students at other New England colleges and even high schools got access to computing, over remote terminals connected to DTSS by telephone; by 1971 this included fifty schools in all, encompassing a total user population of 13,000.[22] Kemeny teaching Dartmouth students about the DTSS system in a terminal room. Beyond this regional influence, DTSS made two major contributions of wider significance to the later development of the personal computer. First was a new programming language called BASIC. Though some students had proved apt with machine-level assembly language, it was certainly too recondite for most. Both Kemeny and Kurtz agreed that to serve all undergraduates, DTSS would need a more abstract, higher-level language that students could compile into executable code. But even FORTRAN, the most popular language of the time in science and engineering fields, lacked the degree of accessibility they strove for. As Kurtz later recounted, by way of example, it had an “almost impossible-to-memorize convention for specifying a loop: ‘DO 100, I = 1, 10, 2’. Is it ‘1, 10, 2’ or ‘1, 2, 10’, and is the comma after the line number required or not?” They devised a more approachable language, implemented with the help of some exceptional undergraduates. The equivalent BASIC loop syntax, FOR I = 1 TO 10 STEP 2, demonstrates the signature feature of the language, the use of common English words to create a syntax that reads somewhat like natural language (a short illustrative listing appears just below).[23] The second contribution was DTSS’ architecture itself, which General Electric borrowed to set up its own time-sharing services, not once, but twice: the GE-235 and Datanet-30 architecture became GE’s Mark I time-sharing system, and a later DTSS design based on the GE-635 became GE’s Mark II time-sharing system. 
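To give a concrete sense of how BASIC’s English-like keywords read in practice, here is a minimal illustrative program (my own sketch, not a historical Dartmouth listing) that prints the odd numbers from 1 to 9 using exactly the loop syntax quoted above:

10 REM COUNT BY TWOS, USING THE FOR/STEP LOOP DISCUSSED ABOVE
20 FOR I = 1 TO 10 STEP 2
30 PRINT I
40 NEXT I
50 END

Even a student who had never seen a program before could make a reasonable guess at what FOR, PRINT, and NEXT are doing—precisely the accessibility Kemeny and Kurtz were after.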
By 1968, many firms had set up time-sharing computer centers to which customers could connect computer terminals over the telephone network, paying for time by the hour. Over 40% of this $70 million market (comprising tens of thousands of users) belonged to GE and its Dartmouth-derived systems. The paying customers included Lakeside School in Seattle, whose Mother’s Club raised the funds in 1968 to purchase a terminal with which to access a GE time-sharing center. Among the students exposed to programming in BASIC at Lakeside were eighth-grader Bill Gates and tenth-grader Paul Allen.[24] Architecture of the second-generation DTSS system at Dartmouth, circa 1971. GE’s marketing of BASIC through its time-sharing network accelerated the language’s popularity, and BASIC implementations followed for other manufacturers’ hardware, including DEC and even IBM. By the 1970s, helped along by GE, BASIC had established itself as the lingua franca of the interactive computing world. And what BASIC users craved, above all, were games.[25] A Culture of Play Everywhere that the culture of interactive computing went, play followed. This came in the obvious form of computer games, but also in a general playful attitude towards the computer, with users treating the machine as a kind of toy and the act of programming and using it as an end in itself, rather than a means towards accomplishing serious business. The most famous instance of this culture of play in the early years of MIT hacking came in the form of the contest of reflexes and wills known as Spacewar!. The PDP-1 was unusual for its time in having a two-dimensional graphical display in the form of a circular cathode-ray-tube (CRT) screen. Until the mid-1970s, most people who interacted with computers did so via a teletype. Originally invented for two-way telegraphic messaging, these machines could take in user input like a normal typewriter, send that input over the wire to a remote recipient (the computer in this case), and then automatically type out the characters received over the wire in response. Because of its origins in the SAGE air defense program, however, the MIT PDP-1 also came equipped with a screen designed for radar displays. The MIT hackers had already exercised their playfulness in the form of several earlier games and graphical demos on the TX-0, but it was a hanger-on with no official university affiliation named Stephen “Slug” Russell who created the initial version of Spacewar!, inspired by the space romances of E.E. “Doc” Smith. The game reached a useable form by about February 1962, allowing two players controlling rocket ships to battle across the screen, hurling torpedoes at one another’s spaceships. Other hackers quickly added enhancements: a star background that matched Earth’s actual night sky, a sun with gravity, hyperspace warps to escape danger, a score counter, and more. The resulting game was visually exciting, tense, and skill-testing, encouraging the MIT hackers to spend many late nights blasting each other out of the cosmos.[26] Spacewar!’s dependence on a graphical display limited its audience, but Stanford became a hotbed of Spacewar! after John McCarthy moved there in 1962, and its use is also well-attested at the University of Minnesota. In 1970, Nolan Bushnell started his video game business (originally called Syzygy, later Atari) to create an arcade console version of the game, which he called Computer Space. 
The game’s influence lasted into the 1990s, with the release of the game Star Control and its epic sequel (The Ur-Quan Masters), which introduced the classic duel around a star to my generation of hobbyists.[27] The large majority of minicomputer users who lacked a screen did not, however, lack for games. Teletype games relied on text input and output, but could be just as compelling, ranging from simple guessing games up to rich strategy games like chess. Enthusiasts exchanged paper tapes among themselves, but DECUS also helped to spread information about games and game programs among the DEC user base. The very first volume of the DECUS newsletter, DECUSCOPE, from 1962, contains an homage to Spacewar!, and a simple dice game appeared in the program library available to all members in 1964. By November 1969, the DECUS software catalog listed thirty-seven games and demos, including simple games like hangman and blackjack, but also more sophisticated offerings like Spacewar! and The Sumer Game, a Bronze Age resource-management simulation. The catalog of scientific and engineering applications, the primary reason for most owners to have a minicomputer in the first place, numbered fifty-eight.[28] Playfulness could also be expressed in forms other than actual games. The MIT hackers, for example, wrote a variety of programs simply for the fun of it: a tinny music generator, an Arabic to Roman numeral converter, an “Expensive Desk Calculator” for doing simple arithmetic on the $120,000 PDP-1, an “Expensive Typewriter” for composing essays. Using the computer to efficiently achieve some real-world outcome did not necessarily enter their minds: many worked on tools for writing and debugging programs without much thought to using the tools for anything other than more play; often “the process of debugging was more fun than using a program you’d debugged.” As the interactive computing culture expanded from minicomputers to time-sharing systems, fewer and fewer of its acolytes had the heightened taste and technical skill required to extract joy from the creation of compilers and debuggers; but many of these new users could create computer games in BASIC, and all could play them. By about 1970, BASIC gaming had become by far the most widespread culture of computer-based play (though not the only one; the University of Illinois / Control Data Corporation PLATO system, for example, constituted its own, distinct sub-culture). As with the earlier minicomputer teletype games, almost all of these BASIC games had textual interfaces, because hardly anyone yet had access to a graphical display. Dave Ahl, who worked at DEC as an educational marketing manager, began including code listings for BASIC games in his promotional newsletter, EDU. Some were of his own creation (like a conversion of The Sumer Game called Hammurabi), others were contributed by high school and college students using DEC systems at school. They proved so popular that DEC published a compilation in 1973, 101 BASIC Computer Games, which went through three printings. When he left the company, Ahl wisely retained the rights, and went on to sell over a million copies to computer buyers in the 1980s.[29] While many of these games were derivative of existing board or card games, others, like Spacewar!, created whole new forms of play, unique to the computer. Unlike Spacewar!, most of these were single-player experiences that relied on the computer to hide information, gradually revealing a novel world to the user as they explored. 
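To make that pattern concrete, here is a toy listing of my own in the style of those early BASIC games—an illustration, not a historical program—in which the computer conceals a number and the player gropes toward it through feedback:

10 REM A TOY GUESS-THE-NUMBER GAME IN THE STYLE OF EARLY HOBBYIST LISTINGS
20 REM NOTE: RND SYNTAX VARIES BY DIALECT; RND(1) FOLLOWS MICROSOFT-STYLE BASIC
30 LET N = INT(10 * RND(1)) + 1
40 PRINT "I AM THINKING OF A NUMBER FROM 1 TO 10"
50 INPUT G
60 IF G = N THEN 110
70 IF G < N THEN 100
80 PRINT "TOO HIGH, TRY AGAIN"
90 GOTO 50
100 PRINT "TOO LOW, TRY AGAIN"
105 GOTO 50
110 PRINT "YOU GOT IT"
120 END

Even a listing this small shows much of the anatomy of the era’s game programs: numbered lines, a secret set up near the top, and a loop of prompts and branches that the player experiences only from the outside—unless they read the code itself.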
Hide and Seek, for example, a simple game written by high school students about searching a grid for a group of hiders, evolved into a more complex searching game called Hunt the Wumpus, with many later variants. Computer addicts overlapped substantially with Star Trek fans, and so a genre of Star Trek strategy games also emerged. The most popular version, in which the player hunts Klingons across the randomly-populated quadrants of the galaxy, originated with Mike Mayfield, an engineer who first wrote it for a Hewlett-Packard (HP) minicomputer (presumably the one he used at work). DECUS was not the only organization sharing program libraries, and Mayfield’s Star Trek became part of the HP library, from whence it found its way to Ahl, who converted it to BASIC. Other versions followed, such as Bob Leedom’s 1974 Super Star Trek.[30] The practices of the BASIC gaming community made it very easy for gaming lineages to evolve in this way, because every game was distributed textually, as BASIC code. If you were lucky, you got a paper or magnetic tape from which you could automatically read the code into your computer’s memory. If not (if you wanted to try out a game from Ahl’s book, for example), you were in for hours of tedious and error-prone typing. But in either case, you had total access to the raw source code. You could read it, understand it, and modify it. If you wanted to make Ahl’s Star Trek slightly easier, you could modify the phaser subroutine on line 3790 to do more damage. If you were more ambitious, you could go to line 1270 and add a new command to the main menu—make an inspiring speech to the crew, perhaps? A selection of the code listing for Civil War, a simulation game created by high school students in Lexington, Massachusetts, in 1968, and included in Ahl’s 101 BASIC Computer Games book. Typing something like this into your own computer required a great deal of patience. [Ahl, 101 Basic Computer Games, 81] Perhaps the most prolific game author of the era, Don Daglow, got hooked on a DEC PDP-10 in 1971 through a time-sharing terminal installed in his dorm at Pomona College, east of Los Angeles. Over the ensuing years he authored his own version of Star Trek, a baseball game, a dungeon-exploration game based on Dungeons & Dragons, and more. His extended career owed much to his extended time at Pomona, where he had consistent access to the computer: nine years in total as an undergraduate, graduate student, and then instructor.[31] By the early 1970s, many thousands of people like Daglow had discovered the malleable digital world that lived inside of computers. If you could master its rules, it became an infinite erector set, out of which you could reconstruct an ancient long-dead civilization, or fashion a whole galaxy full of hostile Klingons. But unlike Daglow, most of these computer lovers were kept at arm’s length from the object of their desire. Perhaps they could use the university computer at night while they were undergraduates, but lost that privilege upon graduation a few years later. Perhaps they could afford to rent a few hours of access to a time-sharing service each week, perhaps they could visit a community computing center (like Bob Albrecht’s in Menlo Park), perhaps, like Mike Mayfield, they could cadge a few hours on the office computer for play after hours. But best of all would be a computer at home, to call their own, to use whenever the impulse struck. Out of such longings came the demand for the personal computer. 
Next time we will look in detail at the story of how that demand was satisfied, and by whom.

Discovering Interactivity

The very first electronic computers were idiosyncratic, one-off research installations.1 But as they entered the marketplace, organizations very quickly assimilated them into a pre-existing culture of data-processing – one in which all data and processes were represented as stacks of punched cards. Herman Hollerith developed the first tabulator capable of reading and counting data based on holes in paper cards, for the United States census in the late nineteenth century. By the middle of the following century, a whole menagerie of descendants of that machine had proliferated across large businesses and government offices worldwide. Their common language was a card consisting of a series of columns, with each column (typically) representing a digit that could be punched in one of ten places to represent 0 through 9.2 The punching of input data into cards required no expensive machinery, and could be distributed among the various offices across the organization which generated that data. When that data needed processing – for example, to compute the revenue numbers for the sales department’s quarterly report – the relevant cards could then be brought to a data center and be queued to be run through the necessary machines to produce a set of output data on cards or printed paper. Around the central processing machines – tabulators and calculators – lay a whole array of peripheral devices for gang punching, duplicating, sorting, and interpreting cards3. An IBM 285 tabulator, a popular piece of punched-card machinery in the 1930s and 40s By the latter half of the 1950s, virtually all computers were installed into this kind of “batch-processing” system. From the point of view of the typical end user in that sales department, very little had changed. You brought in a stack of cards to be processed, and received a printout or another stack of cards as the result of your job. In between, the cards were transformed from holes in paper into electronic signals and then back again, but that made little difference to you. IBM had been the dominant player in punched-card machinery, and remained the dominant player in electronic computers, in large part due to their existing sales relationships and wide range of peripheral equipment. They simply replaced their customers’ mechanical tabulators and calculators with a faster, more flexible data-processing engine. An IBM 704 set up for processing punched cards. The woman in the foreground is operating the card reader. This system of punched-card data processing had functioned smoothly for decades and showed no sign of decline – quite to the contrary. Yet in the late 1950s, a fringe subculture of computer researchers began to argue that this whole way of working should be overturned – the best way to use a computer, so they claimed, was to do so interactively. Rather than leaving a job and coming back later to pick up the results, the user should commune directly with the machine, and summon its powers as needed. In Capital, Marx described how industrial machines – merely set in motion by men – supplanted tools directly controlled by a human hand.4 Computers, however, began life as machines. Only later did some of their users re-imagine them as tools. This re-imagining did not originate in the data processing centers at the likes of the U.S. Census Bureau, Metropolitan Life, or U.S.
Steel.5 For an organization trying to get this week’s payroll processed as efficiently and reliably as possible, it is hardly desirable to have someone disrupt that processing by futzing around on the computer. The value of being able to sit down at a console and just try things out was, however, more obvious to academic scientists and engineers, who wanted to explore a problem, to attack it from a variety of angles until a weak point was discovered, to alternate quickly between thought and action. So, the ideas came from academic researchers. But the money to pay for using a computer in such a profligate fashion did not come from their department heads. The new subculture (one might even say cult) of interactive computing grew out of a productive partnership between the American military and the elite among American universities. This mutually beneficial relationship began in World War II. Atomic weaponry, radar, and other wonder weapons had taught the military leadership that seemingly arcane academic preoccupations could turn out to have tremendous military significance. The coziness persisted for roughly a generation before disintegrating in the politics of another war, in Vietnam. In the interim, American academics had access to vast sums of money, with few questions asked, for almost anything that could be vaguely justified in relation to national defense. For interactive computers, the justification started with a bomb. Whirlwind and SAGE On August 29, 1949, a Soviet research team successfully detonated their first nuclear weapon at a test site in Kazakhstan. Three days later a U.S. Air Force reconnaissance flight over the North Pacific picked up traces of radioactive material from that test in the atmosphere. The Soviets had the bomb, and their American rivals now knew it. Tensions between the two had already been running high for over a year, since the Soviets had cut off the land routes into the Western-controlled sectors of Berlin, in response to plans by the Western powers to rebuild Germany as a strong economic power. The blockade ended in the spring of 1949, stymied by a massive operation by the Western Allies to supply the city by air. Tensions between the U.S. and the U.S.S.R. eased slightly. Nonetheless, those in charge of America’s national defense could not ignore the existence of a potentially hostile power with access to nuclear weapons, especially given the ever-increasing size and range of strategic bombers. The U.S. had a chain of radar stations for detecting incoming aircraft, constructed on the Atlantic and Pacific coasts during World War II. But these posts used outdated technology, did not cover the northern approaches over Canada, and were not supported by any central system to coordinate air defense. To devise a remedy for this situation, the Air Force (an independent branch of the U.S. military since 1947) convened an Air Defense Systems Engineering Committee (ADSEC). The group is better known to history as the Valley Committee, after its chair, George Valley. He was an MIT physicist, and a veteran of the wartime radar research group known as the Rad Lab, transformed after the war into the Research Laboratory of Electronics (RLE). After studying the problem with his committee for about a year, Valley issued his final report in October 1950. One might guess that such a report would be a stodgy mess of bureaucratese, culminating in a carefully hedged and largely conservative proposal.
Instead one finds a bracing piece of creative argumentation, and a radical and risky recommended course of action. It clearly owes a debt to another MIT professor, Norbert Wiener, who argued that the study of living creatures and machines could be unified under the single discipline of cybernetics. For Valley and his co-authors started from the premise that an air defense system is an organism – not metaphorically, but in actual fact. Radar stations serve as its sensory organs, interceptors and missiles as the effectors with which it can act in the world. Both operate under the control of a director, which uses sensory inputs to decide what actions to take. They further argued that a director composed purely of human elements would be incapable of stopping hundreds of incoming aircraft across millions of square miles in a matter of minutes, and that therefore as many as possible of the directive functions should be automated. Most radically, they concluded that the best means to automate the director would be via digital electronic computers, which could substitute for several areas of human judgment: analyzing incoming threats, directing weapons against those threats (calculating intercept courses and transmitting them to fighters in the air), and perhaps even strategizing about optimal response patterns. That computers would be suited to such a task was by no means obvious. At the time there existed exactly three functioning electronic computers in the entire United States, none of which came close to meeting the reliability requirements of a military system upon which millions of lives might hinge. Nor did any have the capability to respond to incoming data in real time. They were simply very fast, programmable number-crunchers. Nonetheless, Valley had reason to believe that a real-time digital computer was possible, because he knew about Project Whirlwind. It had begun during the war at MIT’s Servomechanisms Laboratory, under a young graduate student named Jay Forrester. Its original goal was to build a general-purpose flight simulator, one that could be reconfigured to support new models of aircraft without being rebuilt from scratch each time. A colleague convinced Forrester that his simulator should use digital electronics to process inputs from the pilot and generate new output states for the instruments. Gradually the effort to build this high-speed digital computer outgrew and overshadowed the original objective. With the flight simulator forgotten and the war that had occasioned it long since over, Whirlwind’s overseers at the Office of Naval Research (ONR) were becoming disenchanted with its ever-growing budget and ever-receding completion date. In 1950, the ONR drastically slashed Forrester’s budget for the coming year, with the intention of cutting off the project completely after that. But for George Valley, Whirlwind was a revelation. The actual Whirlwind computer had yet to become fully functional. But once it was… here was a computer that was not all disembodied mind. A computer with senses and effectors. An organism. Forrester had already mooted plans for expanding Whirlwind to become the centerpiece of a national military command-and-control system. To the computer experts on the ONR’s board, who saw computers as suited only to mathematical applications, this appeared grandiose and absurd. But it was just the kind of vision Valley was looking for, and he arrived just in time to save Whirlwind from oblivion.
Despite (or perhaps because of) its bold ambition, the Valley report had convinced the Air Force leadership, and they kicked off a massive new research and development program to first figure out how to build an air defense system centered on digital computers, and then build one.  The Air Force partnered with MIT to carry out the core research effort, a natural choice given the presence of Whirlwind and the RLE, and a history of successful air-defense collaboration going back to the Rad Lab during World War II. They dubbed the new effort Project Lincoln, and constructed new research facility, Lincoln Laboratory, to house it at Hanscom Field, about fifteen miles northwest of Cambridge. The Air Force called the overall computerized air defense project SAGE, a typically awkward military acronym for Semi-Automatic Ground Environment. Whirlwind would serve as the test-bed computer for proving out the concept before going to full-scale hardware production and deployment – a responsibility that would fall to IBM. The production version of Whirlwind to be built by IBM received the rather less evocative name of AN/FSQ-76. By the time the Air Force drew up its plans for the full SAGE system in 1954, it consisted of a variety of radar installations, air bases, and anti-aircraft weapons, all controlled from  twenty-three direction centers, massive blockhouses designed to survive an airstrike. To fill these centers, IBM would need to provide forty-six computers, not twenty-three, at a cost to the Air Force of many billions of dollars. That was because they still relied on vacuum tubes for their logic circuits, and vacuum tubes burnt out like light bulbs. Any of the tens of thousands of tubes in the active computer could fail at any moment. It would obviously be unacceptable to leave an entire sector of the nation’s air space undefended while technicians conducted their repairs, so a redundant machine was always kept at the ready. A SAGE direction center at Grand Forks Air Force Base in North Dakota, containing two AN/FSQ-7 computers. Each direction center contained dozens of operators in front of cathode-ray tube (CRT) screens, each monitoring a sub-sector of the surrounding airspace. The computer tracked any potential airborne threats and rendered them as tracks onto the screens. An operator could use a light gun to pull up additional details on a track and issue commands for a defense response, which the computer transformed into a printed message to an available missile battery or air force base. The Virus of Interactivity Given the nature of the SAGE system – a direct, real-time interaction between human operators and a digital computer via CRT, light gun, and console – it is not surprising that  Lincoln Laboratory incubated the first cohort of interactive computing devotees. In fact, the entire computing culture at Lincoln Lab was an insular bubble, cut off from the batch-processing norm that was developing in the commercial world. Researchers used Whirlwind and its descendants by reserving a block of time when they would have exclusive access to the computer. They were accustomed to using their hands, eyes and ears to interact directly with it via switches, keyboards, brightly-lit screens, and even a loudspeaker, with no paper intermediary. This odd little sub-culture spread into the wider world beyond Lincoln Lab like a virus, by direct physical contact. And if it was a virus, the closest thing to a patient zero was a young man named Wesley Clark. 
Clark dropped out of a graduate program in physics at Berkeley in 1949, to become a technician at a nuclear weapons plant. But he disliked the work. After reading several magazine articles about computers, he started looking for a way to get into what seemed an exciting new field, full of potential. He learned about the need for computer expertise at Lincoln from an advertisement, and in 1951 he moved East to take a position under Forrester, now head of Lincoln’s Digital Computer Laboratory. Wesley Clark in 1962, demonstrating his LINC biomedical computer Clark joined the Advanced Development Group, a subsection of the lab that epitomized the kind of laxity that prevailed in the military-university relationship in this period. Though technically a part of the Lincoln universe, the Advanced Development team existed in a bubble within a bubble, isolated from the immediate needs of the SAGE project and free to pursue any computing work that could be loosely tied to Lincoln’s air-defense mission. Their main task in the early 1950s was to bring up the Memory Test Computer (MTC), designed to prove out a new, highly efficient and reliable form of digital storage known as core memory, which would replace the finicky CRT-based storage that Whirlwind used. Since the MTC had no users other than the engineers bringing it to life, Clark had total control of the computer for hours at a time. Clark had become interested in the trendy cybernetic admixture of physics, physiology, and information theory through an older colleague, Belmont Farley, who had contacts with a biophysics group in the RLE, down in Cambridge. Clark and Farley spent many of their long sessions with the MTC building neural network models in software, to investigate the properties of self-organizing systems. Out of these experiences with Farley and the MTC, Clark began to extract certain axiomatic principles of computer engineering, from which he never strayed. In particular, he came to believe that “convenience [for the user] is the most important design factor.”7 In 1955 Clark teamed up with Ken Olsen, one of the designers of the MTC, to propose a plan to build a new computer that would point the way to the next generation of military command-and-control systems. By using a very large core memory for storage and transistors for its logic, it would be much more compact, reliable, and powerful than Whirlwind. They initially put forward a design they called TX-18, but the heads of Lincoln Lab rejected it as too costly and risky. Transistors had only entered the market a few years earlier, and only a handful of experimental computers had been built using transistor logic. So Clark and Olsen came back with a scaled-down proof-of-concept machine called TX-0, which was approved. The TX-0 computer For Clark the ostensible command-and-control rationale for TX-0, though obviously necessary to sell it to Lincoln’s sponsors, held far less interest than its ability to further his ideas about computer design. For him interactive computing had ceased to be a mere fact of life at Lincoln and had become a norm – the correct way to build and use computers, especially for scientific work. He opened the TX-0 up for use by the MIT biophysicists, though their work had no connection with Lincoln’s air defense mission, allowing them to use the machine’s visual display to analyze EEG data from sleep studies. No one seemed to mind.
TX-0 was successful enough that in 1956 Lincoln approved a full-scale transistorized computer, TX-2, with a massive two-million-bit memory. It would take about two years to complete this new project. After that, the virus began to escape the lab. With the TX-2 completed, Lincoln had no need for the earlier proof-of-concept machine, and so agreed to ship the TX-0 down to Cambridge, on loan to the RLE. It was installed on the second floor, above the batch-processing Computation Center. And it immediately infected students and professors across the MIT campus, who clamored for time slots which would give them complete control of a computer. It had already become clear that it was effectively impossible to write a computer program correctly on the first try. Moreover, for researchers exploring a new problem, it often wasn’t even clear at the outset what the correct behavior should be. But to get results from the Computation Center, one typically had to wait hours, or even overnight. To the dozens of nascent programmers on campus it was a revelation to go upstairs and be able to discover an error and instantly correct it, to try a new approach and instantly see improved results. Some used their time on TX-0 to complete serious science and engineering projects, but the joy of interactivity also called forth a more playful spirit. One student built a text editing program that he called Expensive Typewriter. Another student followed suit with Expensive Desk Calculator, which he used to do his numerical analysis homework. Ivan Sutherland demonstrating his Sketchpad drafting program on the TX-2 Meanwhile Ken Olsen and another TX-0 engineer, Harlan Anderson, impatient with the slow progress on TX-2, decided to bring small-scale, interactive computing for scientists and engineers to the market. They left Lincoln to found Digital Equipment Corporation, setting up shop in a former textile mill on the Assabet River, about ten miles west of Lincoln. Their first computer, the PDP-1 (released in 1961), was effectively a TX-0 clone. The TX-0 and Digital had begun to spread excitement about a new way of using computers beyond the confines of Lincoln Lab. Still, so far the interactive computing virus was geographically localized, confined to eastern Massachusetts. But that was about to change. Further Reading Lars Heide, Punched-Card Systems and the Early Information Explosion, 1880-1945 (2009) Joseph November, Biomedical Computing (2012) Kent C. Redmond and Thomas M. Smith, From Whirlwind to MITRE (2000) M. Mitchell Waldrop, The Dream Machine (2001)

The Electronic Age

We saw last time how the first generation of digital computers were built around the first generation of automatic electrical switch, the electromagnetic relay. But by the time those computers were built, another digital switch was already waiting in the wings. Whereas the relay was an electromechanical device (because it used electricity to control a mechanical switch), this new class of digital switches was electronic – founded on the new science of the electron, a science born around the turn of the twentieth century. This science concretized the carrier of electrical force as not a current, wave, or field, but as a solid particle. The device that gave birth to an electronic age, rooted in this new physics, became known (at least in the U.S.) as the vacuum tube. Conventionally, two men figure in the story of its creation: the Englishman Ambrose Fleming, and the American Lee de Forest. In fact, of course, its origins are more complex and woven from many threads, which criss-cross Europe and the Atlantic, and stretch back as far as the early Leyden jar experiments of the mid-eighteenth century. For the purposes of our story, however, it’s convenient, and illuminating (so to speak), to begin with Thomas Edison. Edison made a curious discovery in the 1880s as part of his work on a new kind of electric light, a discovery that sets the stage for our story. From there, further development of the vacuum tube was spurred by the demands of two other technological systems: a new form of wireless communication, and the ever-expanding long-distance telephone networks. Prologue: Edison Edison is, in the popular imagination, the inventor of the electric light bulb. This gives him both too much credit and too little. Too much credit, again, because Edison was not the only one to devise an incandescent bulb. In addition to a variety of pre-commercial predecessors, Joseph Swan and Charles Stearn in the U.K. and fellow American William Sawyer brought lamps to market around the same time as Edison. All consisted of a sealed, glass bulb containing a resistive filament. When placed in an electrical circuit, the heat generated by its resistance caused the filament to glow. The bulb was evacuated of air to prevent the filament from burning. Electric light was already commonplace in large cities in the form of electric arc lamps, used to illuminate large public spaces. All these inventors were trying to “subdivide the light,” drawing from the flaming arc a spark small enough to enter the home and replace gas lamps with a light source that was safer, cleaner, and brighter. What Edison — or, more correctly, the industrial lab which Edison headed — did, however, was more than merely to create a light source. He – it – built an entire inter-operable electrical system for home lighting – generators, transmission wires, transformers, and so forth, of which the bulb was only the most obvious and visible component. The presence of Edison’s name in his power companies was not a mere genuflection to the great inventor, as was the case with Bell Telephone. Edison proved himself not only an inventor strictly speaking, but also an able system builder.1 To that end, his lab continued to tinker with the various components of electric lighting even after their early successes. Early Edison lamp As part of these researches, some time in 1883, Edison (or perhaps one of his employees) decided to seal a metal plate into the incandescent lamp, along with the filament.
The accounts of why he did this do not present a clear and unambiguous picture. But it was probably an attempt to alleviate the problem of lamp blackening: the tendency of the glass interior of the bulb to accumulate a mysterious dark substance over time. Edison (if it was him) probably hoped that the blackening particles could be drawn off onto the electrified plate. To his surprise, however, he found that when the plate was wired into a circuit with the positive end of the filament, a current flowed that was directly proportional to the intensity of the filament’s glow. When it was connected to the negative end of the filament, nothing happened. Edison believed this effect, later dubbed the Edison effect, could be used to measure or even regulate the “electro-motive force,” or voltage, in an electrical power system. As was his habit, he filed a patent for this “electrical indicator,” then returned to other, more pressing matters.2 The Wireless We now skip forward twenty years, to 1904. At this time, in England, a man named John Ambrose Fleming was working on behalf of the Marconi Company to develop a better receiver for radio waves. It’s important, before we proceed further, to explain what the radio was and was not at this time, both as an instrument and as a practice. In truth, the radio wasn’t even yet radio – it was wireless. (Not until the 1910s did the former term supersede the latter in American English.) Specifically, it was the wireless telegraph – a means of conveying signals in the form of dots-and-dashes from a sender to a recipient. Its primary application was in ship-to-ship and ship-to-shore communication, and as such it was of special interest to the navies of the world. Some few inventors, notably Reginald Fessenden, were by this time experimenting with the notion of a radio-telephone: point-to-point speech communication over the air via a continuous wave. Not for another fifteen years, though, would broadcasting in the modern sense emerge: the intentional transmission of news, stories, music, and other programming to a wide audience. Until then, the omni-directional nature of radio signals was mainly a problem to be overcome, not a feature to be exploited. The radio equipment that existed at this time was well-adapted to sending Morse code, and ill-adapted to anything else. Transmitters generated “Hertzian” waves by sending a spark across a gap in a circuit. The signal thus propagated through the ether was accompanied by a dirty burp of static. Receivers detected this emission via a coherer: a collection of metal filings in a glass tube that cohered into a contiguous mass, thus completing a circuit, when stimulated by radio waves. The glass then had to be tapped to decohere the filings and reset the receiver for the next signal – at first by hand, but it did not take long to come up with automated tapping devices. Just coming into use in 1905 were crystal detectors, also known as “cat’s whisker” detectors. It turned out that by simply touching a wire to certain crystals such as silicon, iron pyrite, and galena (lead ore), one could pull a radio signal from the air. The resulting radio receivers were cheap, compact and very easy for anyone to try, and they stimulated a widespread amateur wireless movement, primarily among young men. The sudden surge in traffic that resulted raised a problem because of the shared nature of the radio commons.
Not only did the innocent chatter of amateur “hams” accidentally interfere with naval communications, but some miscreants went so far as to send out false naval orders and distress signals. It became inevitable that the state would intervene. As Ambrose Fleming himself wrote (no, we haven’t forgotten about him), the introduction of crystal detectors3 …was followed at once by an outburst of irresponsible radiotelegraphy at the hands of innumerable electrical amateurs and students which required the firm intervention of National and International legislation to keep it within the bounds of reason and safety. From the peculiar electrical properties of these little crystals would emerge, in due time, yet a third generation of digital switch to follow the relay and the tube, the switch that dominates our world today. But everything in its proper place. We have surveyed the stage; now let us return our attention to the actor who has just stepped into the footlights: Ambrose Fleming, England, 1904. Valve In 1904 Fleming was a professor of electrical engineering at University College, London, but also a consultant for the Marconi Company. Marconi’s initial interest in recruiting him was to get his expert advice on the construction of the power plant at a new shore station, but soon after he took on the problem of building a better detector. Fleming in 1890 Everyone knew that the coherer was a poor detector in terms of sensitivity, and the magnetic detector that Marconi had devised was not much better. In order to help him find a successor, Fleming set out in the first instance to build a sensitive circuit for detecting Hertzian waves. If not a practical detector itself, such a device would be useful for further investigations. To build such a thing, he needed a way to continuously measure the strength of the current generated by the incoming waves, in contrast to the discontinuous coherer (it was either on – with the filings cohered – or not).4 But known devices for measuring current strength – galvanometers – required a direct (unidirectional) current to operate. The alternating current induced by radio waves reversed directions so quickly that it would produce no measurement at all. Fleming then remembered a handful of curiosities he had sitting in a closet – Edison indicator lamps. In the 1880s, he had been a consultant for Edison Electric Light Company of London, and worked on the lamp blackening problem. During that time he received copies of Edison’s indicator, probably from William Preece, chief electrical engineer of the British Post Office, who had just come back from an electrical exhibition in Philadelphia. (Remember that it was the norm at this time, outside the United States, for post offices to control the telegraph and telephone, and therefore to be centers of electrical engineering expertise). Later, in the 1890s, Fleming did his own studies on the Edison effect using the lamps acquired from Preece. He showed that the effect consisted in a unidirectional current flow: negative charge could flow from the hot filament to the cold electrode, but not vice versa. But not until 1904, when presented with the problem of radio wave detection, had he realized that this fact could have any practical use. The Edison indicator would allow only the forward surges of alternating current to crash over the gap between filament and plate, creating a steady one-way flow on the far side.
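A little arithmetic makes the problem, and Fleming’s solution, concrete. Averaged over a full cycle, an alternating current sums to zero, which is why the galvanometer alone could register nothing; pass only the positive half-cycles through an ideal one-way valve, and the average settles at a steady value of about 1/π (roughly a third) of the peak. The snippet below is just a numerical illustration of that idealized picture, not a model of the actual valve.

```python
# Idealized illustration of why rectification made the radio signal
# measurable on a DC galvanometer: raw alternating current averages to
# zero over a cycle, but the half-wave rectified current does not.
import math

N = 100_000  # samples across one full cycle of a sine-wave current
current = [math.sin(2 * math.pi * k / N) for k in range(N)]

mean_ac = sum(current) / N                               # ~0.0
mean_rectified = sum(max(i, 0.0) for i in current) / N   # ~1/pi ~= 0.318

print("mean of the raw AC current: %+.4f" % mean_ac)
print("mean after a one-way valve: %+.4f (1/pi = %.4f)" % (mean_rectified, 1 / math.pi))
```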
Fleming grabbed one of the bulbs, attached it in series with a mirror galvanometer, and switched on a spark transmitter – et voila, the mirror turned and the light beam moved on the scale. It worked. He could precisely measure the incoming radio signal. Prototype Fleming valves. The anode is set in the middle of the loop of filament (the hot cathode). Fleming called his invention a “valve”, since it acted as a one-way gate for electricity. In more general electrical engineering terms, it was a rectifier – a means of transforming an alternating current into a direct one (rectifying it, i.e. straightening it out). Finally, it was called a diode, since it contained two electrodes: the hot cathode (the filament) which emitted electricity and the cold anode (the plate) which received it. Fleming made several refinements to the design, but in its essence it was no different from the indicator lamp built by Edison.5 Its transformation into a new kind of thing was, as we have seen before, the result of a change in mental state. A change in the world of ideas inside Fleming’s head, not in the world of stuff, outside of it. By itself, Fleming’s valve was a useful object. It was the best field test device yet found for measuring radio signals, and a reasonable detector in its own right. But it did not shake the world. The explosive growth of electronics came only after an American, Lee de Forest, added a third electrode, making the valve into a relay. Audion Lee de Forest had an unusual upbringing for a Yale man. His father, the Reverend Henry de Forest, was a Civil War veteran from New York, a Congregationalist pastor, and a fervent believer in his mission as a man of God to spread the light of knowledge and justice. As such, he dutifully took up the call when invited to the presidency of Talladega College in Alabama. Talladega had been founded by the New York-based American Missionary Association after the Civil War, with a mission to educate and edify the local black population.6 There young Lee found himself caught between two stones: picked on by the local black boys for being a homely and cowardly weakling; shunned by the local white boys for being a Yankee meddler. Nonetheless, the younger de Forest developed a firm confidence in himself. He found that he had more than a little skill as a mechanic and a tinkerer – his scale-model locomotive became a local wonder. Already as a teenager in Talladega he knew that he would make his way in the world through his inventions. Later, as a young man about town in New Haven, the religious convictions of this pastor’s son fell away — worn away by exposure to Darwinism, then shorn off in one fatal blow by the unexpected death of his father. But the core, unshakable sense of his own destiny remained – de Forest believed himself to be a man of genius, and aimed to make himself another Nikola Tesla – a wealthy, famous, and mystical magician of the electric age. His classmates at Yale, on the other hand, believed him to be a conceited windbag. He may well be the least likable human being to feature in our story thus far.7 De Forest ca. 1900 By the time he completed his Ph.D. at Yale in 1899, de Forest had set his heart on the emerging art of the wireless as his path to fame and fortune. Over the coming decades he pursued that path with great determination and resolve, but rather less scruple. It began with de Forest and a partner, Ed Smythe, working together in Chicago. 
Smythe kept de Forest in room and board with regular five-dollar payments, and together they developed their own radio detector, consisting of two metal leads connected by a paste that de Forest called “goo.” But De Forest was impatient for the rewards of his genius. He ditched Smythe and teamed up with a shady New York financier named Abraham White (né, ironically, Schwartz), to form the De Forest Wireless Telegraph Company. The actual operations of the company were incidental to both protagonists: White concentrated on using the public’s ignorance to line his pockets. He flogged the stock of the new company relentlessly, and brought in millions from wide-eyed investors afraid of missing out on the radio boom. Meanwhile De Forest, amply funded by the “suckers”8, focused on proving his genius through the development of a new American system of wireless (in contrast to the European systems developed by Marconi and others). Unfortunately for that American system, however, de Forest’s “goo” detector didn’t actually work very well. He solved that problem in the short term by borrowing the design of Reginald Fessenden’s (patented) “liquid barretter” detector – two platinum wires immersed in a sulfuric acid bath. Fessenden soon filed suit for patent infringement – a suit he would clearly win. De Forest could not rest until he had devised a new detector that was unequivocally his own. In the autumn of 1906, he announced that he had done so. Before two separate meetings of the American Institute of Electrical Engineers, de Forest described his new wireless detector, which he dubbed the “Audion.” Its actual provenance is, however, rather dubious. For some time, much of de Forest’s effort to build a new detector had centered on passing a current through a Bunsen burner flame, which he believed could act as an asymmetric conductor. As far as can be told, this idea had no merit.9 Then, at some point in 1905, he learned about Fleming’s valve. De Forest convinced himself that the valve and his Bunsen burner devices were in principle the same: simply replace the flame with a hot filament, encase it in a glass bulb to contain the gas, and you had the valve. He then developed a series of patents that recapitulated the ancestry of the Fleming valve via his gaseous flame detectors. In this way he evidently thought to give himself priority of invention over Fleming’s U.S. patent, since his Bunsen burner work predated it (going all the way back to 1900). Whether this was self-delusion or simple fraud is impossible to tell, but it all culminated in de Forest’s patent of August 1906, for: “an evacuated vessel of glass… having two separated electrodes, between which intervenes the gaseous medium which when sufficiently heated or otherwise made highly conductive forms the sensitive element…” The equipment and behavior described is Fleming’s; the explanation for its function, de Forest’s. De Forest would lose this patent suit, too, though it would take ten years.10 The impatient reader may begin to wonder: why are we spending so much time on this man, whose self-declared genius seemed to consist largely in passing off the ideas of others as his own? The reason is the transformation that the Audion underwent in the last few months of 1906. De Forest, by this point, was out of a job. White and his partners had avoided responsibility for the Fessenden suit by creating a new company, United Wireless, and leasing the assets of American De Forest to that new company for $1.
De Forest was cast off with $1000 in severance pay and a few apparently useless patents, including those for his Audion. Having accustomed himself to a fairly lavish lifestyle, he now found himself in serious financial difficulty, and was desperate to turn the Audion into a big success. To understand what happened next, it’s important to realize that De Forest believed that, in contradistinction to Fleming’s rectifier, he had invented a relay. He had set up his Audion by hooking a battery to the cold plate of the valve, and believed that the signal in the antenna circuit (connected to the hot filament) was modulating the more powerful current in that battery circuit. In fact he was quite wrong: there were not two circuits at all; the battery simply shifted the signal from the antenna, it did not amplify it. However this false belief proved critical, because it led de Forest to start experimenting with a third electrode in the bulb, to more completely separate the two circuits of his “relay”. At first he added this second cold electrode side-by-side with the first, but then, perhaps inspired by the sort of control mechanisms used by physicists to channel the rays in cathode-ray tubes, he moved it between the filament and the original plate. Concerned that this would block the flow of electricity, he then changed the third electrode’s shape from a plate to a wiggly piece of wire that resembled a gridiron – he called this the grid. 1908 Audion triode. The (broken) filament on the left is the cathode, the wiggly bit of wire is the grid, and the rounded sheet of metal is the anode. Note that it still has a screw base for a socket, like an ordinary light bulb. Here we have a true relay. A weak current (such as that from a radio antenna) applied to the grid could control a much more powerful current between filament and plate, by repelling the charged particles trying to pass from one to the other. This would allow it to act as a much more powerful detector than the valve, since it could not just rectify but also amplify the radio signal. And like the valve (and unlike the coherer) it could produce a continuous signal, allowing not only radio telegraphy but also radio telephony (and later the broadcasting of voice and music). In practice, however, it didn’t actually work very well. De Forest’s Audions were finicky, prone to burn out quickly, lacking in uniformity of manufacture, and generally ineffective as amplifiers. It required bespoke electrical tuning to find the right parameters to get a given Audion to function at all. Nonetheless, de Forest believed in his invention. To promote it, he formed a new venture, the De Forest Radio Telephone Company, but managed only a trickle of sales. His biggest prize was a sale of equipment to the Navy for intra-fleet telephony during the cruise of the Great White Fleet – but the fleet commander, unable to take the time to get de Forest’s receivers and transmitters working and to train his crews in their use, had them packed up and put in storage. Moreover, De Forest’s new company, presided over by a disciple of Abraham White, was no more scrupulous than his last; and so to add to his troubles he soon found himself under indictment for fraud.11 So, for five years the Audion went nowhere. Once again the telephone would play a critical role in the development of a digital switch, this time to rescue a promising but unproven technology from the brink of obscurity. The Telephone, Again The long-distance network was the central nervous system of AT&T.
Tying together its many local operating companies, it provided a crucial competitive advantage after the expiration of the core Bell patents. By joining the AT&T network, a new customer could, in theory, reach any of his or her fellow subscribers, hundreds or thousands of miles away – though in practice long-distance calls were rare. The network was also the material basis for AT&T’s all-encompassing ideology of “One Policy, One System, Universal Service.” But as the second decade of the twentieth century began, that network was reaching its physical limits. As telephone wires stretched longer and longer, the signal that passed through them became weaker and noisier, until speech became entirely incomprehensible. Because of this, there were in fact two AT&T networks in the United States, separated by the continental divide. For the eastern network, New York City was the stake in the ground, mechanical repeaters and loading coils the leash that defined how far the human voice could roam. But these technologies could only do so much. Loading coils altered the electrical properties of the telephone circuit in order to reduce attenuation at voice frequencies – but they could only reduce it, not eliminate it. Mechanical repeaters (nothing more than a telephone speaker coupled to an amplifying microphone) added noise with each repetition. A 1911 New York to Denver line stretched this leash to its absolute limit. To span the entire continent was beyond consideration. Yet in 1909, John J. Carty, AT&T’s Chief Engineer, had publicly promised to do exactly that. And he promised to do it within five years: in time for the Panama–Pacific International Exposition set to take place in San Francisco in 1915. The first to make such a venture conceivable, with a new electronic telephone amplifier, was not an American, but the scion of a wealthy Viennese family with a scientific bent. As a young man, Robert von Lieben bought a telephone manufacturing company with the aid of his parents’ wealth, and set out to develop an amplifier for telephone conversations. By 1906 he had built a relay based on cathode-ray tubes, a common device by that time in physics experiments (and later the basis for the dominant video screen technology of the twentieth century). The weak incoming signal controlled an electromagnet that bent the cathode ray beam, modulating the stronger current in the main circuit. By 1910 von Lieben and his colleagues, Eugen Reisz and Sigmund Strauss, had learned about the de Forest Audion, and replaced the magnet with a grid inside the tube to control the flow of cathode rays – this was a much more effective design, and surpassed anything developed in the U.S. to date. The German telephone network soon adopted the von Lieben amplifier. In 1914, it enabled a nervous call from the East Prussian army commander to German staff headquarters, 1,000 kilometers away in Koblenz. This in turn led the German Chief of Staff to dispatch Generals Hindenburg and Ludendorff to the East, to enduring fame, and with weighty consequences. The same amplifiers later connected German headquarters with field armies as far south and east as Macedonia and Romania.12 Replica of the mature form of von Lieben’s cathode-ray relay. The cathode is at bottom, the anode is the coil at top, and the grid is the circular metal foil in the middle. But the barriers of language, geography, and war prevented this design from reaching the U.S. before it was overtaken by the developments that follow.
De Forest, meanwhile, had left his failing Radio Telephone Company, in 1911, and fled to California. There he took a position at the Federal Telegraph Company in Palo Alto, founded by Stanford graduate Cyril Elwell.13 Nominally, de Forest was assigned to work on an amplifier in order to generate a louder output signal from Federal’s radio receiver (called a “tikker”). In practice, he, Herbert van Ettan (a skilled telephone engineer), and Charles Logwood (designer of the tikker) focused instead on building a telephone amplifier to secure for themselves (not Federal) a rumored $1 million prize from AT&T. To this end, De Forest brought the Audion out of cold storage, and by the summer of 1912 he and his colleagues had a device they were ready to show to the phone company. It consisted of several Audions in sequence, to create multiple amplification stages, plus several other auxiliary components. It worked, after a fashion – it could amplify an audio signal well enough to hear the drop of a handkerchief, or the tick of a pocket watch. But only at currents and voltages too low to be at all useful for telephony. When pressed harder, a blue glow appeared inside the Audions and the signal turned to noise. The telephone men were sufficiently intrigued, however, to bring the device in to see what their engineers would make of it. It so happens that one of them, a young physicist named Harold Arnold, knew exactly how the amplifier from Federal Telegraph could be set to rights. The time has come to discuss how it is that the valve and Audion actually worked. The crucial knowledge for explaining their function came from the Cavendish Lab in Cambridge – the intellectual center of the new electron physics. There, in 1899, J.J. Thomson had shown convincingly via experiments on cathode ray tubes that a particle with mass, later known as the electron, carried the current from cathode to anode. Over the next few years, Owen Richardson, a colleague of Thomson’s, developed this basic premise into a mathematical theory of thermionic emission.14 Ambrose Fleming, an academic engineer who worked just a short train ride away from Cambridge, was familiar with this body of work. Therefore it was clear to him that his valve functioned by thermionic emission of electrons from the heated filament, which then crossed the vacuum gap to the cold anode. But the vacuum in the indicator lamp was far from complete: an ordinary light bulb did not require such a thing; it was sufficient to remove enough oxygen to prevent the combustion of the filament. Fleming therefore realized to make the valve work as well as possible, it should be evacuated beyond normal levels, in order to prevent residual gas from interfering with the passage of electrons. De Forest, on the other hand, did not realize this. Because he came to the valve and Audion by way of his Bunsen burner experiments, he believed exactly the opposite – that hot, ionized gas was the working fluid of the device, and that too-perfect evacuation would destroy its function. This was the reason for the Audion’s inconsistent and disappointing performance as a radio receiver, and for the fatal blue glow.15 AT&T’s Arnold was perfectly placed to correct de Forest’s error. A physicist who had studied under Robert Millikan at the University of Chicago, he was recruited specifically to apply his knowledge of the new electron physics to the coast-to-coast telephony problem. 
He knew that the Audion tube would function best at a near-perfect vacuum, knew that the newest pump designs could achieve that, knew that a new kind of oxide-coated filament along with a larger plate and grid would also help increase the electron flow. In short, he transformed the Audion into the vacuum tube, wonder-worker of the electronic age.16 AT&T now had the powerful amplifier it needed to build its transcontinental line; it lacked only the legal rights to use it. Its representatives remained diffident in the conversations with de Forest, but opened separate negotiations through a third-party lawyer, who managed to acquire the rights to the Audion as a telephone amplifier for $50,000 (roughly $1.25 million in 2017 dollars). The New York-San Francisco line opened right on time17, though as a triumph of technical virtuosity and corporate publicity rather than of human communication; rates were so exorbitant that hardly anyone would use it. An Electronic Age The true vacuum tube formed the root for a whole new tree of electronic components. As with the relay, so too did the vacuum tube diversify and diversify again, as engineers found ways to tweak the design just so to suit the needs of a particular problem. The growth of -odes did not end with diodes and triodes. It continued with the tetrode, which added an additional grid to sustain amplification as the number of elements in the circuit grew. Pentodes, heptodes, even octodes, followed. There were thyratrons filled with mercury vapor, which glowed an eerie blue; miniaturized tubes as small as a pinky finger, or even (eventually) an acorn; indirectly heated tubes to prevent the hum of an alternating current power source from disturbing the signal. The Saga of the Vacuum Tube, a book describing the growth of the tube industry to 1930, references on the order of 1,000 different models by name in its index, though many were bootleg knock-offs from fly-by-night independent brands: Alltron, Perfectron, Supertron, Voltron, etc.18 Tubes of all shapes and sizes: diodes, triodes, pentodes, etc. Contrast the now-standard pins with the connections on the early Audion. Even more important than diversity of forms was the diversity of applications enabled by the vacuum tube. Regenerative circuits transformed the triode into a transmitter – a transmitter that generated smooth, continuous sine waves, with no noisy spark, and thus able to transmit sound perfectly. With coherer and spark in 1901, Marconi was barely able to heave the smallest fragment of Morse code across the narrowest part of the Atlantic. In 1915, with the vacuum tube as transmitter and receiver, AT&T was able to project the human voice from Arlington, Virginia to Honolulu, over twice the distance. By the early 1920s they were combining long-distance telephony with high-quality sound broadcasting to create the first radio networks. By such means would entire nations soon bend their ear to a single voice, be it that of a Roosevelt, or a Hitler. Moreover, the ability to create transmitters tuned to a precise, stable pitch also allowed telecommunications engineers to finally realize the dream of frequency multiplexing, which had lured Alexander Graham Bell, Edison, and others forty years before. By 1923, AT&T had a ten-channel voice line from New York to Pittsburgh.
The ability to carry many voices on a single copper wire would greatly reduce the cost of long-distance calling, which had always been too expensive to use for all but the wealthiest of individuals and businesses.19 Once they saw what vacuum tubes were capable of, AT&T sent their lawyers back to buy more rights from de Forest, and then still more – to secure the rights to the application of the Audion in all imaginable fields, they paid him a total of $390,000, roughly $7.5 million today.20 Given their evident versatility, why did vacuum tubes not dominate the first generation of computers in the same way they dominated radio and other telecommunications equipment? It was obvious that a triode could act as a digital switch in the same fashion as a relay; so obvious that de Forest was convinced that he had built a relay before he actually managed to do so. And the triode was far more responsive than the traditional electromechanical relay, because there was no need to physically move an armature. Whereas a relay typically took several milliseconds to switch on or off, the effect on the flow from cathode to anode from a change in the electrical potential on the grid was nearly instantaneous. However, tubes had a clear disadvantage vis-a-vis relays: their tendency, like their ancestor the incandescent bulb, to burn out. The lifetime of de Forest’s original Audions was so poor – a mere 100 hours or so – that he had a back-up filament placed in the bulb to be wired up after the first inevitably failed. This was exceptionally bad, but even later, high-quality tubes could not be expected to last for more than a few thousand hours of normal use. In a computer with thousands of tubes, whose computations might take hours to complete, this was a serious problem: a machine with several thousand tubes, each lasting a few thousand hours on average, could expect a tube failure every hour or two. Relays, by contrast, were, in the words of George Stibitz, “awesomely reliable.” So much so that he claimed that21 if a set of U-type relays had started in the year 1 A.D. to turn a contact on and off once per second, they still would be clicking away reliably. Their first contact failure, or misfire, would not be due until more than a thousand years from now, around the year 3000. Moreover, no experience existed with large electronic circuits comparable to that of telephone engineers with large electromechanical circuits. Radio receivers and other electronic equipment might contain five or ten tubes, but not hundreds or thousands. No one knew whether a computer with, say, 5,000 tubes could be made to work. By going with relays over tubes, computer designers made the safe, conservative choice. In our next installment we will see how, and why, these doubts were overcome. Sources Hugh G.J. Aitken, The Continuous Wave (1985) J. A. Fleming, The Thermionic Valve (1919) Anton A. Huurdeman, The Worldwide History of Telecommunications (2003) Paul Israel, Edison: A Life of Invention (1998) Tom Lewis, The Empire of the Air (1991) Gerald F. J. Tyne, Saga of the Vacuum Tube (1977)

The Era of Fragmentation, Part 4: The Anarchists

Between roughly 1975 and 1995, access to computers accelerated much more quickly than access to computer networks. First in the United States, and then in other wealthy countries, computers became commonplace in the homes of the affluent, and nearly ubiquitous in institutions of higher education. But if users of those computers wanted to connect their machines together – to exchange email, download software, or find a community where they could discuss their favorite hobby – they had few options. Home users could connect to services like CompuServe. But, until the introduction of flat monthly fees in the late 1980s, such services charged by the hour at rates relatively few could afford. Some university students and faculty could connect to a packet-switched computer network, but many more could not. By 1981, only about 280 computers had access to ARPANET. CSNET and BITNET would eventually connect hundreds more, but they only got started in the early 1980s. At that time the U.S. counted more than 3,000 institutions of higher education, virtually all of which would have had multiple computers, ranging from large mainframes to small workstations. Both communities, home hobbyists and those academics who were excluded from the big networks, turned to the same technological solution to connect to one another. They hacked the plain-old telephone system, the Bell network, into a kind of telegraph, carrying digital messages instead of voices, and relaying messages from computer to computer across the country and the world. These were among the earliest peer-to-peer computer networks. Unlike CompuServe and other such centralized systems, onto which home computers latched to drink down information like so many nursing calves, these networks spread information like ripples on a pond, starting from anywhere and ending up everywhere. Yet they still became rife with disputes over politics and power. In the late 1990s, as the Internet erupted into popular view, many claimed that it would flatten social and economic relations. By enabling anyone to connect with anyone, the middle men and bureaucrats who had dominated our lives would find themselves cut out of the action. A new era of direct democracy and open markets would dawn, where everyone had an equal voice and equal access. Such prophets might have hesitated had they reflected on what happened on Usenet and Fidonet in the 1980s. Be its technical substructure ever so flat, every computer network is embedded within a community of human users. And human societies, no matter how one kneads and stretches, always seem to keep their lumps. Usenet In the summer of 1979, Tom Truscott was living the dream life for a young computer nerd. A grad student in computer science at Duke University with an interest in computer chess, he landed an internship at Bell Labs’ New Jersey headquarters, where he got to rub elbows with the creators of Unix, the latest craze to sweep the world of academic computing. The origins of Unix, like those of the Internet itself, lay in the shadow of American telecommunications policy. Ken Thompson and Dennis Ritchie of Bell Labs decided in the late 1960s to build a leaner, much pared-down version of the massive MIT Multics system to which they had contributed as software developers. The new operating system quickly proved a hit within the labs, popular for its combination of low overhead (allowing it to run on even inexpensive machines) and high flexibility. However, AT&T could do little to profit from their success.
A 1956 agreement with the Justice Department required AT&T to license non-telephone technologies to all comers at a reasonable rate, and to stay out of all business sectors other than supplying common carrier communications. So AT&T began to license Unix to universities for use in academic settings on very generous terms. These early licensees, who were granted access to the source code, began building and selling their own Unix variants, most notably the Berkeley Software Distribution (BSD) Unix created at the University of California's flagship campus. The new operating system quickly swept academia. Unlike other popular operating systems, such as the DEC TENEX / TOPS-20, it could run on hardware from a variety of vendors, many of them offering very low-cost machines. And Berkeley distributed the software for only a nominal fee, in addition to the modest licensing fee from AT&T.1 Truscott felt that he sat at the root of all things, therefore, when he got to spend the summer as Ken Thompson's intern, playing a few morning rounds of volleyball before starting work at midday, sharing a pizza dinner with his idols, and working late into the night slinging code on Unix and the C programming language. He did not want to give up the connection to that world when his internship ended, and so as soon as he returned to Duke in the fall, he figured out how to connect the computer science department's Unix-equipped PDP 11/70 back to the mothership in Murray Hill, using a program written by one of his erstwhile colleagues, Mike Lesk. It was called uucp – Unix to Unix copy – and it was one of a suite of "uu" programs new to the just-released Unix Version 7, which allowed one Unix system to connect to another over a modem. Specifically, uucp allowed one to copy files back and forth between the two connected computers, which let Truscott exchange email with Thompson and Ritchie. Undated photo of Tom Truscott It was Truscott's fellow grad student, Jim Ellis, who had installed the new Version 7 on the Duke computer, but even as the new upgrade gave with one hand, it took away with the other. The news program distributed by the Unix users' group, USENIX, which would broadcast news items to all users of a given Unix computer system, no longer worked on the new operating system. Truscott and Ellis decided they would replace it with their own Version 7-compatible news program, with more advanced features, and return their improved software back to the community for a little bit of prestige. At this same time, Truscott was also using uucp to connect with a Unix machine at the University of North Carolina ten miles to the southwest in Chapel Hill, and talking to a grad student there named Steve Bellovin.2 Bellovin had also started building his own news program, which notably included the concept of topic-based newsgroups, to which one could subscribe, rather than only having a single broadcast channel for all news. Bellovin, Truscott and Ellis decided to combine their efforts and build a networked news system with newsgroups that would use uucp to share news between sites. They intended it to provide Unix-related news for USENIX members, so they called their system Usenet. Duke would serve as the central clearinghouse at first, using its auto-dialer and uucp to connect to each other site on the network at regular intervals, in order to pick up its local news updates and deposit updates from its peers.
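The flow of news under this arrangement is easy to model. The sketch below is not the A News program described next, nor any real Usenet software; it is a short illustrative Python toy, with invented article IDs, showing the two ideas at work: each site remembers the articles it has already seen, subscribes only to the groups it wants, and every simulated uucp session copies across whatever the other side is missing, so a post ripples outward from any site to every interested peer.

```python
# Toy model of Usenet-style store-and-forward news exchange (illustration only).
class Site:
    def __init__(self, name, subscriptions):
        self.name = name
        self.subscriptions = set(subscriptions)  # newsgroups this site wants
        self.articles = {}                       # article_id -> (group, text)

    def post(self, article_id, group, text):
        """A local user posts a new article."""
        self.articles[article_id] = (group, text)

def uucp_session(a, b):
    """One simulated modem call: each side copies the articles it lacks,
    skipping duplicates (by ID) and groups it does not subscribe to."""
    for src, dst in ((a, b), (b, a)):
        for art_id, (group, text) in src.articles.items():
            if art_id not in dst.articles and group in dst.subscriptions:
                dst.articles[art_id] = (group, text)

# A hub polling two other sites at regular intervals:
duke = Site("duke", {"net.general", "net.v7bugs", "net.jokes"})
unc = Site("unc", {"net.general", "net.v7bugs"})
labs = Site("labs", {"net.v7bugs"})           # only wants the bug reports

unc.post("1@unc", "net.v7bugs", "tty driver hangs under load")
duke.post("1@duke", "net.jokes", "a PDP-11 walks into a bar...")

for peer in (unc, labs):
    uucp_session(duke, peer)                  # the nightly round of calls

print(sorted(labs.articles))                  # ['1@unc'] -- the joke never arrives
```

Duplicate suppression by article ID is what keeps a flooded message from circulating forever once every site has a copy, and per-site subscriptions are what let a small machine carry only the traffic it cares about.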
Bellovin wrote the initial code, but it used shell scripts that operated very slowly, so Stephen Daniel, another Duke grad student, rewrote the program in C. Daniel's version became known as A News. Ellis promoted the program at the January 1980 Usenix conference in Boulder, Colorado, and gave away all eighty copies of the software that he had brought with him. By the next Usenix conference that summer, the organizers had added A News to the general software package that they distributed to all attendees. The creators described the system, cheekily, as a "poor man's ARPANET." Though one may not be accustomed to thinking of Duke as underprivileged, it did not have the clout in the world of computer science necessary at the time to get a connection to that premier American computer network. But access to Usenet required no one's permission, only a Unix system, a modem, and the ability to pay the phone bills for regular news transfers, requirements that virtually any institution of higher education could meet by the early 1980s. Private companies also joined up with Usenet, and helped to facilitate the spread of the network. Digital Equipment Corporation (DEC) agreed to act as an intermediary between Duke and UC Berkeley, footing the long-distance telephone bills for inter-coastal data transfer. This allowed Berkeley to become a second, west-coast hub for Usenet, connecting up UC San Francisco, UC San Diego, and others, including Sytek, an early LAN business. The connection to Berkeley, an ARPANET site, also enabled cross-talk between ARPANET and Usenet (after a second re-write by Mark Horton and Matt Glickman to create B News). ARPANET sites began picking up Usenet content and vice versa, though ARPA rules technically forbade interconnection with other networks. The network grew rapidly, from fifteen sites carrying ten posts a day in 1980, to 600 sites and 120 posts in 1983, and 5,000 sites and 1,000 posts in 1987.3 Its creators had originally conceived Usenet as a way to connect the Unix user community and discuss Unix developments, and to that end they created two groups, net.general and net.v7bugs (the latter for discussing problems with the latest version of Unix). However they left the system entirely open for expansion. Anyone was free to create a new group under "net", and users very quickly added non-technical topics such as net.jokes. Just as one was free to send whatever one chose, recipients could also ignore whatever groups they chose: a system could join Usenet and request data only for net.v7bugs, for example, ignoring the rest of the content. Quite unlike the carefully planned ARPANET, Usenet self-organized, and grew in an anarchic way overseen by no central authority. Yet out of this superficially democratic medium a hierarchical order quickly emerged, with a certain subset of highly-connected, high-traffic sites recognized as the "backbone" of the system. This process developed fairly naturally. Because each transfer of data from one site to the next incurred a communications delay, each new site joining the network had a strong incentive to link itself to an already highly-connected node, to minimize the number of hops required for its messages to span the network. The backbone sites were a mix of educational and corporate sites, usually led by one headstrong individual willing to take on the thankless tasks involved in administering all the activity crossing their computer. Gary Murakami at Bell Labs' Indian Hills lab in Illinois, for example, or Gene Spafford at Georgia Tech.
The most visible exercise of the power held by these backbone administrators came in 1987, when they pushed through a re-organization of the newsgroup namespace into seven top-level buckets: comp, for example, for computer-related topics, and rec for recreational topics. Sub-topics continued to be organized hierarchically underneath the "big seven", such as comp.lang.c for discussion of the C programming language, and rec.games.board for conversations about boardgaming. A group of anti-authoritarians, who saw this change as a coup by the "Backbone Cabal," created their own splinter hierarchy rooted at alt, with its own parallel backbone. It included topics that were considered out-of-bounds for the big seven, such as sex and recreational drugs (e.g. alt.sex.pictures)4, as well as quirky groups that simply rubbed the backbone admins the wrong way (e.g. alt.gourmand; the admins preferred the anodyne rec.food.recipes). Despite these controversies, by the late 1980s, Usenet had become the place for the computer cognoscenti to find trans-national communities of like-minded individuals. In 1991 alone, Tim Berners-Lee announced the creation of the World Wide Web on alt.hypertext; Linus Torvalds solicited comp.os.minix for feedback on his new pet project, Linux; and Peter Adkison, due to a post on rec.games.design about his game company, connected with Richard Garfield, a collaboration that would lead to the creation of the card game Magic: The Gathering. FidoNet But even as the poor man's ARPANET spread across the globe, microcomputer hobbyists, with far fewer resources than even the smallest of colleges, were still largely cut off from the experience of electronic communication. Unix, a low-cost, bare-bones option by the standards of academic computing, was out of reach for hobbyists with 8-bit microprocessors, running an operating system called CP/M that barely did anything beyond managing the disk drive. But they soon began their own shoe-string experiments in low-cost peer-to-peer networking, starting with something called bulletin boards. Given the simplicity of the idea and the number of computer hobbyists in the wild at the time, it seems probable that the computer bulletin board was invented independently several times. But tradition gives precedence to the creation of Ward Christensen and Randy Suess of Chicago, launched during the great blizzard of 1978. Christensen and Suess were both computer hobbyists in their early thirties, and members of their local computer club. For some time they had been considering creating a server where computer club members could upload news articles, using the modem file transfer software that Christensen had written for CP/M – the hobbyist equivalent of uucp. The blizzard, which kept them housebound for several days, gave them the impetus to actually get started on the project, with Christensen focusing on the software and Suess on the hardware. In particular, Suess devised a circuit that automatically rebooted the computer into the BBS software each time it detected an incoming caller, a necessary hack to ensure the system was in a good state to receive the call, given the flaky state of hobby hardware and software at the time. They called their invention CBBS, for Computerized Bulletin Board System, but most later system operators (or sysops) would drop the C and call their service a BBS.5 They published the details of what they had built in a popular hobby magazine, Byte, and a slew of imitators soon followed.
Another new piece of technology, the Hayes Modem, fertilized this flourishing BBS scene. Dennis Hayes was another computer hobbyist, who wanted to use a modem with his new machine, but the existing commercial offerings fell into two categories: devices aimed at business customers that were too expensive for hobbyists, and acoustically-coupled modems. To connect a call on an acoustically-coupled modem you first had to dial or answer the phone manually, and then place the handset onto the modem so they could communicate. There was no way to automatically start a call or answer one. So, in 1977, Hayes designed, built, and sold his own 300 bit-per-second modem that would slot into the interior of a hobby computer. Suess and Christensen used one of these early-model Hayes modems in their CBBS. Hayes’ real breakthrough product, though, was the 1981 Smartmodem, which sat in its own external housing with its own built-in microprocessor and connected to the computer through its serial port. It sold for $299, well within reach of hobbyists who habitually spent a few thousand dollars on their home computer setups. The 300 baud Hayes Smartmodem One of those hobbyists, Tom Jennings, set in motion what became the Usenet of BBSes. A programmer for Phoenix Software in San Francisco, Jennings decided in late 1983 to write his own BBS software, not for CP/M, but for the latest and greatest microcomputer operating system, Microsoft DOS. He called it Fido, after a computer he had used at his work, so-named for its mongrel-like assortment of parts. John Madill, a salesman at ComputerLand in Baltimore, learned about Fido and called all the way across the country to ask Jennings for help in tweaking Fido to make it run on his DEC Rainbow 100 microcomputer. The two began a cross-country collaboration on the software, joined by another Rainbow enthusiast, Ben Baker of St. Louis. All three racked up substantial long-distance phone bills as they logged into one another’s machines for late-night BBS chats. With all of this cross-BBS chatter, an idea began to buzz forward from the back of Jennings’ mind, that he could create a network of BBSes that would exchange messages late at night, when long-distance rates were low. The idea was not new. Many hobbyists had imagined that BBSes could route messages in this way, all the way back to Christensen and Suess’ Byte article. But they generally had assumed that for the scheme to work, you would need very high BBS density and complex routing rules, to ensure that all the calls remained local, and thus toll-free, even when relaying messages from coast to coast. But Jennings did some back-of-the-envelope math and realized that, given increasing modem speeds (now up to 1200 bits per second for hobby modems) and falling long-distance costs, no such cleverness was necessary. Even with substantial message traffic, you could pass text between systems for a few bucks per night. Tom Jennings in 2002 (still from the BBS documentary) So he added a new program to live alongside Fido. Between one to two o’clock in the morning, Fido would shut down and FidoNet would start up. It would check Fido’s outgoing messages against a file called the node list. Each outgoing message had a node number, and each entry in the list represented a network node – a Fido BBS – and provided the phone number for that node number. 
If there were pending outgoing messages, FidoNet would dial up each of the corresponding BBSes on the node list and transfer the messages over to the FidoNet program waiting on the other side. Suddenly Madill, Jennings and Baker could collaborate easily and cheaply, though at the cost of higher latency – they wouldn’t receive any messages sent during the day until the late night transfer began. Formerly, hobbyists rarely connected with others outside their immediate area, where they could make toll-free calls to their local BBS. But if that BBS connected into FidoNet, users could suddenly exchange email with others all across the country. And so the scheme proved immensely popular, and the number of FidoNet nodes grew rapidly, to over 200 within a year. Jennings’ personal curation of the node list thus became less and less manageable. So during the first “FidoCon” in St. Louis, Jennings and Baker met in the living room of Ken Kaplan, another DEC Rainbow fan who would take an increasingly important role in the leadership of FidoNet. They came up with a new design that divided North America into nets, each consisting of many nodes. Within each net, one administrative node would take on the responsibility of  managing its local nodelist, accepting inbound traffic to its net, and forwarding those messages to the correct local node. Above the layer of nets were zones, which covered an entire continent. The system still maintained one global nodelist with the phone numbers of every FidoNet computer in the world, so any node could theoretically directly dial any other to deliver messages. This new architecture allowed the system to continue to grow, reaching almost 1,000 nodes by 1986 and just over 5,000 by 1989. Each of these nodes (itself a BBS) likely averaged 100 or so active users. The two most popular applications were the basic email service that Jennings had built into FidoNet and Echomail, created by Jeff Rush, a BBS sysop in Dallas. Functionally equivalent to Usenet newsgroups, Echomail allowed the thousands of users of FidoNet to carry out public discussions on a variety of topics. Echoes, the term for individual groups, had mononyms rather than the hierarchical names of Usenet, ranging from AD&D to MILHISTORY to ZYMURGY (home beer brewing). Jennings, philosophically speaking, inclined to anarchy, and wanted to build a neutral platform governed only by its technical standards6: I said to the users that they could do anything they wanted …I’ve maintained that attitude for eight years now, and I have never had problems running BBSs. It’s the fascist control freaks who have the troubles. I think if you make it clear that the callers are doing the policing–even to put it in those terms disgusts me–if the callers are determining the content, they can provide the feedback to the assholes. Just as with Usenet, however, the hierarchical structure of FidoNet made it possible for some sysops to exert more power than others, and rumors swirled of a powerful cabal (this time headquartered in St. Louis), seeking to take control of the system from the people. In particular, many feared that Kaplan or others around him would try to take the system commercial and start charging access to FidoNet. Of particular suspicion was the International FidoNet Association (IFNA), a non-profit that Kaplan had founded to help defray some of the costs of administering the system (especially the long-distance telephone charges). 
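The routing scheme that came out of that living-room meeting is simple enough to sketch. The toy Python model below is not FidoNet's actual DOS software, and the addresses and phone numbers in it are invented rather than taken from any real nodelist, but it illustrates the zone:net/node style of addressing and the nightly routine: bundle pending mail by the node that actually has to be dialed, handing anything bound for another net to that net's host node.

```python
# Toy model of FidoNet-style addressing and the nightly mail hour (illustration only;
# node addresses and phone numbers are invented, not from any real nodelist).
NODELIST = {
    "1:100/1":  "1-314-555-0100",   # host node for net 100 (takes inbound mail for its net)
    "1:100/25": "1-314-555-0125",
    "1:125/1":  "1-415-555-0101",   # host node for net 125
    "1:125/40": "1-415-555-0140",
}

def route(sender, destination):
    """Within our own net, dial the destination directly; mail for another
    net goes to that net's host node (node 1) for local delivery."""
    sender_net = sender.rsplit("/", 1)[0]        # e.g. "1:100"
    dest_net = destination.rsplit("/", 1)[0]
    return destination if sender_net == dest_net else dest_net + "/1"

def mail_hour(sender, outbound):
    """Once a night: group pending messages by the node we must call,
    then place one long-distance call per bundle."""
    bundles = {}
    for destination, text in outbound:
        bundles.setdefault(route(sender, destination), []).append((destination, text))
    for dial_target, messages in bundles.items():
        print(f"dialing {dial_target} at {NODELIST[dial_target]}: {len(messages)} message(s)")

mail_hour("1:100/25", [
    ("1:100/1", "see you at FidoCon"),         # same net: dialed directly
    ("1:125/40", "Rainbow 100 driver patch"),  # other net: relayed via host 1:125/1
])
```

Bundling by dialed node is what kept the phone bills down: one call per neighbor per night, however many messages had piled up in the queue.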
In 1989 the suspicions swirling around IFNA seemed to be realized when a group of its leaders pushed through a referendum to make every FidoNet sysop a member of IFNA and turn it into the official governing body of the net, responsible for its rules and regulations. The measure failed, and IFNA was dissolved instead. Of course, the absence of any symbolic governing body did not eliminate the realities of power; the regional nodelist administrators instead enacted policy on an ad hoc basis. The Shadow of the Internet From the late 1980s onward, FidoNet and Usenet gradually fell under the looming shadow of the Internet. By the second half of the 1990s, they had been fully assimilated by it. Usenet became entangled within the webs of the Internet through the creation of NNTP – Network News Transfer Protocol – in early 1986. Conceived by a pair of University of California students (one in San Diego and the other in Berkeley), NNTP allowed TCP/IP network hosts on the Internet to create Usenet-compatible news servers. Within a few years, the majority of Usenet traffic flowed across such links, rather than uucp connections over the plain-old telephone network. The independent uucp network gradually fell into disuse, and Usenet became just another application atop TCP/IP transport. The immense flexibility of the Internet's layered architecture made it easy to absorb a single-application network in this way. Although by the early 1990s several dozen gateways between FidoNet and the Internet existed, allowing the two networks to exchange messages, FidoNet was not a single application, and so its traffic did not migrate onto the Internet in the same way as Usenet. Instead, as people outside academia began looking for Internet access for the first time in the second half of the 1990s, BBSes gradually found themselves either absorbed into the Internet or reduced to irrelevance. Commercial BBSes generally fell into the first category. These mini-CompuServes offered BBS access for a monthly fee to thousands of users, and had multiple modems for accepting simultaneous incoming connections. As commercial access to the Internet became possible, these businesses connected their BBSes to the Internet and began offering access to their customers as part of a subscription package. With more and more sites and services becoming available on the burgeoning World Wide Web, fewer and fewer users signed on to the BBS per se, and thus these commercial BBSes gradually became pure Internet service providers, or ISPs. Most of the small-time hobbyist BBSes, on the other hand, became ghost towns, as users wanting to tap into the Internet flocked to their local ISPs, as well as to larger, nationally known outfits such as America Online. That's all very well, but how did the Internet become so dominant in the first place? How did an obscure academic system, spreading gradually across elite universities for years while systems like Minitel, CompuServe and Usenet were bringing millions of users online, suddenly explode into the foreground, enveloping like kudzu all that had come before it? How did the Internet become the force that brought the era of fragmentation to an end?
Further Reading / Watching
Ronda Hauben and Michael Hauben, Netizens: On the History and Impact of Usenet and the Internet (online 1994, print 1997)
Howard Rheingold, The Virtual Community (1993)
Peter H. Salus, Casting the Net (1995)
Jason Scott, BBS: The Documentary (2005)

Read more
From ACS to Altair: The Rise of the Hobby Computer

[This post is part of “A Bicycle for the Mind.” The complete series can be found here.] The Early Electronics Hobby A certain pattern of technological development recurred many times in the decades around the turn of the twentieth century: a scattered hobby community, tinkering with a new idea, develops it to the point where those hobbyists can sell it as a product. This sets off a frenzy of small entrepreneurial firms, competing to sell to other hobbyists and early adopters. Finally, a handful of firms grow to the point where they can drive down costs through economies of scale and put their smaller competitors out of business. Bicycles, automobiles, airplanes, and radio broadcasting all developed more or less in this way. The personal computer followed this same pattern; indeed, it marks the very last time that a “high-tech” piece of hardware emerged from this kind of hobby-led development. Since that time, new hardware technology has typically depended on new microchips. That is a capital barrier far too high for hobbyists to surmount; but as we have seen, the computer hobbyists lucked into ready-made microchips created for other reasons, but already suited to their purposes. The hobby culture that created the personal computer was historically continuous with the American radio hobby culture of the early twentieth-century, and, to a surprising degree, the foundations of that culture can be traced back to the efforts of one man: Hugo Gernsback. Gernsback (born Gernsbacher, to well-off German Jewish parents) came to the United States from Luxembourg in 1904 at the age of nineteen, shortly after his father’s death. Already fascinated by electrical equipment, American culture, and the fiction of Jules Verne and H.G. Wells, he started a business, the Electro Importing Company, in Manhattan, that offered both retail and mail-order sales of radios and related equipment. His company catalog evolved into a magazine, Modern Electrics, and Gernsback evolved into a publisher and community builder (he founded the Wireless Association of America in 1909 and the Radio League of America in 1915), a role he relished for the rest of his working life.[1] Gernsback (foreground) giving an over-the-air lecture on the future of radio. From his 1922 book, Radio For All, p. 229. The culture that Gernsback nurtured valued hands-on tinkering and forward-looking futurism, and in fact viewed them as two sides of the same coin. Science fiction (“scientifiction,” as Gernsback called it) writing and practical invention went hand in hand, for both were processes for pulling the future into the present. In a May 1909 article in Modern Electrics, for example, Gernsback opined on the prospects for radio communication with Mars: “If we base transmission between the earth and Mars at the same figure as transmission over the earth, a simple calculation will reveal that we must have the enormous power of 70,000 K. W. to our disposition in order to reach Mars,” and went on to propose a plan for building such a transmitter within the next fifteen or twenty years. As science fiction emerged as its own genre with its own publications in the 1920s (many of them also edited by Gernsback), this kind of speculative article mostly disappeared from the pages of electronic hobby magazines. Gernsback himself occasionally dropped in with an editorial, such as a 1962 piece in Radio-Electronics on computer intelligence, but the median electronic magazine article had a much more practical focus. 
Readers were typically hobbyists looking for new projects to build or service technicians wanting to keep up with the latest hardware and industry trends.[2] Nonetheless, the electronic hobbyists were always on the lookout for the new, for the expanding edge of the possible: from vacuum tubes, to televisions, to transistors, and beyond. It’s no surprise that this same group would develop an early interest in building computers. Nearly everyone who we find building (or trying to build) a personal or home computer prior to 1977 had close ties to the electronic hobby community. The Gernsback story also highlights a common feature of hobby communities of all sorts. A subset of radio enthusiasts, seeing the possibility of making money by fulfilling the needs of their fellow hobbyists, started manufacturing businesses to make new equipment for hobby projects, retail businesses to sell that equipment, or publishing businesses to keep the community informed on new equipment and other hobby news. Many of these enterprises made little or no money (at least at first), and were fueled as much by personal passion as by the profit motive; they were the work of hobby-entrepreneurs. It was this kind of hobby-entrepreneur who would first make personal computers available to the public. The First Personal Computer Hobbyists The first electronic hobbyist to take an interest in building computers, whom we know of, was Stephen Gray. In 1966, he founded the Amateur Computer Society (ACS), an organization that existed mainly to produce a series of quarterly newsletters typed and mimeographed by Gray himself. Gray has little to say about his own biography in the newsletter or in later reflections on the ACS. He reveals that he worked as an editor of the trade magazine Electronics, that he lived in Manhattan and then Darien, Connecticut, that he had been trying to build a computer of his own for several years, and little else. But he clearly knew the radio hobby world. In the fourth, February 1967, number of his newsletter, he floated the idea of a “Standard Amateur Computer Kit” (SACK) that would provide an economical starting point for new hobbyists, writing that,[3] Amateur computer builders are now much like the early radio amateurs. There’s a lot of home-brew equipment, much patchwork, and most commercial stuff is just too expensive. The ACS can help advance the state of the amateur computer art by designing a standard amateur computer, or at least setting up the specs for one. Although the mere idea of a standard computer makes the true blue home-brew types shudder, the fact is that amateur radio would not be where it is today without the kits and the off-the-shelf equipment available.[4] By the Spring of 1967, Gray had found seventy like-minded members through advertisements in trade and hobby publications, most of them in the United States, but a handful in Canada, Europe, and Japan. We know little about the backgrounds or motivations of these men (and they were exclusively men), but when their employment is mentioned, they are found at major computer, electronics, or aerospace firms; at national labs; or at large universities. We can surmise that most worked with or on computers as part of their day job. A few letter writers disclose prior involvement in hobby electronics and radio, and from the many references to attempts to imitate the PDP-8 architecture, we can also guess that many members had some association with DEC minicomputer culture. 
It is speculative but plausible to guess that the 1965 release of the PDP-8 might have instigated Gray's own home computer project and the later creation of the ACS. Its relatively low price, compact size, and simple design may have catalyzed the notion that home computers lay just out of reach, at least for Gray and his band of like-minded enthusiasts. Whatever their backgrounds and motivations, the efforts of these amateurs to actually build a computer proved mostly fruitless in these early years. The January 1968 newsletter reported a grand total of two survey respondents who possessed an actual working computer, though respondents as a whole had sunk an average of two years and $650 on their projects ($6,000 in 2024 dollars). The problem of assembling one's own computer would daunt even the most skilled electronic hobbyist: no microprocessors existed, nor any integrated circuit memory chips, and indeed virtually no chips of any kind, at least at prices a "homebrewer" could afford. Both of the two complete computers reported in the survey were built from hand-wired transistor logic. One was constructed from the parts of an old nuclear power system control computer, PRODAC IV. Jim Sutherland took the PRODAC's remains home from his work at Westinghouse after its retirement, and re-dubbed it the ECHO IV (for Electronic Computing Home Operator). Though technically a "home" computer, to borrow an existing computer from work was not a path that most would-be home-brewers could follow. This hardly had the makings of a technological revolution. The other complete "computer," the EL-65 by Hans Ellenberger of Switzerland, on the other hand, was really just an electronic desktop calculator; it could perform arithmetic ably enough, but could not be programmed.[5] The Emergence of the Hobby-Entrepreneur As integrated circuit technology got better and cheaper, the situation for would-be computer builders gradually improved. By 1971, the first, very feeble, home computer kits appeared on the market, the first signs of Gray's "SACK." Though neither used a microprocessor, they took advantage of the falling prices of integrated circuits: the CPU of each consisted of dozens of small chips wired together. The first was the National Radio Institute (NRI) 832, the hardware accompaniment to a computer technician course disseminated by the NRI, and priced at about $500. Unsurprisingly, the designer, Lou Frenzel, was a radio hobby enthusiast, and a subscriber to Stephen Gray's ACS Newsletter. But the NRI 832 is barely recognizable as a functional computer: it had a measly sixteen 8-bit words of read-only memory, configured by mechanical switches (with an additional sixteen bytes of random-access memory available for purchase).[6] The NRI 832. The switches on the left were used to set the values of the bits in the tiny memory. The banks of lights at the top left and right, showing the binary values of the program counter and accumulator, were the only form of output [vintagecomputer.net].
The $750 Kenbak-1 that appeared the same year was nominally more capable, with 256 bytes of memory, though implemented with shift-register chips (accessible one bit at a time), not random-access memory. Indeed, the entire machine had a serial-processing architecture, processing only one bit at a time through the CPU, and ran at only about 1,000 instructions per second—very slow for an electronic computer. Like the NRI 832, it offered only switches as input and only a small panel of display lights for showing register contents as output. Its creator, John Blankenbaker, was a radio lover from boyhood before enrolling as an electronics technician in the Navy. He began working on computers in the 1950s, starting with the Bureau of Standards SEAC. Intrigued by the possibility of bringing a computer home, he tinkered with spare parts for making his own computer for years, becoming his own private ACS. By 1971 he thought he had a saleable device that could be used for teaching programming, and he formed the eponymous "Kenbak" company to sell it.[7] Blankenbaker was the first of the amateur computerists to try to bring his passion to market; the first hobby-entrepreneur of the personal computer. He was not the most successful. I found no records of the sales of the NRI 832, but by Blankenbaker's own testimony, only forty-four Kenbak-1s were sold. Here were home computer kits readily available at a reasonable price, four years before Altair. Why did they fall flat? As we have seen, most members of the Amateur Computer Society had aimed to make a PDP-8 or something like it; this was the most familiar computer of the 1960s and early 1970s, and provided the mental model for what a home computer could and should be. The NRI 832 and Kenbak-1 came nowhere close to the capabilities of a PDP-8, nor were they designed to be extensible or expandable in any way that might allow them to transcend their basic beginnings. These were not machines to stir the imaginative loins of the would-be home computer owner. Hobby-Entrepreneurship in the Open These early, halting steps towards a home computer, from Stephen Gray to the Kenbak-1, took place in the shadows, unknown to all but a few, the hidden passion of a handful of enthusiasts exchanging hand-printed newsletters. But several years later, the dream of a home computer burst into the open in a series of stories and advertisements in major hobby magazines. Microprocessors had become widely available. For those hooked on the excitement of interacting one-on-one with a computer, the possibility of owning their own machine felt tantalizingly close. A new group of hobby-entrepreneurs now tried to make their mark by providing computer kits to their fellow enthusiasts, with rather more success than NRI and Kenbak. The overture came in the fall of 1973, with Don Lancaster's "TV Typewriter," featured on the cover of the September issue of Radio-Electronics (a Gernsback publication, though Gernsback himself was, by then, several years dead). Lancaster, like most of the people we have met in this chapter, was an amateur "ham" radio operator and electronics tinkerer. Though he had a day job at Goodyear Aerospace in Phoenix, Arizona, he figured out how to make a few extra bucks from his hobby by publishing projects in magazines and selling pre-built circuit boards for those projects via a Texas hobby firm called Southwest Technical Products (SWTPC). The 1973 Radio-Electronics TV Typewriter cover.
His TV Typewriter was, of course, not a computer at all, but the excitement it generated certainly derived from its association with computers. One of many obstacles to a useful home computer was the lack of a practical output device: something more useful than the handful of glowing lights that the Kenbak-1 sported, but cheaper and more compact than the then-standard computer input/output device, a bulky teletype terminal. Lancaster's electronic keyboard, which required about $120 in parts, could hook up to an ordinary television and turn it into a video text terminal, displaying up to sixteen lines of thirty-two characters each. Shift-registers continued to be the only cheap form of semiconductor memory, and so that was what Lancaster used for storing the characters to be displayed on screen. Lancaster gave the parts list and schematic for the TV Typewriter away for free, but made money by selling pre-built subassemblies via SWTPC that saved buyers time and effort, and by publishing guidebooks like the TV Typewriter Cookbook.[8] The next major landmark appeared six months later in a ham radio magazine, QST, named after the three-letter ham code for "calling all stations." A small ad touted the availability of "THE TOTALLY NEW AND THE VERY FIRST MINI-COMPUTER DESIGNED FOR THE ELECTRONIC/COMPUTER HOBBYIST" with kit prices as low as $440. This was the SCELBI-8H, the first computer kit based around a microprocessor, in this case the Intel 8008. Its creator, Nat Wadsworth, lived in Connecticut, and became enthusiastic about the microprocessor after attending a seminar given by Intel in 1972, as part of his job as an electrical engineer at an electronics firm. Wadsworth was another ham radio enthusiast, and already enough of a personal computing obsessive to have purchased a surplus DEC PDP-8 at a discount for home use (he paid "only" $2,000, about $15,000 in 2024 dollars). Since his employer did not share his belief in the 8008, he looked for another outlet for his enthusiasm, and teamed up with two other engineers to develop what became the SCELBI-8H (for SCientific ELectronic BIological). Their ads drew thousands of responses and hundreds of orders over the following months, though they ended up losing money on every machine sold.[9] A similar machine appeared several months later, this time as a hobby magazine story, on the cover of the July 1974 issue of Radio-Electronics: "Build the Mark-8 Minicomputer," ran the headline (notice again the "minicomputer" terminology: a PDP-8 of one's own remained the dream). The Mark-8 came from Jonathan Titus, a grad student from Virginia, who had built his own 8008-based computer and wanted to share the design with the rest of the hobby. Unlike SCELBI, he did not sell it as a complete machine or even a kit: he expected the Radio-Electronics reader to buy and assemble everything themselves. That is not to say that Titus made no money: he followed a hobby-entrepreneur business model similar to Don Lancaster's, offering an instructional guidebook for $5, and making some pre-made boards available for sale through a retailer in New Jersey, Techniques, Inc. The 1974 Mark-8 Radio-Electronics cover. The SCELBI-8H and Mark-8 looked much more like a "real" minicomputer than the NRI 832 or Kenbak-1. A hobbyist hungry for a PDP-8-like machine of their own could recognize in this generation of machines something edible, at least.
Both used an eight-bit parallel processor, not an antiquated bit-serial architecture, came with one kilobyte of random-access memory, and were designed to support textual input/output devices. Most importantly, both could be extended with additional memory or I/O cards. These were computers you could tinker with, that could become an ongoing hobby project in and of themselves. A ham radio operator and engineering student in Austin, Texas named Terry Ritter spent over a year getting his Mark-8 fully operational with all of the accessories that he wanted, including an oscilloscope display and cassette tape storage.[10] In the second half of 1974, a community of hundreds of hobbyists like Ritter began to form around 8008-based computers, significantly larger than the tiny cadre of Amateur Computer Society members. In September 1974, Hal Singer began publishing the Mark-8 User Group Newsletter (later renamed the Micro-8 Newsletter) for 8008 enthusiasts out of his office at the Cabrillo High School Computer Center in Lompoc, California. He attracted readers from all across the country: California and New York, yes, but also Iowa, Missouri, and Indiana. Hal Chamberlin started the Computer Hobbyist newsletter two months later. Hobby entrepreneurship expanded around the new machines as well: Robert Suding formed a company in Denver called the Digital Group to sell a packet of upgrade plans for the Mark-8.[11] The first tender blossoms of a hobby computer community had begun to emerge. Then another computer arrived like a spring thunderstorm, drawing whole gardens of hobbyists up across the country and casting the efforts of the likes of Jonathan Titus and Hal Singer in the shade. It, too, came as a response to the arrival of the Mark-8, by a rival publication in search of a blockbuster cover story of their own. Altair Arrives Art Salsberg and Les Solomon, editors at Popular Electronics, were not oblivious to the trends in the hobby, and had been on the lookout for a home computer kit they could put on their cover since the appearance of the TV Typewriter in the fall of 1973. But the July 1974 Mark-8 cover story at rival Radio-Electronics threw a wrench in their plans: they had an 8008-based design of their own lined up, but couldn't publish something that looked like a copy-cat machine. They needed something better, something to one-up the Mark-8. So, they turned to Ed Roberts. He had nothing concrete, but had pitched Solomon a promise that he could build a computer around the new, more powerful Intel 8080 processor. This pitch became Altair—named, according to legend, by Solomon's daughter, after the destination of the Enterprise in the Star Trek episode "Amok Time"—and it set the hobby electronics world on fire when it appeared as the January 1975 Popular Electronics cover story. The famous Popular Electronics Altair cover story. Altair, it should be clear by now, was continuous with what came before: people had been dreaming of and hacking together home computers for years, and each year the process became easier and more accessible, until by 1974 any electronics hobbyist could order a kit or parts for a basic home computer for around $500. What set the Altair apart, what made it special, was the sheer amount of power it offered for the price, compared to the SCELBI-8H and Mark-8. The Altair's value proposition poured gasoline onto smoldering embers; it was an accelerant that transformed a slowly expanding hobby community into a rapidly expanding industry.
The Altair's surprising power derived ultimately from the nerve of MITS founder Ed Roberts. Roberts, like so many of his fellow electronics hobbyists, had developed an early passion for radio technology that was honed into a professional skill by technical training in the U.S. armed forces—the Air Force, in Roberts' case. He founded Micro Instrumentation and Telemetry Systems (MITS) in Albuquerque with fellow Air Force officer Forrest Mims to sell electronic telemetry modules for model rockets. A crossover hobby-entrepreneur business, this straddled two hobby interests of the founders, but did not prove very profitable. A pivot in 1971 to sell low-cost kits to satiate the booming demand for pocket calculators, on the other hand, proved very successful—until it wasn't. By 1974 the big semiconductor firms had vertically integrated and driven most of the small calculator makers out of business. For Roberts, the growing hobby interest in home computers offered a chance to save a dying MITS, and he was willing to bet the company on that chance. Though already $300,000 in debt, he secured a loan of $65,000 from a trusting local banker in Albuquerque, in September 1974. With that money, he negotiated a steep volume discount from Intel by offering to buy a large quantity of "ding-and-dent" 8080 processors with cosmetic damage. Though the 8080 listed for $360, MITS got them for $75 each. So, while Wadsworth at SCELBI (and builders assembling their own Mark-8s) were paying $120 for 8008 processors, MITS was paying little more than half as much for a far better processor.[12] It is hard to overstate what a substantial leap forward in capabilities the 8080 represented: it ran much faster than the 8008, integrated more capabilities into a single chip (for which the 8008 required several auxiliary chips), could support four times as much memory, and had a much more flexible 40-pin interface (versus the 18 pins on the 8008). The 8080 also kept its program stack in external memory, while the 8008 had a strictly size-limited on-CPU stack, which limited the software that could be written for it. So large was the gap that, until 1981, essentially the entire personal and home computer industry ran on the 8080 and two similar designs: the Zilog Z80 (a processor that was software-compatible with the 8080 but ran at higher speeds), and the MOS Technology 6502 (a budget chip with roughly the same capabilities as the 8080).[13] The release of the Altair kit at a total price of $395 instantly made the 8008-based computers irrelevant. Nat Wadsworth of SCELBI reported that he was "devastated by appearance of Altair," and "couldn't understand how it could sell at that price." Not only was the price right, the Altair also looked more like a minicomputer than anything before it. To be sure, it came standard with a measly 256 bytes of memory and the same "switches and lights" interface as the ancient kits from 1971. It would take quite a lot of additional money and effort to turn it into a fully functional computer system. But it came full of promise, in a real case with an extensible card slot system for adding additional memory and input/output controllers.
It was by far the closest thing to a PDP-8 that had ever existed at a hobbyist price point—just as the Popular Electronics cover claimed: "World's First Minicomputer Kit to Rival Commercial Models." It made the dream of the home computer, long cherished by thousands of computer lovers, seem not merely imminent, but immanent: the digital divine made manifest. And this is why the arrival of the MITS Altair, not of the Kenbak-1 or the SCELBI-8H, is remembered as the founding event of the personal computer industry.[14] All that said, even a tricked-out Altair was hardly useful, in an economic sense. If pocket calculators began as a tool for business people, and then became so cheap that people bought them as a toy, the personal computer began as something so expensive and incapable that only people who enjoyed them as a toy would buy them. Next time, we will look at the first years of the personal computer industry: a time when the hobby computer producers briefly flourished and then wilted, mostly replaced and outcompeted by larger, more "serious" firms. But a time when the culture of the typical computer user remained very much a culture of play. Appendix: Micral N, The First Useful Microcomputer There is another machine sometimes cited as the first personal computer: the Micral N. Much like Nat Wadsworth, French engineer François Gernelle was smitten with the possibilities opened up by the Intel 8008 microprocessor, but could not convince his employer, Intertechnique, to use it in their products. So, he joined other Intertechnique defectors to form Réalisation d'Études Électroniques (R2E), and began pursuing some of their erstwhile company's clients. In December 1972, R2E signed an agreement with one of those clients, the Institut National de la Recherche Agronomique (INRA, a government agronomical research center), to deliver a process control computer for their labs at a fraction of the price of a PDP-8. Gernelle and his coworkers toiled through the winter in a basement in the Paris suburb of Châtenay-Malabry to deliver a finished system in April 1973, based on the 8008 chip and offered at a base price of 8,500 francs, about $2,000 in 1973 dollars (one fifth the going rate for a PDP-8).[15] The Micral N was a useful computer, not a toy or a plaything. It was not marketed and sold to hobbyists, but to organizations in need of a real-time controller. That is to say, it served the same role in the lab or on the factory floor that minicomputers had served for the previous decade. It can certainly be called a microcomputer by dint of its hardware. But the Altair lineage stands out because it changed how computers were used and by whom; the microprocessor happened to make that economically possible, but it did not automatically make every machine into which it was placed a personal computer. The Micral N looks very much like the Altair on the outside, but was marketed entirely differently [Rama, Cc-by-sa-2.0-fr]. Useful personal computers would come, in time. But the demand that existed for a computer in one's own home or office in the mid-1970s came from enthusiasts with a desire to tinker and play on a computer, not to get serious business done on one. No one had yet written and published the productivity software that would even make a serious home or office computer conceivable.
Moreover, it was still far too expensive and difficult to assemble a comprehensive office computer system (with a display, ample memory, and external mass storage for saving files) to attract people who didn’t already love working on computers for their own sake. Until these circumstances  changed, which would take several years, play reigned unchallenged among home computer users. The Micral N is an interesting piece of history, but it is an instructive contrast with the story of the personal computer, not a part of it.

Read more
Steamships, Part I: Crossing the Atlantic

For much of this story, our attention has focused on events within the isle of Great Britain, and with good reason: primed by the virtuous cycle of coal, iron, and steam, the depth and breadth of Britain’s exploitation of steam power far exceeded that found anywhere else, for roughly 150 years after the groaning, hissing birth cry of steam power with the first Newcomen engine. American riverboat traffic stands out as the isolated exception. But Great Britain, island though it was, did not stand aloof from the world. It engaged in trade and the exchange of ideas, of course, but it also had a large and (despite occasional setbacks) growing empire, including large possessions in Canada, South Africa, Australia, and India. The sinews of that empire necessarily stretched across the oceans of the world, in the form of a dominant navy, a vast merchant fleet, and the ships of the East India Company, which blurred the lines of military and commercial power: half state and half corporation. Having repeatedly bested all its would-be naval rivals—Spain, the Netherlands, and France—Britain had achieved an indisputable dominance of the sea. Testing the Waters The potential advantages of fusing steam power with naval power were clear: sailing ships were slaves to the whims of the atmosphere. A calm left them helpless, a strong storm drove them on helplessly, and adverse winds could trap them in port for days on end. The fickleness of the wind made travel times unpredictable and could steal the opportunity for a victorious battle from even the strongest fleet. In 1814, Sir Walter Scott took a cruise around Scotland, and the vicissitudes of travel by sail are apparent on page after page of his memoirs:  4th September 1814… Very little wind, and that against us; and the navigation both shoally and intricate. Called a council of war; and after considering the difficulty of getting up to Derry, and the chance of being windbound when we do get there, we resolve to renounce our intended visit to that town… 6th September 1814… When we return on board, the wind being unfavourable for the mouth of Clyde, we resolve to weigh anchor and go into Lamlash Bay. 7th September, 1814 – We had amply room to repent last night’s resolution, for the wind, with its usual caprice, changed so soon as we had weighed anchor, blew very hard, and almost directly against us, so that we were beating up against it by short tacks, which made a most disagreeable night…[1] As it had done for power on land, as it had done for river travel, so steam could promise to do for sea travel: bring regularity and predictability, smoothing over the rough chaos of nature. The catch lay in the supply of fuel. A sailing ship, of course, needed only the “fuel” it gathered from the air as it went along. A riverboat could easily resupply its fuel along the banks as it travelled. A steamship crossing the Atlantic would have to bring along its whole supply. Plan of the Savannah. It is evident that she was designed as a sailing ship, with the steam engine and paddles as an afterthought. Early attempts at steam-powered sea vessels bypassed this problem by carrying sails, with the steam engine providing supplementary power. The American merchant ship Savannah crossed the Atlantic to Liverpool in this fashion in 1819. But the advantages of on-demand steam power did not justify the cost of hauling an idle engine and its fuel across the ocean. 
Its owners quickly converted the Savannah back to a pure sailing ship.[2] MacGregor Laird had a better-thought-out plan in 1832 when he dispatched the two steamships built at his family's docks, Quorra and Alburkah, along with a sailing ship, for an expedition up the River Niger to bring commerce and Christianity to central Africa. Laird's ships carried sails for the open ocean and supplied themselves regularly with wooden fuel when coasting near the shore. The steam engines achieved their true purpose once the little task force reached the river, allowing the ships to navigate easily upstream.[3] Brunel Laird's dream of transforming Africa ended in tatters, and in the death of most of his crew. But Laird himself survived, and he and his homeland would both have a role to play in the development of true ocean-going steamships. Laird, like the great Watt himself, was born in Greenock, on the Firth of Clyde, and Britain's first working commercial steamboats originated on the Clyde, carrying passengers among Glasgow, Greenock, Helensburgh, and other towns. Scott took passage on such a ferry from Greenock to Glasgow in the midst of his Scottish journey, and the contrast is stark in his memoirs between his passages at sea and the steam transit on the Clyde that proceeded "with a smoothness of motion which probably resembles flying."[4] The shipbuilders of the Clyde, with iron and coal close at hand, would make such smooth, predictable steam journeys ever more common in the waters of and around Britain. By 1822, they had already built forty-eight steam ferries of the sort on which Scott had ridden; in the following decade ship owners extended service out into the Irish Sea and English Channel with larger vessels, like David Napier's 240-ton, 70-horsepower Superb and 250-ton, 100-horsepower Majestic.[5] Indeed, the most direct path to long-distance steam travel lay in larger hulls. Because of the buoyancy of water, steamships did not suffer rocket-equation-style negative returns on fuel consumption with increasing size. As the hull grew, its capacity to carry coal increased in proportion to its volume, while the drag the engines had to overcome (and thus the size of engine required) increased only in proportion to the surface area. Mark Beaufoy, a scholar of many pursuits but with a deep interest in naval matters, had shown this decisively in a series of experiments with actual hulls in water, published posthumously by his son in 1834.[6] In the late 1830s, two competing teams of British financiers, engineers, and naval architects emerged, racing to be the first to take advantage of this fact by creating a large enough steamship to make transatlantic steam travel technically and commercially viable. In a lucky break for your historian, the more successful team was led by the more vibrant figure, Isambard Kingdom Brunel: even his name oozes character. (His rival's name, Junius Smith, begins strong but ends pedestrian.) Brunel's unusual last name came from his French father, Marc Brunel; his even more unusual middle name came from his English mother, Sophia Kingdom; and his most unusual first name descends from some Frankish warrior of old.[7] The elder Brunel came from a prosperous Norman farming family. A second son, he was to be educated for the priesthood, but rebelled against that vocation and instead joined the navy in 1786.
Forced to flee France in 1793 due to his activities in support of the royalist cause, he worked for a time as a civil engineer in New York before moving to England in 1799 to develop a mechanized process for churning out pulley blocks for the British navy with one of the great rising engineers of the day, Henry Maudslay.[8] The most famous image of Brunel, in front of the chains of his (and the world's) largest steamship design in 1857. Young Isambard was born in 1806, began working for his father in 1822, and got the railroad bug after riding the Liverpool and Manchester line in 1831. The Great Western Railway (GWR) company named Brunel as chief engineer in 1833, when he was just twenty-seven years old. The GWR originated with a group of Bristol merchants who saw the growth of Liverpool, and feared that without a railway link to central Britain they would lose their status as the major entrepôt for British trade with the United States. It spanned the longest route of any railway to date, almost 120 miles from London to Bristol, and under Brunel's guidance the builders of the GWR leveled, bridged, and tunneled that route at unparalleled cost. Brunel insisted on widely spaced rails (seven feet apart) to allow a smooth ride at high speed, and indeed GWR locomotives achieved speeds of sixty miles-per-hour, with average speeds of over forty miles-per-hour over long distances, including stops. Though the broad-gauge rails Brunel stubbornly fought for are long gone, the iron-ribbed vaults of the train sheds he designed for each terminus—Paddington Station in London and Temple Meads in Bristol—still stand and serve railroad traffic today.[9] An engraving of Temple Meads, Bristol terminus of the Great Western Railway. According to legend, Brunel's quest to build a transatlantic steamer began with an off-hand quip at a meeting of the Great Western directors in October 1835.[10] When someone grumbled over the length of the railway line, Brunel said something to the effect of: "Why not make it longer, and have a steamboat to go from Bristol to New York?" Though perhaps intended as a joke, Brunel's remark spoke to the innermost dreams of the Bristol merchants, to be the indispensable link between England and America. One of them, Thomas Guppy, decided to take the idea seriously, and convinced Brunel to do the same. Brunel, never lacking in self-confidence, did not doubt that his heretofore landbound engineering skills would translate to a watery milieu, but just in case he pulled Christopher Claxton (a naval officer) and William Patterson (a shipbuilder) in on the scheme. Together they formed a Great Western Steam Ship Company.[11] The Race to New York Received opinion still held that a direct crossing by steam from England to New York, of over 3,000 miles, would be impossible without refueling.
Dionysius Lardner took to the hustings of the scientific world to pronounce that opinion. Dionysius Lardner, Brunel’s nemesis. One of the great enthusiasts and promoters of the railroad, Lardner was nonetheless a long-standing opponent of Brunel’s: in 1834 he had opposed Brunel’s route for the Great Western railway on the grounds that the gradient of Box Hill tunnel would cause trains to reach speeds of 120 miles-per-hour and thus suffocate the passengers.[12] He gave a talk to the British Association for the Advancement of Science in August 1836 deriding the idea of a Great Western Steamship, asserting that “[i]n proportion as the capacity of the vessel is increased, in the same ratio or nearly so must the mechanical power of the engines be enlarged, and the consumption of fuel augmented,” and that therefore a direct trip across the Atlantic would require a far more efficient engine than had ever yet been devised.[13] The Dublin-born Lardner much preferred his own scheme to drive a rail line across Ireland and connect the continents by the shortest possible water route: 2,000 miles from Shannon to Newfoundland. Brunel, however, firmly believed that a large ship would solve the fuel problem. As he wrote in a preliminary report to the company in 1836, certainly drawing on Beaufoy’s work: “…the tonnage increases as the cubes of their dimensions, while the resistance increases about as their squares; so that a vessel of double the tonnage of another, capable of containing an engine of twice the power, does not really meet with double the resistance.”[14] He, Patterson, and Claxton agreed to target a 1,400-ton, 400-horsepower ship. They would name her, of course, Great Western. In the post-Watt era, Britain boasted two great engine-building firms: Robert Napier’s in Glasgow in the north, and Maudslay’s in London in the south. After the death of Henry Maudslay, Marc Brunel’s former collaborator, in 1831, ownership of the business passed to his sons. But they lacked their father’s brilliance; the key to the firm’s future lay with the partner he had also bequeathed to them, Joshua Field. Brunel and his father both had ties to Maudslay, and so they tapped Field to design the engine for their great ship. Field chose a “side-lever” engine design, so-called because a horizontal beam on the side of the engine, rocking on a central pivot, delivered power from the piston to the paddle wheels. This was the standard architecture for large marine engines, because it allowed the engine to be mounted deep in the hull, avoiding deck obstructions and keeping the ship’s center of gravity low. Field, however, added several novel features of his own devising. The most important of them was the spray condenser, which recycled some of the engine’s steam for re-use as fresh water for the boiler. This ameliorated the second-most pressing problem for long-distance steamships: the build-up of scale in the boilers from saltwater.[15] The 236-foot-long, 35-foot-wide hull sported iron bracings to increase its strength (a contribution of Brunel), and cabins for 128 passengers. The extravagant, high-ceilinged grand saloon provided a last, luxurious Brunel touch. By far the largest steamship yet built, Great Western would have towered over most other ships in the London docks where she was built.[16] The competing group around Junius Smith had not been idle.
Smith, an American-born merchant who ran his business out of London, had dreamed of a steam-powered Atlantic crossing ever since 1832, when he idled away a fifty-four-day sail from England to New York, almost twice the usual duration. He formed the British and American Steam Navigation Company, and counted among his backers MacGregor Laird, the Scottish shipbuilder of the Niger River expedition. Their 1,800-ton British Queen would boast a 500-horsepower engine, built by the Maudslay company’s Scottish rival, Robert Napier.[17] But Smith’s group fell behind the Brunel consortium (this despite the fact that Brunel still led the engineering on the not-yet-completed Great Western Railway); the Great Western would launch first. In a desperate stunt to be able to boast of making the first Atlantic crossing, British and American launched the channel steamer Sirius on April 4, 1838, from Cork on the south coast of Ireland, laden with fuel and bound for New York. Great Western left Bristol just four days later, with fifty-seven crew (fifteen of them just for stoking coal) to serve a mere seven passengers, each paying the princely sum of 35 guineas for passage.[18] A lithograph of the Great Western (The Steamer Great Western, H.R. Robinson). Despite three short stops to deal with engine problems and a near-mutiny by disgruntled coal stokers working in miserable conditions, Great Western nearly overtook Sirius, arriving in New York just twelve hours behind her. In total the crossing took less than sixteen days—about half the travel time of a fast sailing packet—with coal to spare in the bunkers. The ledger was not all positive: the clank of the engine, the pall of smoke and the ever-present coating of soot and coal dust drained the ocean of some of its romance; as historian Stephen Fox put it, “[t]he sea atmosphere, usually clean and bracing, felt cooked and greasy.” But sixty-six passengers ponied up for the return trip: “Already… ocean travelers had begun to accept the modernist bargain of steam dangers and discomforts in exchange for consistent, unprecedented speed.”[19] In that first year, Great Western puffed alone through Atlantic waters. It made four more round trips in 1838, eking out a small profit. The British Queen launched at last in July 1839, and British and American launched an even larger ship, SS President, the following year. Among the British Queen’s first passengers on its maiden voyage to New York was Samuel Cunard, a name that would resonate in ocean travel for a century to come, and an object lesson in the difference between technical and business success. In 1840 his Cunard Line began providing transatlantic service in four Britannia-class paddle steamers.
Imitation Great Westerns (on a slightly smaller scale), they stood out not for their size or technical novelty but for their regularity and uniformity of service. But the most important factor in Cunard’s success was outmaneuvering the Great Western Steam Ship Company in securing a contract with the Admiralty for mail service to Halifax. This provided a steady and reliable revenue stream—starting at 60,000 pounds a year—regardless of economic downturns. Moreover, once the Navy had come to depend on Cunard for speedy mail service, it had little choice but to keep upping the payments to keep his finances afloat.[20] Thanks to the savvy of Cunard, steam travel from Britain to America, a fantasy in 1836 (at least according to the likes of Dionysius Lardner), had become steady business four years later. Brunel, however, had no patience for the mere making of money. He wanted to build monuments: creations to stand the test of time, things never seen or done before. So, when, soon after the launching of the Great Western, he began to design his next great steamship, he decided he would build it with a hull of solid iron.

Internet Ascendant, Part 2: Going Private and Going Public

In the summer of 1986, Senator Al Gore, Jr., of Tennessee introduced an amendment to the act that authorized the budget of the National Science Foundation (NSF). He called for the federal government to study the possibilities for “communications networks for supercomputers at universities and Federal research facilities.” To explain the purpose of this legislation, Gore called on a striking analogy: One promising technology is the development of fiber optic systems for voice and data transmission. Eventually we will see a system of fiber optic systems being installed nationwide. America’s highways transport people and materials across the country. Federal freeways connect with state highways which connect in turn with county roads and city streets. To transport data and ideas, we will need a telecommunications highway connecting users coast to coast, state to state, city to city. The study required in this amendment will identify the problems and opportunities the nation will face in establishing that highway.1 In the following years, Gore and his allies would call for the creation of an “information superhighway”, or, more formally, a national information infrastructure (NII). As he intended, Gore’s analogy to the federal highway system summons to mind a central exchange that would bind together various local and regional networks, letting all American citizens communicate with one another. However, the analogy also misleads – Gore did not propose the creation of a federally-funded and maintained data network. He envisioned that the information superhighway, unlike its concrete and asphalt namesake, would come into being through the action of market forces, within a regulatory framework that would ensure competition, guarantee open, equal access to any service provider (what would later be known as “net neutrality”), and provide subsidies or other mechanisms to ensure universal service to the least fortunate members of society, preventing the emergence of a gap between the information rich and information poor.2 Over the following decade, Congress slowly developed a policy response to the growing importance of computer networks to the American research community, to education, and eventually to society as a whole. Congress’ slow march towards an NII policy, however, could not keep up with the rapidly growing NSFNET, overseen by the neighboring bureaucracy of the executive branch. Despite its reputation for sclerosis, bureaucracy was created exactly because of its capacity, unlike a legislature, to respond to events immediately, without deliberation. And so it happened that, between 1988 and 1993, the NSF crafted the policies that would determine how the Internet became private, and thus went public. It had to deal every year with novel demands and expectations from NSFNET’s users and peer networks. In response, it made decisions on the fly, decisions which rapidly outpaced Congressional plans for guiding the development of an information superhighway. These decisions rested largely in the hands of a single man – Stephen Wolff. Acceptable Use Wolff earned a Ph.D. in electrical engineering at Princeton in 1961 (where he would have been a rough contemporary of Bob Kahn), and began what might have been a comfortable academic career, with a post-doctoral stint at Imperial College, followed by several years teaching at Johns Hopkins. But then he shifted gears, and took a position at the Ballistic Research Laboratory in Aberdeen, Maryland.
He stayed there for most of the 1970s and early 1980s, researching communications and computing systems for the U.S. Army. He introduced Unix into the lab’s offices, and managed Aberdeen’s connection to the ARPANET.3 In 1986, the NSF recruited him to manage its supercomputing backbone – he was a natural fit, given his experience connecting Army supercomputers to ARPANET. He became the principal architect of NSFNET’s evolution from that point until his departure in 1994, when he entered the private sector as a manager for Cisco Systems. The original intended function of the net that Wolff was hired to manage had been to connect researchers across the U.S. to NSF-funded supercomputing centers. As we saw last time, however, once Wolff and the other network managers saw how much demand the initial backbone had engendered, they quickly developed a new vision of NSFNET, as a communications grid for the entire American research and post-secondary education community. However, Wolff did not want the government to be in the business of supplying network services on a permanent basis. In his view, the NSF’s role was to prime the pump, creating the initial demand needed to get a commercial networking services sector off the ground. Once that happened, Wolff felt it would be improper for a government entity to be in competition with viable for-profit businesses. So he intended to get NSF out of the way by privatizing the network, handing over control of the backbone to unsubsidized private entities and letting the market take over. This was very much in the spirit of the times. Across the Western world, and across most of the political spectrum, government leaders of the 1980s touted privatization and deregulation as the best means to unleash economic growth and innovation after the relative stagnation of the 1970s. As one example among many, around the same time that NSFNET was getting off the ground, the FCC knocked down several decades-old constraints on corporations involved in broadcasting. In 1985, it removed the restriction on owning print and broadcast media in the same locality, and two years later it nullified the fairness doctrine, which had required broadcasters to present multiple views on public-policy debates. From his post at NSF, Wolff had several levers at hand for accomplishing his goals. The first lay in the interpretation and enforcement of the network’s acceptable use policy (AUP). In accordance with NSF’s mission, the initial policy for the NSFNET backbone, in effect until June 1990, required all uses of the network to be in support of “scientific research and other scholarly activities.” This is quite restrictive indeed, and would seem to eliminate any possibility of commercial use of the network. But Wolff chose to interpret the policy liberally. Regular mailing list postings about new product releases from a corporation that sold data processing software – was that not in support of scientific research? What about the decision to allow MCI’s email system to connect to the backbone, at the urging of Vint Cerf, who had left government employ to oversee the development of MCI Mail? Wolff rationalized this – and other later interconnections to commercial email systems such as CompuServe’s – as in support of research by making it possible for researchers to communicate digitally with a wider range of people they might need to contact in the pursuit of their work. A stretch, perhaps.
But Wolff saw that allowing some commercial traffic on the same infrastructure that was used for public NSF traffic would encourage the private investment needed to support academic and educational use on a permanent basis. Wolff’s strategy of opening the door of NSFNET as far as possible to commercial entities got an assist from Congress in 1992, when Congressman Rick Boucher, who helped oversee NSF as chair of the Science Subcommittee, sponsored an amendment to the NSF charter which authorized any additional uses of NSFNET that would “tend to increase the overall capabilities of the networks to support such research and education activities.” This was an ex post facto validation of Wolff’s approach to commercial traffic, allowing virtually any activity as long as it produced profits that encouraged more private investment into NSFNET and its peer networks. Dual-Use Networks Wolff also fostered the commercial development of networking by supporting the regional networks’ reuse of their networking hardware for commercial traffic. As you may recall, the NSF backbone linked together a variety of not-for-profit regional nets, from NYSERNet in New York to Sesquinet in Texas to BARRNet in northern California. NSF did not directly fund the regional networks, but it did subsidize them indirectly, via the money it provided to labs and universities to offset the costs of their connection to their neighborhood regional net. Several of the regional nets then used this same subsidized infrastructure to spin off a for-profit commercial enterprise, selling network access to the public over the very same wires used for the research and education purposes sponsored by NSF. Wolff encouraged them to do so, seeing this as yet another way to accelerate the transition of the nation’s research and education infrastructure to private control. This, too, accorded neatly with the political spirit of the 1980s, which encouraged private enterprise to profit from public largesse, in the expectation that the public would benefit indirectly through economic growth. One can see parallels with the dual-use regional networks in the 1980 Bayh-Dole Act, which defaulted ownership of patents derived from government-funded research to the organization performing the work, not to the government that paid for it. The most prominent example of dual-use in action was PSINet, a for-profit company initially founded as Performance Systems International in 1988. The company was created by William Schrader and Martin Schoffstall, respectively the co-founder of NYSERNet and one of its vice presidents. Schoffstall, a former BBN engineer and co-author of the Simple Network Management Protocol (SNMP) for managing the devices on an IP network, was the key technical leader. Schrader, an ambitious Cornell biology major and MBA who had helped his alma mater set up its supercomputing center and get it connected to NSFNET, provided the business drive. He firmly believed that NYSERNet should be selling service to businesses, not just educational institutions. When the rest of the board disagreed, he quit to found his own company, first contracting with NYSERNet for service, and later raising enough money to acquire its assets.
PSINet thus became one of the earliest commercial internet service providers, while continuing to provide non-profit service to colleges and universities seeking access to the NSFNET backbone.4 Wolff’s final source of leverage for encouraging a commercial Internet lay in his role as manager of the contracts with the Merit-IBM-MCI consortium that operated the backbone. The initial impetus for change in this dimension came not from Wolff, however, but from the backbone operators themselves. A For-Profit Backbone MCI and its peers in the telecommunications industry had a strong incentive to find or create more demand for computer data communications. They had spent the 1980s upgrading their long-line networks from coaxial cable and microwave – already much higher capacity than the old copper lines – to fiber optic cables. These cables, which transmitted laser light through glass, had tremendous capacity, limited mainly by the technology in the transmitters and receivers on either end, rather than the cable itself. And that capacity was far from saturated. By the early 1990s, many companies had deployed OC-48 transmission equipment with 2.5 Gbps of capacity, an almost unimaginable figure a decade earlier. An explosion in data traffic would therefore bring in new revenue at very little marginal cost – almost pure profit.5 The desire to gain expertise in the coming market in data communications helps explain why MCI was willing to sign on to the NSFNET bid proposed by Merit, which massively undercut the competing bids (at $14 million for five years, versus the $40 million and $25 million proposed by their competitors6), and surely implied a short-term financial loss for MCI and IBM. But by 1989, they hoped to start turning a profit from their investment. The existing backbone was approaching the saturation point, with 500 million packets a month, a 500% year-over-year increase.7 So, when NSF asked Merit to upgrade the backbone from 1.5 Mbps T1 lines to 45 Mbps T3, they took the opportunity to propose to Wolff a new contractual arrangement. T3 was a new frontier in networking – no prior experience or equipment existed for digital networks of this bandwidth, and so the companies argued that more private investment would be needed, requiring a restructuring that would allow IBM and Merit to share the new infrastructure with for-profit commercial traffic – a dual-use backbone. To achieve this, the consortium would form a new non-profit corporation, Advanced Network & Services, Inc. (ANS), which would supply T3 networking services to NSF. A subsidiary called ANS CO+RE Systems would sell the same services at a profit to any clients willing to pay. Wolff agreed to this, seeing it as just another step in the transition of the network towards commercial control. Moreover, he feared that continuing to block commercial exploitation of the backbone would lead to a bifurcation of the network, with suppliers like ANS doing an end-run around NSFNET to create their own, separate, commercial Internet. Up to that point, Wolff’s plan for gradually getting NSF out of the way had no specific target date or planned milestones. A workshop on the topic held at Harvard in March 1990, in which Wolff and many other early Internet leaders participated, considered a variety of options without laying out any concrete plans.8 It was ANS’ stratagem that triggered the cascade of events that led directly to the full privatization and commercialization of NSFNET. It began with a backlash.
Despite Wolff’s good intentions, IBM and MCI’s ANS maneuver created a great deal of disgruntlement in the networking community. It became a problem exactly because of the for-profit networks attached to the backbone that Wolff had promoted. So far they had gotten along reasonably well with one another, because they all operated as peers on the same terms. But with ANS, a for-profit company held a de facto monopoly on the backbone at the center of the Internet.9 Moreover, despite Wolff’s efforts to interpret the AUP loosely, ANS chose to interpret it strictly, and refused to interconnect the non-profit portion of the backbone (for NSF traffic) with any of the for-profit networks like PSI, since that would require a direct mixing of commercial and non-commercial traffic. When this created an uproar, they backpedaled, and came up with a new policy, allowing interconnection for a fee based on traffic volume. PSINet would have none of this. In the summer of 1991, they banded together with two other for-profit Internet service providers – UUNET, which had begun by selling commercial access to Usenet before adding Internet service; and the California Education and Research Federation Network, or CERFNet, operated by General Atomics – to form their own exchange, bypassing the ANS backbone. The Commercial Internet Exchange (CIX) consisted at first of just a single routing center in Washington, D.C., which could transfer traffic among the three networks. They agreed to peer at no charge, regardless of the relative traffic volume, with each network paying the same fee to CIX to operate the router. New routers in Chicago and Silicon Valley soon followed, and other networks looking to avoid ANS’ fees also signed on. Divestiture Rick Boucher, the Congressman whom we met above as a supporter of NSF commercialization, nonetheless requested an investigation of the propriety of Wolff’s actions in the ANS affair by the Office of the Inspector General. It found NSF’s actions precipitous, but not malicious or corrupt. Nevertheless, Wolff saw that the time had come to divest control of the backbone. With ANS CO+RE and CIX, privatization and commercialization had begun in earnest, but in a way that risked splitting the unitary Internet into multiple disconnected fragments, as CIX and ANS refused to connect with one another. NSF therefore drafted a plan for a new, privatized network architecture in the summer of 1992, released it for public comment, and finalized it in May of 1993. NSFNET would shut down in the spring of 1995, and its assets would revert to IBM and MCI. The regional networks could continue to operate, with financial support from the NSF gradually phasing out over a four-year period, but would have to contract with a private ISP for internet access. But in a world of many competitive internet access providers, what would replace the backbone? What mechanism would link these opposed private interests into a cohesive whole? Wolff’s answer was inspired by the exchanges already built by cooperatives like CIX – NSF would contract out the creation of four Network Access Points (NAPs), routing sites where various vendors could exchange traffic. Having four separate contracts would avoid repeating the ANS controversy, by preventing a monopoly on the points of exchange. One NAP would reside at the pre-existing, and cheekily named, Metropolitan Area Ethernet East (MAE-East) in Vienna, Virginia, operated by Metropolitan Fiber Systems (MFS).
MAE-West, operated by Pacific Bell, was established in San Jose, California; Sprint operated another NAP in Pennsauken, New Jersey, and Ameritech one in Chicago. The transition went smoothly10, and NSF decommissioned the backbone right on schedule, on April 30, 1995.11 The Break-up Though Gore and others often invoked the “information superhighway” as a metaphor for digital networks, there was never serious consideration in Congress of using the federal highway system as a direct policy model. The federal government paid for the building and maintenance of interstate highways in order to provide a robust transportation network for the entire country. But in an era when both major parties took deregulation and privatization for granted as good policy, a state-backed system of networks and information services on the French model of Transpac and Minitel was not up for consideration.12 Instead, the most attractive policy model for Congress as it planned for the future of telecommunication was the long-distance market created by the break-up of the Bell System between 1982 and 1984. In 1974, the Justice Department filed suit against AT&T, its first major suit against the organization since the 1950s, alleging that it had engaged in anti-competitive behavior in violation of the Sherman Antitrust Act. Specifically, they accused the company of using its market power to exclude various innovative new businesses from the market – mobile radio operators, data networks, satellite carriers, makers of specialized terminal equipment, and more. The suit thus clearly drew much of its impetus from the ongoing disputes since the early 1960s (described in an earlier installment) between AT&T and the likes of MCI and Carterfone. When it became clear that the Justice Department meant business, and intended to break the power of AT&T, the company at first sought redress from Congress. John de Butts, chairman and CEO since 1972, attempted to push a “Bell bill” – formally the Consumer Communications Reform Act – through Congress. It would have enshrined into law AT&T’s argument that the benefits of a single, universal telephone network far outweighed any risk of abusive monopoly, risks which in any case the FCC could already effectively check. But the proposal received stiff opposition in the House Subcommittee on Communications, and never reached a vote on the floor of either Congressional chamber. In a change of tactics, in 1979 the board replaced the combative de Butts – who had once declared openly to an audience of state telecommunications regulators the heresy that he opposed competition and espoused monopoly – with the more conciliatory Charles Brown. But it was too late by then to stop the momentum of the antitrust case, and it became increasingly clear to the company’s leadership that they would not prevail. In January 1982, therefore, Brown agreed to a consent decree that would have the presiding judge in the case, Harold Greene, oversee the break-up of the Bell System into its constituent parts. The various Bell companies that brought copper to the customer’s premises, which generally operated by state (New Jersey Bell, Indiana Bell, and so forth), were carved up into seven blocks called Regional Bell Operating Companies (RBOCs). Working clockwise around the country, they were NYNEX in the northeast, Bell Atlantic, BellSouth, Southwestern Bell, Pacific Telesis, US West, and Ameritech.
All of them remained regulated entities with an effective monopoly over local traffic in their region, but were forbidden from entering other telecom markets. AT&T itself retained the “long lines” division for long-distance traffic. Unlike local phone service, however, the settlement opened this market to free competition from any entrant willing and able to pay the interconnection fees to transfer calls in and out of the RBOCs. A residential customer in Indiana would always have Ameritech as their local telephone company, but could sign up for long-distance service with anyone. However, splitting apart the local and long-distance markets meant forgoing the subsidies that AT&T had long routed to rural telephone subscribers, under-charging them by over-charging wealthy long-distance users. A sudden spike in rural telephone prices across the nation was not politically tenable, so the deal preserved these transfers via a new organization, the non-profit National Exchange Carrier Association, which collected fees from the long-distance companies and distributed them to the RBOCs. The new structure worked. Two major competitors entered the market in the 1980s, MCI and Sprint, and cut deeply into AT&T’s market share. Long-distance prices fell rapidly. Though it is arguable how much of this was due to competition per se, as opposed to the advent of ultra-high-bandwidth fiber optic networks, the arrangement was generally seen as a great success for deregulation and a clear argument for the power of market forces to modernize formerly hidebound industries. This market structure, created ad hoc by court fiat but evidently highly successful, provided the template from which Congress drew in the mid-1990s to finally resolve the question of what telecom policy for the Internet era would look like. Second Time Isn’t The Charm Prior to the main event, there was one brief preliminary. The High Performance Computing Act of 1991 was important tactically, but not strategically. It advanced no new major policy initiatives. Its primary significance lay in providing additional funding and Congressional backing for what Wolff and the NSF already were doing and intended to keep doing – providing networking services for the research community, subsidizing academic institutions’ connections to NSFNET, and continuing to upgrade the backbone infrastructure. Then came the accession of the 104th Congress in January 1995. Republicans took control of both the Senate and the House for the first time in forty years, and they came with an agenda to fight crime, cut taxes, shrink and reform government, and uphold moral righteousness. Gore and his allies had long touted universal access as a key component of the National Information Infrastructure, but with this shift in power the prospects for a strong universal service component to telecommunications reform diminished from minimal to none. Instead, the main legislative course would consist of regulatory changes to foster competition in telecommunications and Internet access, with a serving of bowdlerization on the side. The market conditions looked promising. Circa 1992, the major players in the telecommunications industry were numerous. In the traditional telephone industry there were the seven RBOCs, GTE, and three large long-distance companies – AT&T, MCI, and Sprint – along with many smaller ones.
The new up-and-comers included Internet service providers such as UUNET and PSINet, as well as the IBM/MCI backbone spin-off, ANS; and other companies trying to build out their local fiber networks, such as Metropolitan Fiber Systems (MFS). BBN, the contractor behind ARPANET, had begun to build its own small Internet empire, snapping up some of the regional networks that orbited around NSFNET – Nearnet in New England, BARRNet in the Bay Area, and SURANet in the southeast of the U.S. To preserve and expand this competitive landscape would be the primary goal of the 1996 Telecommunications Act, the only major rewrite of communications policy since the Communications Act of 1934. It intended to reshape telecommunications law for the digital age. The regulatory regime established by the original act siloed industries by their physical transmission medium – telephony, broadcast radio and television, cable TV; each in its own box, with its own rules, and generally forbidden to meddle in the others’ business. As we have seen, sometimes regulators even created silos within silos, segregating the long-distance and local telephone markets. This made less and less sense as media of all types were reduced to fungible digital bits, which could be commingled on the same optical fiber, satellite transmission, or ethernet cable. The intent of the 1996 Act, shared by Democrats and Republicans alike, was to tear down these barriers, these “Berlin Walls of regulation”, as Gore’s own summary of the act put it.13 A complete itemization of the regulatory changes in this doorstopper of a bill is not possible here, but a few examples provide a taste of its character. Among other things, it allowed the RBOCs to compete in long-distance telephone markets, lifted restrictions forbidding the same entity from owning both broadcasting and cable services, and axed the rules that prevented concentration of radio station ownership. The risk, though, of simply removing all regulation, opening the floodgates and letting any entity participate in any market, was to recreate AT&T on an even larger scale, a monopolistic megacorp that would dominate all forms of communication and stifle all competitors. Most worrisome of all was control over the so-called last mile – from the local switching office to the customer’s home or office. Building an inter-urban network connecting the major cities of the U.S. was expensive but not prohibitive; several companies had done so in recent decades, from Sprint to UUNET. To replicate all the copper or cable to every home in even one urban area was another matter. Local competition in landline communications had scarcely existed since the early wildcat days of the telephone, when tangled skeins of iron wire criss-crossed urban streets. In the case of the Internet, the concern centered especially on high-speed, direct-to-the-premises data services, later known as broadband. For years, competition had flourished among dial-up Internet access providers, because all the end user required to reach the provider’s computer was access to a dial tone. But this would not be the case by default for newer services that did not use the dial telephone network. The legislative solution to this conundrum was to create the concept of the “CLEC” – competitive local exchange carrier.
The RBOCs, now referred to as “ILECs” (incumbent local exchange carriers), would be allowed full, unrestricted access to the long-distance market only once they had unbundled their networks by allowing the CLECs, which would provide their own telecommunications services to homes and businesses, to interconnect with and lease the incumbents’ infrastructure. This would enable competitive ISPs and other new service providers to continue to get access to the local loop even when dial-up service became obsolete – creating, in effect, a dial tone for broadband. The CLECs, in this model, filled the same role as the long-distance providers in the post-break-up telephone market. Able to freely interconnect at reasonable fees to the existing local phone networks, they would inject competition into a market previously dominated by the problem of natural monopoly. Besides the creation of the CLECs, the other major part of the bill that affected the Internet addressed the Republicans’ moral agenda rather than their economic one. Title V, known as the Communications Decency Act, forbade the transmission of indecent or offensive material – depicting or describing “sexual or excretory activities or organs” – on any part of the Internet accessible to minors. This, in effect, was an extension of the obscenity and indecency rules that governed broadcasting into the world of interactive computing services. How, then, did this sweeping act fare in achieving its goals? In most dimensions it proved a failure. Easiest to dispose of is the Communications Decency Act, which the Supreme Court struck down quickly (in 1997) as a violation of the First Amendment. Several parts of Title V did survive review, however, including Section 230, the most important piece of the entire bill for the Internet’s future. It allows websites that host user-created content to exist without the fear of constant lawsuits, and protects the continued existence of everything from giants like Facebook and Twitter to tiny hobby bulletin boards. The fate of the efforts to promote competition within the local loop took longer to play out, but proved no more successful than the controls on obscenity. What about the CLECs, given access to the incumbent cable and telephone infrastructure so that they could compete on price and service offerings? The law required FCC rulemaking to hash out the details of exactly what kind of unbundling had to be offered. The incumbents pressed the courts hard to dispute any such ruling that would open up their lines to competition, repeatedly winning injunctions against the FCC, while threatening that introducing competitors would halt their imminent plans for bringing fiber to the home. Then, with the arrival of the Bush Administration and new chairman Michael Powell in 2001, the FCC became actively hostile to the original goals of the Telecommunications Act. Powell believed that the need for alternative broadband access would be satisfied by intermodal competition among cable, telephone, and power communications networks, as well as cellular and wireless networks. No more FCC rules in favor of CLECs would be forthcoming. For a brief time around the year 2000, it was possible to subscribe to third-party high-speed internet access using the infrastructure of your local telephone or cable provider. After that, the most central of the Telecom Act’s pro-competitive measures became, in effect, a dead letter.
The much ballyhooed fiber-to-the-home only began to actually reach a significant number of homes after 2010, and then only with reluctance on the part of the incumbents.14 As author Fred Goldstein put it, the incumbents had “gained a fig leaf of competition without accepting serious market share losses.”15 During most of the twentieth century, networked industries in the U.S. had sprouted in a burst of entrepreneurial energy and then been fitted into the matrix of a regulatory framework as they grew large and important enough to affect the public interest. Broadcasting and cable television had followed this pattern. So had trucking and the airlines. But with the CLECs all but dead by the early 2000s, the Communications Decency Act struck down, and other attempts to control the Internet such as the Clipper chip16 stymied, the Internet would follow an opposite course. Having come to life under the guiding hand of the state, it would now be allowed to develop in an almost entirely laissez-faire fashion. The NAP framework established by the NSF at the hand-off of the backbone would be the last major government intervention in the structure of the Internet. This was true at both the transport layer (the networks such as Verizon and AT&T that transported raw data) and the applications layer (software services from portals like Yahoo! to search engines like Google to online stores like Amazon). In our last chapter, we will look at the consequences of this fact, briefly sketching the evolution of the Internet in the U.S. from the mid-1990s onward. Quoted in Richard Wiggins, “Al Gore and the Creation of the Internet,” 2000. “Remarks by Vice President Al Gore at National Press Club,” December 21, 1993. Biographical details on Wolff’s life prior to NSF are scarce – I have recorded all of them that I could find here. Notably I have not been able to find even his date and place of birth. Schrader and PSINet rode high on the Internet bubble in the late 1990s, acquiring other businesses aggressively, and, most extravagantly, purchasing the naming rights to the football stadium of the NFL’s newest expansion team, the Baltimore Ravens. Schrader tempted fate with a 1997 article entitled “Why the Internet Crash Will Never Happen.” Unfortunately for him, it did happen, bringing about his ouster from the company in 2001 and PSINet’s bankruptcy the following year. To get a sense of how fast the cost of bandwidth was declining: in the mid-1980s, leasing a T1 line from New York to L.A. would cost $60,000 per month. Twenty years later, an OC-3 circuit with 100 times the capacity cost only $5,000, more than a thousand-fold reduction in price per capacity. See Fred R. Goldstein, The Great Telecom Meltdown, 95-96. Goldstein states that the 1.5 Mbps T1/DS1 line has 1/84th the capacity of OC-3, rather than 1/100th, a discrepancy I can’t account for. But this has little effect on the overall math. Office of Inspector General, “Review of NSFNET,” March 23, 1993. Fraser, “NSFNET: A Partnership for High-Speed Networking, Final Report”, 27. Brian Kahin, “RFC 1192: Commercialization of the Internet Summary Report,” November 1990. John Markoff, “Data Network Raises Monopoly Fear,” New York Times, December 19, 1991. Though many other technical details had to be sorted out, see Susan R. Harris and Elise Gerich, “Retiring the NSFNET Backbone Service: Chronicling the End of an Era,” ConneXions, April 1996.
The most problematic part of privatization proved to have nothing to do with the hardware infrastructure of the network, but instead with handing over control of the domain name system (DNS). For most of its history, its management had depended on the judgment of a single man – Jon Postel. But businesses investing millions in a commercial internet would not stand for such an ad hoc system. So the government handed control of the domain name system to a contractor, Network Solutions. The NSF had no real mechanism for regulatory oversight of DNS (though they might have done better by splitting the control of different top-level domains (TLDs) among different contractors), and Congress failed to step in to create any kind of regulatory regime. Control changed once again in 1998 to the non-profit ICANN (Internet Corporation for Assigned Names and Numbers), but the management of DNS still remains a thorny problem. The only quasi-exception to this focus on fostering competition was a proposal by Senator Daniel Inouye to reserve 20% of Internet traffic for public use: Steve Behrens, “Inouye Bill Would Reserve Capacity on Infohighway,” Current, June 20, 1994. Unsurprisingly, it went nowhere. Al Gore, “A Short Summary of the Telecommunications Reform Act of 1996”. Jon Brodkin, “AT&T kills DSL, leaves tens of millions of homes without fiber Internet,” Ars Technica, October 5, 2020. Goldstein, The Great Telecom Meltdown, 145. The Clipper chip was a proposed hardware backdoor that would give the government the ability to bypass any U.S.-created encryption software. Further Reading Janet Abbate, Inventing the Internet (1999) Karen D. Fraser, “NSFNET: A Partnership for High-Speed Networking, Final Report” (1996) Shane Greenstein, How the Internet Became Commercial (2015) Yasha Levine, Surveillance Valley: The Secret Military History of the Internet (2018) Rajiv Shah and Jay P. Kesan, “The Privatization of the Internet’s Backbone Network,” Journal of Broadcasting & Electronic Media (2007)

Steam Revolution: The Turbine

Incandescent electric light did not immediately snuff out all of its rivals: the gas industry fought back with its own incandescent mantle (which used the heat of the gas to induce a glow in another material) and the arc lighting manufacturers with a glass-enclosed arc bulb.[1] Nonetheless, incandescent lighting grew at an astonishing pace: the U.S. alone had an estimated 250,000 such lights in use by 1885, three million by 1890 and 18 million by the turn of the century.[2] Edison’s electric light company expanded rapidly across the U.S. and into Europe, and its success encouraged the creation of many competitors. An organizational division gradually emerged between manufacturing companies that built equipment and supply companies that used it to generate and deliver power to customers. A few large competitors came to dominate the former industry: Westinghouse Electric and General Electric (formed from the merger of Edison’s company with Thomson-Houston) in the U.S., and the Allgemeine Elektricitäts-Gesellschaft (AEG) and Siemens in Germany. In a sign of its gradual relative decline, Britain produced only a few smaller firms, such as Charles Parsons’ C. A. Parsons and Company—of whom more later.  In accordance with Edison’s early imaginings, manufacturers and suppliers expanded beyond lighting to general-purpose electrical power, especially electric motors and electric traction (trains, subways, and street cars). These new fields opened up new markets for users: electric motors, for example, enabled small-scale manufacturers who lacked the capital for a steam engine or water wheel to consider mechanization, while releasing large-scale factories from the design constraints of mechanical power transmission. They also provided electrical supply companies with a daytime user base to balance the nighttime lighting load. The demands of this growing electric power industry pushed steam engine design to its limits. Dynamos typically rotated hundreds of times a minute, several times the speed of a typical steam engine drive shaft. Engineers overcame this with belt systems, but these gave up energy to friction. Faster engines that could drive a dynamo directly required new high-speed valve control machinery, new cooling and lubrication systems to withstand the additional friction, and higher steam pressures more typical of marine engines than factories. That, in turn, required new boiler designs like the Babcock and Wilcox, which could operate safely at pressures well over 100 psi.[3] A high-speed steam engine (made by the British firm Willans) directly driving a dynamo (the silver cylinder at left). From W. Norris and Ben. H. Morgan, High Speed Steam Engines, 2nd edition (London: P.S. King & Son, 1902), 13. But the requirement that ultimately did in the steam engine was not for speed, but for size. As the electric supply companies evolved into large-scale utilities, providing power and light to whole urban centers and then beyond, they demanded more and more output from their power houses. Even Edison’s Pearl Street station, a tiny installation when looking back from the perspective of the turn of the century, required multiple engines to supply it. By 1903, the Westminster Electric Supply Corporation, which supplied only a part of London’s power, required forty-nine Willans engines in three stations to provide about 9 megawatts of power (an average of about 250 horsepower an engine). But demand continued to grow, and engines grew in response. 
Perhaps the largest steam engines ever built were the 12,000 horsepower giants designed by Edwin Reynolds and installed in 1901 for the Manhattan Elevated Railway Company and in 1904 for the Interborough Rapid Transit (IRT) subway company. Each of these engines actually consisted of two compound engines grafted together, each with its own high- and low-pressure cylinder, set at right angles to give eight separate impulses per rotation to the spinning alternator (an alternating current dynamo). The combined unit, engine and alternator, weighed 720 tons. But the elevated railway required eight of these monsters, and the IRT expected to need eleven to meet its power needs. The IRT’s power house, with a Renaissance Revival façade designed by famed architect Stanford White, filled a city block near the Hudson River (where it still stands today).[4] The inside of the IRT power house, with five engines installed. Each engine consists of two towers, with a disc-shaped dynamo between them. From Scientific American, October 29th, 1904. How much farther the reciprocating steam engine might have been coaxed to grow is hard to say with certainty, because even as the IRT powerhouse was going up in Manhattan, it was being overtaken by a new power technology based on whirling rotors instead of cycling pistons, the steam turbine. This great advancement in steam power borrowed from developments that had been brewing for decades in its most long-standing rival, water power. Niagara The signature electrical project of the turn of the twentieth century was the Niagara Falls Power Company. The immense scale of its works, its ambitions to distribute power over dozens of miles, its variety of prospective customers, and its adoption of alternating current: all signaled that the era of local, Pearl Street-style direct-current electric light plants was drawing to a close. The tremendous power latent in Niagara’s roaring cataract as it dropped from the level of Lake Erie to that of Lake Ontario was obvious to any observer—engineers estimated its potential horsepower in the millions—but the problem was how to capture it, and where to direct it. By the late nineteenth century, several mills had moved to draw off some of its power locally. But Niagara could power thousands of factories, and each could hardly dig its own canals, tunnels, and wheel pits to draw off the small fraction of the waterfall that it required. New York State law, moreover, forbade development in the immediate vicinity of the falls to protect its scenic beauty. The solution ultimately decided on was to supply power to users from a small number of large-scale power plants, and the largest nearby pool of potential users lay in Buffalo, about twenty miles away.[5] The Niagara project originated in the 1886 designs of New York State engineer Thomas Evershed for a canal and tunnel lined with hundreds of wheel pits to supply power to an equal number of local factories. But the plan took a different direction in 1889 after securing the backing of a group of New York financiers, headed once again by J.P. Morgan. The Morgan group consulted a wide variety of experts in North America and Europe before settling on an electric power system as the best alternative, despite the unproven nature of long-distance electric power transmission.
This proved a good bet: by 1893, Westinghouse had shown in California that it could deliver high-voltage alternating current over dozens of miles, convincing the Niagara company to adopt the same model.[6] Cover of the July 22, 1899 issue of Scientific American with multiple views of the first Niagara Falls Power Company power house and its five-thousand-horsepower turbine-driven generators. By 1904, the company had completed canals, vertical shafts for the fall of water, two powerhouses with a total capacity of 110,000 horsepower, and a mile-long discharge tunnel. They supplied power to local industrial plants, the city of Buffalo, and a wide swath of New York State and Ontario.[7] The most important feature of the power plant for our story, however, was its set of Westinghouse generators driven by water turbines, each with a capacity of 5,000 horsepower. As Terry Reynolds, a historian of the waterwheel, put it, this was “more than ten times [the capacity] of the most powerful vertical wheel ever built.”[8] Water turbines had made possible the exploitation of water power on a previously inconceivable scale; appropriately so, for they originated from a hunger on the European continent for a power that could match British steam. Water Turbines The exact point at which a water wheel becomes a turbine is somewhat arbitrary; a turbine is simply a kind of water wheel that has reached a degree of efficiency and power that earlier designs could not approach. But the distinction most often drawn is in terms of relative motion: the water in a traditional wheel pushes the vane along with the same speed and direction as its own flow (like a person pushing a box along the floor). A turbine, on the other hand, creates “motion of the water relative to the buckets or floats of the wheel” in order to extract additional energy: that is to say, it uses the kinetic energy of the water as well as its weight or pressure. That can occur through either impulse (pressing water against the turning vanes) or reaction (shooting water out from them to cause them to turn), but very often includes a combination of both.[9] The exact origins of the horizontal water wheel are unknown, but they had been used in Europe since at least the late Middle Ages. They offered by far the simplest way to drive a millstone, since the stone could be attached directly to the wheel without any gearing, and remained in wide use in poorer regions of the continent well into the modern period. For centuries, the manufacturers and engineers of Western Europe focused their attention on the more powerful and efficient vertical water wheel, and this type constitutes most of our written record of water technology. Going back to the Renaissance, however, descriptions and drawings can be found of horizontal wheels with curved vanes intended to capture more of the flow of water, and it was the application of rigorous engineering to this general idea that led to the modern turbine. The turbine was in this sense the revenge of the horizontal water wheel, transforming the most low-tech type of water wheel into the most sophisticated. All of the early development of the water turbine occurred in France, which could draw on a deep well of hydraulic theory but could not so easily access coal and iron to make steam as could its British neighbors.
Bernard Forest de Belidor, an eighteenth-century French engineer, recorded in his 1737 treatise on hydraulic engineering the existence of some especially ingenious horizontal wheels, used to grind flour at Bascale on the Garonne. They had curved blades fitted inside a surrounding barrel and angled like the blades of a windmill, such that “the water that pushes it works it with the force of its weight composed with the circular motion given to it by the barrel…”[10] Nothing much came of this observation for another century, but Belidor had identified what we could call a proto-turbine, where water not only pushed on the vanes but also glided down through them like the breeze on the arms of a windmill, capturing more of its energy. The horizontal mill wheels observed on the Garonne by Belidor. From Belidor, Architecture hydraulique vol. 1, part 2, Plan 5. In the meantime, theorists came to an important insight. Jean-Charles de Borda, another French engineer (there will be a lot of them in this part of the story), was only a small child in a spa town just north of the Pyrenees when Belidor was writing about water wheels. He studied mathematics and wrote mathematical treatises, became an engineer for the Army and then the Navy, undertook several scientific voyages, fought in the American Revolutionary War, and headed the commission that established the standard length of the meter. In the midst of all this he found some time in 1767 to write up a study on hydraulics for the French Academy of Sciences, in which he articulated the principle that, to extract the most power from a water wheel, the water should enter the machine without shock and leave it without velocity. Lazare Carnot, father of Sadi, restated this principle some fifteen years later, in a treatise that reached a wider audience than de Borda’s paper.[11] Though it is obviously impossible for the water to literally leave the wheel without velocity (for after all without velocity it would never leave), it was through striving for this imaginary ideal that engineers developed the modern, highly efficient water turbine. First came Jean-Victor Poncelet (from now on, if I mention someone, just assume they are French), another military engineer who had accompanied Napoleon’s Grande Armée into Russia in 1812, where he ended up a prisoner of war for two years. After returning home to Metz he became the professor of mechanics at the local military engineering academy. While there he turned his mind to vertical water wheels, and a long-standing tradeoff in their design: undershot wheels, in which the water passed under the wheel, were cheaper to construct but not very efficient, while overshot wheels, where the water came to the top of the wheel and fell on its vanes or buckets, had the opposite attributes. Poncelet combined the virtues of both by applying the principle of de Borda and Carnot. The traditional undershot waterwheel had a maximum theoretical efficiency of 50%, because the ideal wheel turned at half the speed of the water current, allowing the water to leave the vanes of the wheel behind with half of its initial velocity. The appearance of cheap sheet iron had made it possible to substitute metal vanes for wooden, and iron vanes could easily be bent in a curve. 
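Where does that 50 percent ceiling come from? A back-of-the-envelope sketch, in modern notation rather than anything Borda or Poncelet themselves would have written: suppose water of density \rho and volumetric flow Q arrives at the flat vanes of an undershot wheel with speed v and leaves at the speed u of the vanes themselves. The force on the wheel and the power delivered to it are then

\[
F = \rho Q\,(v - u), \qquad P = F u = \rho Q\,(v - u)\,u,
\]

which is greatest when u = v/2, giving

\[
P_{\max} = \tfrac{1}{4}\,\rho Q v^{2}, \qquad
\eta_{\max} = \frac{P_{\max}}{\tfrac{1}{2}\,\rho Q v^{2}} = \frac{1}{2}.
\]

The departing water still carries a quarter of the stream's original kinetic energy, and another quarter is destroyed in the shock of impact against the flat vanes. Those two losses are exactly what Borda's dictum about entering without shock and leaving without velocity targets, and exactly what Poncelet's curved vanes were designed to eliminate.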
By curving the vanes of the wheel just so towards the incoming water, Poncelet found that it would run up the cupped vane, expending all of its velocity, and then fall out of the bottom of the wheel.[12] He published his idea in 1825 to immediate acclaim: “no other paper on water-wheels… had proved so interesting and commanded such attention.”[13] The Poncelet water wheel. Poncelet’s advance hinted at the possibility of a new water-powered industrial future for France. His wheel design soon became a common sight in a France eager to develop its industrial might, and richer in falling water than in reserves of coal. It inspired the Société d’Encouragement pour l’Industrie Nationale, an organization founded in 1801 to push France to be more industrially competitive with Britain, to offer a prize of 6,000 francs to anyone who “would apply on a large scale, in a satisfactory manner, in factories and manufacturing works, the water turbines or wheels with curved blades of Belidor.” The revenge of the horizontal wheel was at hand.[14] Benoît Fourneyron, an engineer at a water-powered ironworks in the hilly country near the Swiss border, claimed the prize in 1833. Even before the announcement of the prize, he had, in fact, already undertaken a deep study of hydraulic theory, reading up on Borda and his successors. He had devised and tested an improved “Belidor-style” wheel, applying the curved metal vanes of Poncelet to a horizontal wheel situated in a barrel-shaped pit, which we can fairly call the first modern water turbine. He went on to install over a hundred of these turbines around Europe, but his signal achievement was the 1837 spinning mill amid the hills of the Black Forest in Baden, which took in a head of water falling over 350 feet and generated sixty horsepower at 80% efficiency. The spinning rotor of the turbine responsible for this power was a mere foot across and weighed only forty pounds. A traditional wheel could neither take on such a head of water nor derive so much power, so efficiently, from such a compact machine.[15] The Fourneyron turbine. The inflowing water, from the reservoir A drives the rotor before emptying from its radial exterior into the basin D. From Eugène Armengaud, Traité théorique et pratique des moteurs hydrauliques et a vapeur, nouvelle edition (Paris: Armengaud, 1858), 279. Steam Turbines The water turbine was thus a far smaller and more efficient machine than its ancestor, the traditional water wheel. Its basic form had existed since at least the time of Belidor, but to achieve an efficient, high-speed design like Fourneyron’s required a body of engineers deeply educated in mathematical physics and a surrounding material culture capable of realizing those mathematical ideas in precisely machined metal. It also required a social context in which there existed demand for more power than traditional sources could ever provide: in this case, a France racing to catch up with rapidly industrializing Britain. The same relation held between the steam turbine and the reciprocating steam engine: the former could be much more compact and efficient, but put much higher demands on the precision of its design and construction. It was no great leap to imagine that steam could drive a turbine in the same way that water did: through the reaction against or impulse from moving steam. 
One could even look to some centuries-old antecedents for inspiration: the steam-jet reaction propulsion of Heron of Alexandria’s whirling “engine” (mentioned much earlier in this history), or a woodcut in Giovanni Branca’s seventeenth-century Le Machine, which showed the impulse of a steam jet driving a horizontal paddlewheel. But it is one thing to make a demonstration or draw a picture, and another to make a useful power source. A steam turbine presented a far harder problem than a water turbine, because steam was so much less dense than liquid water. Simply transplanting steam into a water turbine design would be like blowing on a pinwheel: it would spin, but generate little power.[16] The difficulty was clear even in the eighteenth century: when confronted in 1784 with reports of a potential rival steam engine driven by the reaction created by a jet of steam, James Watt calculated that, given the low relative density of steam, the jet would have to shoot from the ends of the rotor at 1,300 feet per second, and thus “without god makes it possible for things to move 1000 feet [per second] it can not do much harm.” As historian of steam Henry Dickinson epitomized Watt’s argument, “[t]he analysis of the problem is masterly and the conclusion irrefutable.”[17] Even when later generations of metalworking made the speeds required appear more feasible, one could get nowhere with traditional “cut and try” techniques and ordinary physical tools; the problem demanded careful analysis with the precision tools offered by mathematics and physics.[18] Dozens of inventors took a crack at the problem nonetheless, including another famed steam engine designer, Richard Trevithick. None found success. Though Fourneyron had built an effective water turbine in the 1830s, the first practical steam turbines did not appear until the 1880s: a time when metallurgy and machine tools had achieved new heights (with mass-produced steels of various grades and qualities available) and a time when even the steam engine was beginning to struggle to sate modern society’s demand for power. They first appeared in two places more or less at once: Sweden and Britain. Gustaf de Laval burst from his middle-class background in the Swedish provinces into the engineering school at Uppsala with few friends but many grandiose dreams: he was the protagonist in his own heroic tale of Swedish national greatness, the engineering genius who would propel Sweden into the first rank of great nations. He lived simultaneously in grand style and constant penury, borrowing from his visions for an ever more prosperous tomorrow to live beyond his means of today. In the 1870s, while working a day job at a glassworks, he developed two inventions based on centrifugal force generated by a rapidly spinning wheel. The first, a bottle-making machine, flopped, but the second, a cream separator, became the basis for a successful business that let him leave his day job behind.[19] Then, in 1882, he patented a turbine powered by a jet of steam directed at a spinning wheel. De Laval claimed that his inspiration came from seeing a nozzle used for sandblasting at the glassworks come loose and whip around, unleashing its powerful jet into the air; it is also not hard to see some continuity in his interest in high-speed rotation.
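The connection between steam’s low density and the extreme speeds Watt complained of (and that de Laval would later embrace) can be sketched with the standard single-stage impulse relation; the specific numbers below are illustrative assumptions of mine, not figures from the sources. For a jet of speed $v_j$ striking the blades of an impulse wheel, the blade speed that extracts the most energy is roughly $u \approx v_j/2$, so a rotor of radius $r$ must turn at

$$N = \frac{60\,u}{2\pi r} \approx \frac{60 \times 500\ \text{m/s}}{2\pi \times 0.1\ \text{m}} \approx 48{,}000\ \text{rpm}$$

for a jet on the order of 1,000 m/s and a rotor only 20 cm across – the same order of magnitude as de Laval’s machines, and far beyond what the bearings and gearing of Watt’s day could have survived.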
De Laval used his whirling turbines to power his whirling cream separators, and then acquired an electric light company, giving himself another internal customer for turbine power.[20] Though superficially similar to Branca’s old illustration, de Laval’s machine was far more sophisticated. As Watt had calculated a century earlier, the low density of steam demanded high rotational speeds (otherwise the steam would escape from the machine having given up very little energy to the wheel) and thus a very high-velocity jet: de Laval’s steel rotor spun at tens of thousands of rotations per minute in an enclosed housing. A few years later he invented an hourglass-shaped nozzle to propel the steam jet to supersonic speeds, a shape that is still used in rocket engines for the same purpose today. Despite the more advanced metallurgy of the late-nineteenth century, however, de Laval still ran up against its limits: he could not run his turbine at the most efficient possible speed without burning out his bearings and reduction gear, and so his turbines didn’t fully capture their potential efficiency advantage over a reciprocating engine.[21] Cutaway view of a de Laval turbine, from William Ripper, Heat Engines (London: Longmans, Green, 1909), 234. Meanwhile, the British engineer Charles Parsons came up with a rather different approach to extracting energy from the steam, one which didn’t require such rapid rotation. Whereas de Laval strove up from the middle class, Parsons came from the highest gentry. Son of the third Earl of Rosse, he grew up in a castle in Ireland, with grounds that included a lake and a sixty-foot-long telescope constructed to his father’s specifications. He studied at home under Robert Ball, who later became the Astronomer Royal of Ireland, then went on to graduate from Cambridge University in 1877 as eleventh wrangler—the eleventh best in his class on the mathematics exams.[22] Despite his noble birth, Parsons appeared determined to find his own way in the world. He apprenticed himself at Elswick Works, a manufacturer of heavy construction and mining equipment and military ordnance in Newcastle upon Tyne. He spent a couple of years with a partner in Leeds trying to develop rocket-powered torpedoes before taking up as a junior partner at another heavy engineering concern, Clarke Chapman in Gateshead (back on the River Tyne).[23] His new bosses directed Parsons away from torpedoes toward the rapidly growing field of electric lighting. He turned to the turbine concept in search of a rotor that could match the high rotational speeds of a dynamo. Parsons came up with a different solution to the density problem than de Laval’s. Rather than try to extract as much power as possible from the steam jet with one extremely fast rotor, he would send the steam through a series of rotors arranged one after another along a single shaft. They would then not have to spin so quickly (though Parsons’ first prototype still ran at 18,000 rotations per minute), and each could extract a bit of energy from the steam as it flowed through the turbine, dropping in pressure. This design extended the two or three stages of pressure reduction in a multi-cylinder steam engine into a continuous flow across a dozen or more rotors. Parsons’ approach created some new challenges (keeping the long, rapidly spinning shaft from bowing too far in one direction or the other, for example) but ultimately most future steam turbines would copy this elongated form.[24] Parsons’ original prototype turbine and dynamo, with the top removed.
Steam entered at the center and exited from both ends, which eliminated the need to deal with “end thrust,” a force pushing on one end of the turbine. From Dickinson, A Short History of the Steam Engine, plate vii. The Rise of Turbines Parsons soon founded his own firm to exploit the turbine. Because it had far less inherent friction than the piston of a traditional engine, and because none of its parts had to touch both hot and cold steam, a turbine had the potential to be much more efficient, but early turbines did not start out that way. So his early customers were those who cared mainly about the smaller size of turbines: shipbuilders looking to put in electric lighting without adding too much weight or using too much space in the hull. In other applications reciprocating engines still won out.[25] Further refinements, however, allowed turbines to start to supplant reciprocating engines in electrical systems more generally: more efficient blade designs, the addition of a regulator to ensure that steam entered the turbine only at full pressure, the superheating of steam at one end and the condensing of it at the other to maximize the fall in temperature across the entire engine. Turbo-generators—electrical dynamos driven by turbines—began to find buyers in the 1890s. By 1896, Parsons could boast that a two-hundred-horsepower turbine his firm constructed for a Scottish electric power station ran at 98% of its ideal efficiency, and Westinghouse had begun to develop turbines under license in the United States.[26] Cutaway view of a fully developed Parsons-style turbine. Steam enters at left (A) and passes through the rotors to the right. From Ripper, Heat Engines, 241. At the same time, Parsons was pushing for the construction of ships with turbine powerplants, starting with the prototype Turbinia, which drove nine propellers with three turbines and achieved a top speed of nearly forty miles per hour. Suitably impressed, the British Admiralty ordered turbine-powered destroyers (starting with Viper in 1897), but the real turning point came in 1906 with the completion of the first turbine-driven battleship (Dreadnought) and transatlantic steamers (Lusitania and Mauretania), all supplied with Parsons powerplants.[27] HMS Dreadnought was remarkable not only for her armament and armor, but also for her speed of 21 knots (24 miles per hour), made possible by Parsons turbines. The very first steam turbines had demonstrated their advantage over traditional engines in size; a further decade-and-a-half of development allowed them to realize their potential advantages in efficiency; and now these massive vessels made clear their third advantage: the ability to scale to enormous power outputs. As we saw, the monster steam engines at the subway power house in New York could generate 12,000 horsepower, but the turbines aboard Lusitania churned out half again as much, and that was far from the limit of what was possible. In 1915, the Interborough Rapid Transit Company, facing ever-growing demand for power with the addition of a third (express) track to its elevated lines, installed three 40,000-horsepower turbines for electrical generation, rendering Reynolds’ monster engines of a decade earlier obsolete. By the 1920s, 40,000-horsepower turbines were being built in the U.S., burning half as much coal per watt of power generated as the most efficient reciprocating engines.[28] Parsons lived to see the triumph of his creation.
He spent his last years cruising the world, and preferred to spend the time between stops talking shop with the crew and engineers rather than lounging with other wealthy passengers. He died in 1931, at age 76, in the Caribbean, aboard the (turbine-powered, of course) Duchess of Richmond.[29] Meanwhile, power usage shifted towards electricity, made widely available not by traditional steam engines but by the growth of steam and water turbines and the development of long-distance power transmission. Niagara was just a foretaste of the large-scale water power projects made feasible by the newfound capacity to transmit that power wherever it was needed: the Hoover Dam and Tennessee Valley Authority in the U.S., the Rhine power dams in Europe, and later projects intended to spur the modernization of poorer countries, from the Aswan Dam on the Nile to the Gezhouba Dam on the Yangtze. In regions with easy access to coal, however, steam turbines provided the majority of all electric power until well into the twentieth century. Cheap electricity transformed industry after industry. By 1920, manufacturing consumed half of the electricity produced in the U.S., mainly through dedicated electric motors at each tool, eliminating the need for the construction and maintenance of a large, heavy steam engine and for bulky and friction-heavy shafts and belts to transmit power through the factory. The capital barriers to starting a new manufacturing plant thus dropped substantially along with the recurring cost of paying for power, and the way was opened to completely rethink how manufacturing plants were built and operated. Factories became cleaner, safer, and more pleasant to work in, and the ability to organize machines according to the most efficient work process rather than the mechanical constraints of power delivery produced huge dividends in productivity.[30] A typical pre-electricity factory power distribution system, based on line shafts and belts (in this case driving power looms). All the machines in the factory have to be organized around the driveshafts. [Z22, CC BY-SA 3.0] The 1910 Ford Highland Park plant represents a hybrid stage on the way to full electrification of every machine; the plant still had overhead line shafts (here for milling engine blocks), but each area was driven by a local electric motor, allowing for a much more flexible arrangement of machinery. By that time, the heyday of the piston-driven steam engine was over. For large-scale installations, it could no longer compete with turbines (whether driven by water or steam). At the same time, feisty new competitors, diesel and gasoline engines, were gnawing away at its share of the lower-horsepower market. The warning shot fired by the air engine had finally caught up to steam. It could not outrun thermodynamics, nor the incredibly energy-dense new fuel source that had come bubbling up out of the ground: rock oil, or petroleum.

Internet Ascendant, Part 1: Exponential Growth

In 1990, John Quarterman, a networking consultant and UNIX expert, published a comprehensive survey of the state of computer networks. In a brief section on the potential future for computing, he predicted the appearance of a single global network for “electronic mail, conferencing, file transfer, and remote login, just as there is now one worldwide telephone network and one worldwide postal system.” But he did not assign any special significance to the Internet in this process. Instead, he assumed that the worldwide net would “almost certainly be run by government PTTs”, except in the United States, “where it will be run by the regional Bell Operating Companies and the long-distance carriers.” It will be the purpose of this post to explain how, in a sudden eruption of exponential growth, the Internet so rudely upset these perfectly natural assumptions. Passing the Torch The first crucial event in the creation of the modern Internet came in the early 1980s, when the Defense Communications Agency (DCA) decided to split ARPANET in two. The DCA had taken control of the network in 1975. By that time, it was clear that it made little sense for the ARPA Information Processing Techniques Office (IPTO), a blue-sky research organization, to be involved in running a network that was being used for participants’ daily communications, not for research about communication. ARPA had tried and failed to hand off the network to AT&T for private operation. The DCA, responsible for the military’s communication systems, seemed the next best choice. For the first several years of this new arrangement, ARPANET prospered under a regime of benign neglect. However, by the early 1980s, the Department of Defense’s aging data communications infrastructure desperately needed an upgrade. The intended replacement, AUTODIN II, which DCA had contracted with Western Union to construct, was foundering. So DCA’s leaders appointed Colonel Heidi Heiden to come up with an alternative. He proposed to use the packet-switching technology that DCA already had in hand, in the form of ARPANET, as the basis for the new defense data network. But there was an obvious problem with sending military data over ARPANET – it was rife with long-haired academics, including some who were actively hostile to any kind of computer security or secrecy, such as Richard Stallman and his fellow hackers at the MIT Artificial Intelligence Lab. Heiden’s solution was to bifurcate the network. He would leave the academic researchers funded by ARPA on ARPANET, while splitting the computers used at national defense sites off onto a newly formed network called MILNET. This act of mitosis had two important consequences. First, by decoupling the militarized and non-militarized parts of the network, it was the initial step toward transferring the Internet to civilian, and eventually private, control. Second, it provided the proving ground for the seminal technology of the Internet, the TCP/IP protocol, which had first been conceived half a decade before. DCA required all the ARPANET nodes to switch over to TCP/IP from the legacy protocol by the start of 1983. Few networks used TCP/IP at that point, but now it would link the two networks of the proto-Internet, allowing message traffic to flow between research sites and defense sites when necessary. To further ensure the long-term viability of TCP/IP for military data networks, Heiden also established a $20 million fund to pay computer manufacturers to write TCP/IP software for their systems (1).
This first step in the gradual transfer of the Internet from the military to private control provides as good an opportunity as any to bid farewell to ARPA and the IPTO. Its funding and influence, under the leadership of J.C.R. Licklider, Ivan Sutherland, and Robert Taylor, had produced, directly or indirectly, almost all of the early developments in interactive computing and networking. The establishment of the TCP/IP standard in the mid-1970s, however, proved to be the last time it played a central role in the history of computing (2). The Vietnam War provided the decisive catalyst for this loss of influence. Most research scientists had embraced the Cold War defense-sponsored research regime as part of a righteous cause to defend democracy. But many who came of age in the 1950s and 1960s lost faith in the military and its aims due to the quagmire in Vietnam. That included Taylor himself, who quit IPTO in 1969, taking his ideas and his connections to Xerox PARC. Likewise, the Democrat-controlled Congress, concerned about the corrupting influence of military money on basic scientific research, passed amendments requiring defense money to be directed to military applications. ARPA reflected this change in funding culture in 1972 by renaming itself DARPA, the Defense Advanced Research Projects Agency. And so the torch passed to the civilian National Science Foundation (NSF). By 1980, the NSF’s roughly $20 million in funding accounted for about half of federal computer science research spending in the U.S. (3). Much of that funding would soon be directed toward a new national computing network, the NSFNET. NSFNET In the early 1980s, Larry Smarr, a physicist at the University of Illinois, visited the Max Planck Institute in Munich, which hosted a Cray supercomputer that it made readily available to European researchers. Frustrated at the lack of equivalent resources for scientists in the U.S., he proposed that the NSF fund a series of supercomputing centers across the country (4). The organization responded to Smarr and other researchers with similar complaints by creating the Office of Advanced Scientific Computing in 1984, which went on to fund five such centers, with a combined five-year budget of $42 million. They stretched from Cornell in the northeast of the country to San Diego in the southwest. In between, Smarr’s own university (Illinois) received its own center, the National Center for Supercomputing Applications (NCSA). But these centers alone would only do so much to improve access to computer power in the U.S. Using the computers would still be difficult for users not local to any of the five sites, likely requiring a semester or summer fellowship to fund a long-term visit. And so NSF decided to also build a computer network. History was repeating itself – making it possible to share powerful computing resources with the research community was exactly what Taylor had in mind when he pushed for the creation of ARPANET back in the late 1960s. The NSF would provide a backbone that would span the continent by linking the core supercomputer sites, then regional nets would connect to those sites to bring access to other universities and academic labs. Here NSF could take advantage of the support for the Internet protocols that Heiden had seeded, by delegating the responsibility of creating those regional networks to local academic communities.
Initially, the NSF delegated the setup and operation of the network to the NCSA at the University of Illinois, the source of the original proposal for a national supercomputer program. The NCSA, in turn, leased the same type of 56 kilobit-per-second lines that ARPANET had used since 1969, and began operating the network in 1986. But traffic quickly flooded those connections (5). Again mirroring the history of ARPANET, it soon became obvious that the primary function of the net would be communications among those with network access, not the sharing of computer hardware among scientists. One can certainly excuse the founders of ARPANET for not knowing that this would happen, but how could the same pattern repeat itself almost two decades later? One possibility is that it’s much easier to justify a seven-figure grant to support the use of eight figures’ worth of computing power, than to justify dedicating the same sums to the apparently frivolous purpose of letting people send email to one another. This is not to say that there was willful deception on the part of the NSF, but that just as the anthropic principle posits that the physical constants of the universe are what they are because otherwise we couldn’t exist to observe them, so no publicly-funded computer network could have existed for me to write about without a somewhat spurious justification. Now convinced that the network itself was at least as valuable as the supercomputers that had justified its existence, NSF called on outside help to upgrade the backbone with 1.5 megabit-per-second T1 lines (6). Merit Network, Inc., won the contract, in conjunction with MCI and IBM, securing $58 million in NSF funding over an initial five-year grant to build and operate the network. MCI provided the communications infrastructure, IBM the computing hardware and software for the routers. Merit, a non-profit that ran a computer network linking the University of Michigan campuses (7), brought experience operating an academic computer network, and gave the whole partnership a collegiate veneer that made it more palatable to NSF and the academics who used NSFNET. Nonetheless, the transfer of operations from NCSA to Merit was a clear first step towards privatization. Traffic flowed through Merit’s backbone from almost a dozen regional networks, from the New York State Education and Research Network (NYSERNet), interconnected at Cornell in Ithaca, to the California Education and Research Federation Network (CERFNet – no relation to Vint), which interconnected at San Diego. Each of these regional networks also internetted with countless local campus networks, as Unix machines appeared by the hundreds in college labs and faculty offices. This federated network of networks became the seed crystal of the modern Internet. ARPANET had connected only well-funded computer researchers at elite academic sites, but by 1990 almost anyone in post-secondary education in the U.S. – faculty or student – could get online. There, via packets bouncing from node to node – across their local Ethernet, up into the regional net, then leaping vast distances at light speed via the NSFNET backbone – they could exchange email or pontificate on Usenet with their counterparts across the country. With far more academic sites now reachable via NSFNET than ARPANET, the DCA decommissioned that now-outmoded network in 1990, fully removing the Department of Defense from involvement in civilian networking.
Takeoff Throughout this entire period, the number of computers on NSFNET and its affiliated networks – which we may now call the Internet (8) – was roughly doubling each year: 28,000 in December 1987, 56,000 in October 1988, 159,000 in October 1989, and so on. It would continue to do so well into the mid-1990s, at which point the rate slowed only slightly (9). The number of networks on the Internet grew at a similar rate – from 170 in July of 1988 to 3500 in the fall of 1991. The academic community being an international one, many of those networks were overseas, starting with connections to France and Canada in 1988. By 1995, the Internet was accessible from nearly 100 countries, from Algeria to Vietnam (10). Though it’s much easier to count the number of machines and networks than the number of actual users, reasonable estimates put that latter figure at 10-20 million by the end of 1994 (11). Any historical explanation for this tremendous growth is challenging to defend in the absence of detailed data about who was using the Internet for what, at what time. A handful of anecdotes can hardly suffice to account for the 350,000 computers added to the Internet between January 1991 and January 1992, or the 600,000 in the year after that, or the 1.1 million in the year after that. Yet I will dare to venture onto this epistemically shaky ground, and assert that three overlapping waves of users account for the explosion of the Internet, each with their own reasons for joining, but all drawn by the inexorable logic of Metcalfe’s Law, which indicates that the value (and thus the attractive force) of a network increases with the square of its number of participants. First came the academic users. The NSF had intentionally spread computing to as many universities as possible. Now every academic wanted to be on board, because that’s where the other academics were. To be unreachable by Internet email, to be unable to see and participate in the latest discussions on Usenet, was to risk missing an important conference announcement, a chance to find a mentor, cutting-edge pre-publication research, and more. Under this pressure to be part of the online academic conversation, universities quickly joined the regional networks that could connect them to the NSFNET backbone. NEARNET, for example, which covered the six states of the New England region, grew to over 200 members by the early 1990s. At the same time, access began to trickle down from faculty and graduate students to the much larger undergraduate population. By 1993, roughly 70% of the freshman class at Harvard had .edu email accounts. By that time the Internet had also become physically ubiquitous at Harvard and its peer institutions, which went to considerable expense to wire Ethernet into not just every academic building, but even the undergrad dormitories (12). It was surely not long before the first student stumbled into their room after a night of excess, slumped into their chair, and laboriously pecked out an electronic message that they would regret in the morning, whether a confession of love or a vindictive harangue. In the next wave, the business users arrived, starting around 1990. As of that year, 1151 .com domains had been registered. The earliest commercial participants came from the research departments of high-tech companies (Bell Labs, Xerox, IBM, and so on). They, in effect, used the network in an academic capacity. Their employers’ business communications went over other networks.
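The two quantitative claims above – host counts roughly doubling every year, and value growing with the square of the number of participants – are easy to make concrete. A minimal sketch in Python (the host counts are the ones cited in the text; the code and its function names are my own illustration, not anything from the sources):

# Internet host counts cited in the text (late 1987 through late 1989).
hosts = {1987: 28_000, 1988: 56_000, 1989: 159_000}

def annual_growth_factor(counts, year):
    """Ratio of one year's host count to the previous year's."""
    return counts[year] / counts[year - 1]

def metcalfe_value(n):
    """Metcalfe's Law proxy: the number of possible pairwise connections."""
    return n * (n - 1) // 2

print(annual_growth_factor(hosts, 1988))   # 2.0  (doubled)
print(annual_growth_factor(hosts, 1989))   # ~2.8 (better than doubled)
print(metcalfe_value(56_000) / metcalfe_value(28_000))   # ~4.0: twice the hosts, four times the "value"

On this crude measure, every doubling of the host count quadruples the number of potential connections, which is the engine behind the successive waves of adopters described in the surrounding paragraphs.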
By 1994, however, over 60,000 .com domain names existed, and the business of making money on the Internet had begun in earnest (13).  As the 1980s waned, computers were becoming a part of everyday life at work and home in the U.S, and the importance of a digital presence to any substantial business became obvious. Email offered easy and extremely fast communication with co-workers, clients, and vendors. Mailing lists and Usenet provided both new ways of keeping up to date with a professional community, and new forms of very cheap advertising to a generally affluent set of users. A wide variety of free databases could be accessed via the Internet – legal, medical, financial, and political. New graduates arriving into the workforce from fully-wired campuses also became proselytes for the Internet at their employers. It offered access to a much larger set of users than any single commercial service (Metcalfe’s Law again), and once you paid a monthly fee for access to the net, almost everything else was free, unlike the marginal hourly and per-message fees charged by CompuServe and its equivalents. Early entrants to the Internet marketplace included mail-order software companies like The Corner Store of Litchfield, Connecticut, which advertised in Usenet discussion groups, and The Online Bookstore, an electronic books seller founded over a decade before the Kindle by a former editor at Little, Brown (14). Finally came the third wave of growth, the arrival of ordinary consumers, who began to access the Internet in large numbers in the mid-1990s. By this point Metcalfe’s Law was operating in overdrive. Increasingly, to be online meant to be on the Internet. Unable to afford T1 lines to their homes, consumers almost always accessed the Internet over a dial-up modem. We have already seen part of that story, with the gradual transformation of commercial BBSes into commercial Internet Service Providers (ISPs). This change benefited both the users (whose digital swimming pool suddenly grew into an ocean) and the BBS itself, which could run a much simpler business as an intermediary between the phone system and a T1 on-ramp to the Internet, without maintaining their own services. Larger online services followed a similar pattern. By 1993, all of the major national-scale services in the U.S. – Prodigy, CompuServe, GEnie and upstart America Online (AOL) – offered their 3.5 million combined subscribers the ability to send email to Internet addresses. Only laggard Delphi (with less than 100,000 subscribers), however, offered full Internet access (15). Over the next few years, though, the value of access to the Internet – which continued to grow exponentially – rapidly outstripped that of accessing the services’ native forums, games, shopping and other content. 1996 was the tipping point – by October of that year, 73% of those online reported having used the World Wide Web, compared to just 21% a year earlier (16). The new term “portal” was coined, to describe the vestigial residue of content provided by AOL, Prodigy, and others, to which people subscribed mainly to get access to the Internet.  The Secret Sauce We have seen, then, something of how the Internet grew so explosively, but not quite enough to explain why. Why, in particular, did it become so dominant in the face of so much prior art, so many other services that were striving for growth during the era of fragmentation that preceded it? Government subsidy helped, of course. 
The funding of the backbone aside, when NSF chose to invest seriously in networking as a concern independent of its supercomputing program, it went all in. The principal leaders of the NSFNET program, Steve Wolff and Jane Caviness, decided that they were building not just a supercomputer network, but a new information infrastructure for American colleges and universities. To this end, they set up the Connections program, which offset part of the cost for universities to get onto the regional nets, on the condition that they provide widespread access to the network on their campus. This accelerated the spread of the Internet both directly and indirectly. Indirectly, since many of those regional nets then spun off for-profit enterprises using this same subsidized infrastructure to sell Internet access to businesses. But Minitel had subsidies, too. The most distinctive characteristic of the Internet, however, was its layered, decentralized architecture, and the flexibility that came with it. IP allowed networks of a totally different physical character to share the same addressing system, and TCP ensured that packets were delivered to their destination. And that was all. Keeping the core operations of the network simple allowed virtually any application to be built atop it. Most importantly, any user could contribute new functionality, as long as they could get others to run their software. For example, file transfer (FTP) was among the most common uses of the early Internet, but it was very hard to find servers that offered files of interest for download except by word-of-mouth. So enterprising users built a variety of tools to catalog and index the net’s resources, such as Archie, which indexed FTP archives, and the menu-driven Gopher with its companion index, Veronica. The OSI stack also had this flexibility, in theory, and the official imprimatur of international organizations and telecommunications giants as the anointed internetworking standard. But possession is nine-tenths of the law, and TCP/IP held the field, with the decisive advantage of running code on thousands, and then millions, of machines. The devolution of control over the application layer to the edges of the network had another important implication. It meant that large organizations, used to controlling their own bailiwick, could be comfortable there. Businesses could set up their own mail servers and send and receive email without all the content of those emails sitting on someone else’s computer. They could establish their own domain names, and set up their own websites, accessible to everyone on the net, but still entirely within their own control. The World Wide Web – ah – that was the most striking example, of course, of the effects of layering and decentralized control. For two decades, systems from the time-sharing services of the 1960s through to the likes of CompuServe and Minitel had revolved around a handful of core communications services – email, forums, and real-time chat. But the Web was something new under the sun. The early years of the web, when it consisted entirely of bespoke, handcrafted pages, were nothing like its current incarnation. Yet bouncing around from link to link was already strangely addictive – and it provided a phenomenally cheap advertising and customer support medium for businesses. None of the architects of the Internet had planned for the Web. It was the brainchild of Tim Berners-Lee, a British engineer at the European Organization for Nuclear Research (CERN), who created it in 1990 to help disseminate information among the researchers at the lab.
Yet it could easily rest atop TCP/IP, re-using the domain-name system, created for other purposes, for its now-ubiquitous URLs. Anyone with access to the Internet could put up a site, and by the mid-1990s it seemed everyone had – city governments, local newspapers, small businesses, and hobbyists of every stripe. Privatization In this telling of the story of the Internet’s growth, I have elided some important events, and perhaps left you with some pressing questions. Notably, how did businesses and consumers get access to an Internet centered on NSFNET in the first place – to a network funded by the U.S. government, and ostensibly intended to serve the academic research community? To answer this, the next installment will revisit some important events which I have quietly passed over, events which gradually but inexorably transformed a public, academic Internet into a private, commercial one. Further Reading Janet Abbate, Inventing the Internet (1999) Karen D. Fraser, “NSFNET: A Partnership for High-Speed Networking, Final Report” (1996) John S. Quarterman, The Matrix (1990) Peter H. Salus, Casting the Net (1995) Footnotes Note: The latest version of the WordPress editor appears to have broken markdown-based footnotes, so these are manually added, without links. My apologies for the inconvenience. Abbate, Inventing the Internet, 143. The next time DARPA would initiate a pivotal computing project was with the Grand Challenges for autonomous vehicles of 2004-2005. The most famous project in between, the billion-dollar AI-based Strategic Computing Initiative of the 1980s, produced a few useful applications for the military, but no core advances applicable to the civilian world. “1980 National Science Foundation Authorization, Hearings Before the Subcommittee on Science, Researce [sic] and Technology of the Committee on Science and Technology”, 1979. Smarr, “The Supercomputer Famine in American Universities” (1982) A snapshot of what this first iteration of NSFNET was like can be found in David L. Mills, “The NSFNET Backbone Network” (1987) The T1 connection standard, established by AT&T in the 1960s, was designed to carry twenty-four telephone calls, each digitally encoded at 64 kilobits-per-second. MERIT originally stood for Michigan Educational Research Information Triad. The state of Michigan pitched in $5 million of its own to help its homegrown T1 network get off the ground. Of course, the name and concept of the Internet predate the NSFNET. The Internet Protocol dates to 1974, and there were networks connected by IP prior to NSFNET. ARPANET and MILNET we have already mentioned. But I have not been able to find any reference to “the Internet” – a single, all-encompassing, world-spanning network of networks – prior to the advent of the three-tiered NSFNET. See this data. Given this trend, how could Quarterman fail to see that the Internet was destined to dominate the world? If the recent epidemic has taught us anything, it is that exponential growth is extremely hard for the human mind to grasp, as it accords with nothing in our ordinary experience. These figures come from Karen D. Fraser, “NSFNET: A Partnership for High-Speed Networking, Final Report” (1996). See Salus, Casting the Net, 220-221. Mai-Linh Ton, “Harvard, Connected: The Houses Got Internet,” The Harvard Crimson, May 22, 2017. IAPS, “The Internet in 1990: Domain Registration, E-mail and Networks;” RFC 1462, “What is the Internet;” Resnick and Taylor, The Internet Business Guide, 220.
Resnick and Taylor, The Internet Business Guide, xxxi-xxxiv. Pages 300-302 lay out the pros and cons of the Internet and commercial online services for small businesses. Statistics from Rosalind Resnick, Exploring the World of Online Services (1993). Pew Research Center, “Online Use,” December 16, 1996.

The Electronic Computers, Part 1: Prologue

As we saw in the last installment, the search by radio and telephone engineers for more powerful amplifiers opened a new technological vista that quickly acquired the name electronics. An electronic amplifier could easily be converted into a digital switch, but one with vastly greater speed than its electro-mechanical cousin, the telephone relay. Due to its lack of mechanical parts, a vacuum tube could switch on or off in a microsecond or less, rather than the ten milliseconds or more required by a relay. Between 1939 and 1945, three computers were built on the basis of these new electronic components. It is no coincidence that the dates of construction of these machines lie neatly within the period of the Second World War. This conflict — unparalleled in history in the degree to which it yoked entire peoples, body and mind, to the chariot of war — permanently transformed the relationship between states on the one hand, and science and technology on the other, and brought forth a vast array of new devices. The story of each of these first three electronic computers is entangled with that of the war. One, dedicated to the decryption of German communications, remained shrouded in secrecy until the 1970s, when it was far too late to be of any but historical interest. Another, a machine most readers will have heard of, was ENIAC, a military calculator finished too late to aid the war effort. But here we will consider the earliest of the three machines, the brainchild of one John Vincent Atanasoff. Atanasoff In 1930, John Atanasoff, the American-born son of an immigrant from Ottoman Bulgaria, finally achieved his youthful dream of becoming a theoretical physicist. As with most such dreams, however, the reality was not all that might be hoped for. In particular, like most students in engineering and the physical sciences in the first half of the twentieth century, Atanasoff had to endure the grinding burden of constant calculation. His dissertation at the University of Wisconsin, on the polarization of helium, required eight tedious weeks of computation to complete, with the aid of a mechanical desktop calculator. John Atanasoff as a young man By 1935, now well settled in as a professor at Iowa State University, Atanasoff decided to do something about that burden. He began to consider all the possible ways he could build a new, more powerful kind of computing machine. After rejecting analog methods (like the MIT differential analyzer) as too limited and imprecise, he eventually decided he would build a digital machine, one that represented numbers as discrete values rather than continuous measurements. He was familiar with base-two arithmetic from his youth, and saw that it mapped much more neatly onto the on-off structure of a digital switch than the familiar decimal numbers did. So he also decided to make a binary machine. Finally, he decided that in order to be as fast and flexible as possible, his machine would need to be electronic, using vacuum tubes to perform calculations. Atanasoff also needed to decide on a problem space – what exactly would his computer be designed to do? He eventually decided that it would solve systems of linear equations, by reducing them steadily down to equations of a single variable (using an algorithm called Gaussian elimination) – the same kind of calculation that had dominated his dissertation work. It would support up to thirty equations, each with up to thirty variables.
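Since the ABC’s whole design was organized around this one procedure, it is worth seeing Gaussian elimination in miniature. Here is a minimal sketch in modern Python (my own illustration of the textbook algorithm, not a reconstruction of the ABC’s own step-by-step procedure):

def solve_linear_system(a, b):
    """Solve a system A*x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    # Forward elimination: reduce the system to upper-triangular form.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        b[col], b[pivot] = b[pivot], b[col]
        for row in range(col + 1, n):
            factor = a[row][col] / a[col][col]
            for k in range(col, n):
                a[row][k] -= factor * a[col][k]
            b[row] -= factor * b[col]
    # Back substitution: recover the variables one at a time.
    x = [0.0] * n
    for row in reversed(range(n)):
        s = sum(a[row][k] * x[k] for k in range(row + 1, n))
        x[row] = (b[row] - s) / a[row][row]
    return x

# A toy example: x + y = 3 and 2x - y = 0 give x = 1, y = 2.
print(solve_linear_system([[1.0, 1.0], [2.0, -1.0]], [3.0, 0.0]))

A thirty-by-thirty system of this kind requires on the order of ten thousand multiplications and divisions – enough to make the appeal of automating the drudgery obvious.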
Such a computer could solve problems of importance to scientists and engineers, while not, it seemed, being impossibly complex to design. The State of the Art By the mid-1930s, electronic technology had diversified tremendously from its origins some twenty-five years earlier, and two developments were especially relevant to Atanasoff’s project: the trigger relay and the electronic counter. Since the nineteenth century, telegraph and telephone engineers had had access to a handy little contrivance called the latch. A latch is a bistable relay that uses permanent magnets to hold it in whatever state you left it – open or closed – until it receives an electric signal to switch states. But vacuum tubes could not do this. They had no mechanical component, and were only ‘closed’ or ‘open’ insofar as electricity was or was not currently flowing through them. In 1918, however, two British physicists, William Eccles and Frank Jordan, wired together two tubes in such a way as to create what they called a “trigger relay” – an electronic relay that would stay on indefinitely once triggered by an initial impulse. Eccles and Jordan created their new circuit for telecommunications purposes, on behalf of the British Admiralty at the tail end of the Great War. But the Eccles-Jordan circuit (later known as a flip-flop) could also be seen as storing a binary digit – a 1 when transmitting a signal, otherwise a 0. Thus n flip-flops could represent an n-digit binary number. About a decade after the flip-flop came the second major advance in electronics that impinged on the world of computing: electronic counters. Once again, as so often in the early history of computing, tedium was the mother of invention. Physicists studying the radiation of sub-atomic particles found themselves having to either listen to clicks or stare at photographic records for hours on end in order to measure the particle emission rate of various substances by counting detection events. Mechanical or electro-mechanical counters held out the tantalizing possibility of relief, but moved too slowly: they could not possibly register multiple events that occurred, say, a millisecond apart. The pivotal figure in solving this problem was Eryl Wynn-Williams, who worked under Ernest Rutherford at the Cavendish Laboratory in Cambridge. Wynn-Williams was handy with electronics, and had already used tubes (valves, in British parlance) to build amplifiers for making particle events audible. In the early 1930s, he realized that valves could also be used to create what he called a “scale-of-two” counter; that is to say, a binary counter. It was, in essence, a series of flip-flops, with the added ability to ripple carries upward through the chain.1 Wynn-Williams’ counter quickly became part of the essential laboratory apparatus for anyone involved in nuclear physics. Physicists built very small counters, with perhaps three binary digits (i.e., able to count up to seven). That was sufficient to buffer a slower mechanical counter, capturing the closely spaced events that the slower-moving mechanical parts alone would miss.2 But in theory such counters could be extended to capture numbers of arbitrary size or precision. They were, strictly speaking, the first digital, electronic computing machines. The Atanasoff-Berry Computer Atanasoff was familiar with all this history, and it helped to convince him of the feasibility of an electronic computer. But he would not end up directly using either scale-of-two counters or flip-flops in his design.
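The “scale-of-two” idea is easy to mimic in software: each flip-flop stores one bit, and a stage that toggles from 1 back to 0 passes a carry on to the next. A toy model in Python (mine, meant only to illustrate the ripple-carry behavior, not Wynn-Williams’ actual circuit):

class RippleCounter:
    """A chain of simulated flip-flops that counts incoming pulses in binary."""
    def __init__(self, stages):
        self.bits = [0] * stages   # bits[0] is the least significant flip-flop

    def pulse(self):
        # Each stage toggles; only a 1 -> 0 toggle carries into the next stage.
        for i in range(len(self.bits)):
            self.bits[i] ^= 1
            if self.bits[i] == 1:   # toggled 0 -> 1: no carry, stop rippling
                break

    def value(self):
        return sum(bit << i for i, bit in enumerate(self.bits))

counter = RippleCounter(stages=3)   # three flip-flops count from 0 to 7, then wrap
for _ in range(5):
    counter.pulse()
print(counter.value())   # 5

Three stages counting to seven is exactly the scale the physicists needed to buffer their slower mechanical registers, as described above.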
He at first tried using slightly modified counters as the basis for the arithmetic in his machine – for what is addition but repeated counting? But for reasons that are somewhat obscure he could not make the counting circuits reliable, and had to devise his own add-subtract circuitry. The use of flip-flops for short-term storage of binary numbers was out of the question for him due to his limited budget and his ambition to handle thirty coefficients at a time. This had serious consequences, as we shall see momentarily. By 1939, Atanasoff had completed the design of his computer. He now needed someone with suitable expertise to help him build it. He found such a person in a graduate student in Iowa State’s electrical engineering department named Clifford Berry. By the end of the year, Atanasoff and Berry had a small-scale prototype working. The following year they completed the full thirty-coefficient computer. In the 1960s, a writer who unearthed their story dubbed this machine the Atanasoff-Berry Computer (ABC), and that name has stuck. However, all the kinks were not quite worked out. In particular, the ABC had a fault in roughly one binary digit per 10,000, which would be fatal to any very large computation. Clifford Berry with the ABC in 1942 Nonetheless, here in Atanasoff and his ABC, one might locate the root and source of all modern computing. Did he not create (with Berry as his able assistant), the first binary, electronic, digital machine? Are these not the essential characteristics of the millions, nay, billions, of devices that now shape and dominate economies, societies, and cultures across the world? So the argument runs.3 But let us step back a moment. The adjectives digital and binary are not special to the ABC.  For example, the Bell Complex Number Computer (CNC), developed at around the same time, was a binary, digital, electromechanical computer that could do arithmetic in the complex plane. The ABC and CNC were also similar (and dissimilar from any modern computer) insofar as both solved problems only within a limited domain, and could not take an arbitrary sequence of instructions. This leaves electronic. But, although its mathematical innards were indeed electronic, the ABC ran at electro-mechanical speeds. Because it was not financially feasible for Atanasoff and Berry to use vacuum tubes to store thousands of binary digits, they instead used electro-mechanical components to do so. The few hundred triodes which performed the core mathematical operations of the ABC were surrounded by spinning drums and whirring punch-card machines to store intermediate values between each computation step. Atanasoff and Berry did heroic work to read and write punched cards at tremendous speed by scorching them electrically rather than actually punching them. But this caused problems of its own: it was the punch-card apparatus that was responsible for the 1 in 10,000 error rate. Moreover, even with their best efforts, the machine could “punch” no faster than one line per second, and thus the ABC could perform only one full computation cycle per second in each of its thirty arithmetic units. For the rest of that second, the vacuum tubes sat idly tapping their fingers while the machinery churned with agonizing slowness around them. Atanasoff and Berry had harnessed a proud thoroughbred to a hay wagon.4 Schematic of the ABC. The drums stored short-term inputs and outputs in capacitors. 
The “thyratron punching circuit” and card reader wrote and read the results of a complete reduction step (eliminating a variable from the system of equations). Work on the ABC halted by the middle of 1942, when Atanasoff and Berry were drawn into the rapidly growing American war machine, which required minds as well as bodies. Atanasoff was called to the Naval Ordnance Laboratory in Washington to lead a team developing acoustic mines. Berry married Atanasoff’s secretary and found a position at a defense contractor in California, ensuring that he would not be drafted. Atanasoff prodded Iowa State for a time to patent his creation, but to no avail. He moved on to other things after the war, and never worked seriously on computers again. The computer itself was junked in 1948 to make room for the office of a new graduate student. Perhaps Atanasoff simply began his work too early. Relying on modest grants from the university, he was able to spend only a few thousand dollars on the ABC’s construction, and therefore frugality trumped all other concerns in his design. Had he waited until the early 1940s, he might have secured a government grant for a fully electronic device. As it was – limited in application, difficult to operate, not very reliable, and not all that fast – the ABC was not a promising advertisement for the usefulness of electronic computing. The American war machine, despite its hunger for computational labor, left the ABC to rust in Ames, Iowa.5 Computing Engines at War The First World War primed the pump for a massive investment in science and technology at the outset of the Second. A few short years had seen the practice of war on land and sea transformed by poison gas, tanks, magnetic mines, aerial reconnaissance and bombardment, and more. No political or military leader could fail to notice such a rapid transformation – rapid enough that a research investment seeded at the onset of hostilities could give rise to new instruments of war in time to turn the tide in one’s favor. The United States, rich in material and minds (many of them refugees from Hitler’s Europe), and standing aloof from the immediate struggle for survival and dominance faced by other nations, was able to take this lesson especially to heart. This manifested most obviously in the marshaling of tremendous industrial and intellectual resources to create the first atomic weapons. Less well-known but no less expensive or important was a massive investment in radar technology, centered especially at the MIT “Rad Lab.” Likewise, the nascent field of automatic computing received its own windfall of wartime funding, although on a much smaller scale. We have already had occasion to notice the variety of electro-mechanical computing projects spurred by the war effort. Relay computers were a known quantity, relatively speaking, since telephone switching circuits with thousands of relays had been operating for years. Electronic components, on the other hand, had not yet been proven to work at that scale. Most experts believed that an electronic computer would be fatally unreliable (the ABC being a case in point) or take too long to build to be useful to the war. Despite the sudden availability of government money, therefore, wartime electronic computing projects were few and far between. Just three were initiated, only two of which resulted in a working machine.
In Germany, telecommunications engineer Helmut Schreyer impressed upon his friend Konrad Zuse the value of an electronic machine, as opposed to the electromechanical “V3” that Zuse was building for the aircraft industry (later known as the Z3). Zuse eventually agreed to take on this secondary project with Schreyer, and the Research Institute for Aviation offered the funding for a 100-tube prototype in late 1941. But, preempted by higher-priority war work, and later slowed by frequent air raid damage, the men never got the machine to work reliably.6 Zuse and Schreyer working on an electromechanical computer. Meanwhile, the first electronic computer to do useful work was built at a secret facility in Britain, where a telecommunications engineer proposed a radical new approach to cryptanalysis based on valves. We will pick up with that story next time. Further Reading Alice R. Burks and Arthur W. Burks, The First Electronic Computer: The Atanasoff Story (1988) David Ritchie, The Computer Pioneers (1986) Jane Smiley, The Man Who Invented the Computer (2010)

The Switch: Introduction

The history of nearly any technology, when examined closely, is a complex braid. What might appear on the surface to be a single ‘invention’ is revealed to be a series of often unrelated ideas and motivations, recombinations and repurposings, that coalesce at last, after decades, into something that we dub the sewing machine, or the telephone. To take just one strand of one story, consider the airplane: the Wright Flyer only became possible because of new, powerful and compact engines built for the emerging automotive industry. And these engines themselves originated in a desire to make small-scale industrial power sources for craftsmen, for whom a large, expensive, and temperamental steam engine was not a practical option. Perhaps no technological domain better exemplifies this fact than computing. A myriad of reasons drove many different people over several centuries to try their hand at automatic computation: to construct mathematical and astronomical tables, to solve complex systems of differential equations for engineering projects, to calculate the course of an artillery shell, to count and categorize populations, to understand the essence of logical thought, etc. The devices they brought to bear on the problem were equally diverse: systems of intermeshed gears of a complexity that boggles the eye, pins dropping into wells of mercury, spinning disks, clacking electromagnetic relays, even tubes and tanks and pistons filled with water. Most of those approaches, however, have long since fallen by the wayside. It would be the electronic switch that proved the most successful general-purpose solution to the problem of computing, the solution which lies at the heart of all modern computers. And the development of that switch came almost exclusively from people who were not looking to compute at all, but rather were looking to communicate. Consider London, in the middle of the nineteenth century. At 1 Dorset Street was a large house that served as both home and workshop for Charles Babbage, mathematician, economist, and inventor. For decades he tinkered with computing machines based on mechanical parts: gears, driveshafts, cams, and so forth. Frustrated at his lack of progress and the increasing distraction from street noise as his neighborhood turned from quiet backwater to developed urban center, he began investing much of his energy into a campaign against the growing plague of street musicians. When he died, the few fragments of his great dream, the analytical engine, lay gathering dust in his workshop: a curiosity to many, an absurdity to some. An impossibility, perhaps. Meanwhile, just a mile to the west, the first commercial system for communication by electricity had opened, carrying information along the Great Western Railway between Paddington Station and West Drayton (near today’s Heathrow Airport). It was the start of a new sector of industry and technology known as telecommunications. That sector, in its turn, gave rise to multiple waves of computing technology, based on electricity rather than mechanism. It would nurture and sustain Babbage’s successors for a century.

Inter-Networking

In their 1968 paper, “The Computer as a Communications Device,” written while the ARPANET was still in development, J.C.R. Licklider and Robert Taylor claimed that the linking of computers would not stop with individual networks. Such networks, they predicted, would merge into a “labile network of networks” that would bind a variety of “information processing and storage facilities” into an interconnected whole. Within less than a decade, such formerly theoretical speculations had already acquired an immediate practical interest. Because by the mid-1970s, computer networks were proliferating. Networks Proliferate They were proliferating across a variety of new media, institutions, and places. ALOHAnet was one of several new academic networks funded by ARPA in the early 1970s – the others being the PRNET, which connected mobile trucks with packet radio, and the satellite-based SATNET. Along similar lines, other countries, especially the U.K. and France, were developing their own research networks. Local networks, because of their smaller scale and lower cost, were multiplying even more quickly. Other than Xerox PARC’s Ethernet,  one could also find the Octopus at Lawrence Radiation Laboratory in Berkeley, California; the Ring at the University of Cambridge; and the Mark II network at the British National Physical Laboratory. Around the same time, businesses also began to offer fee-based access to privately-funded packet networks. This enabled a new, national marketplace for on-line computer services. In the 1960s, various companies had launched businesses offering access to specialized databases (for legal or financial data) or to time-shared computers, to anyone with a terminal. But these were prohibitively expensive to access cross-country via the regular telephone network, which made it hard for such services to expand beyond local markets. A few larger services firms (Tymshare, for example) built their own internal networks, but commercial packet networks brought the costs down to a reasonable level for users of smaller services. The first such network came about via a defection of ARPANET experts. In 1972, several employees left Bolt, Beranek, and Newman (BBN), the company in charge of ARPANET’s construction and operation, to form Packet Communications, Inc. Though that company ultimately failed, the sudden shock catalyzed BBN to form its own private network, called Telenet. With Larry Roberts, the architect of ARPANET, at its helm, Telenet operated successfully for five years before being acquired by GTE. Given this explosion of network diversity, how could Licklider and Taylor’s vision of single unified system ever come about? Even were it organizationally feasible to simply merge all of these systems into ARPANET – which of course it was not – the incompatibilities among their protocols would have made it a technical impossibility. Yet ultimately these many heterogeneous networks (and their descendants) did interlink, into a confederated communication system that we know as the Internet. It began not with any grand, global plan, but with an obscure research project run by a middle-ranking ARPA manager named Robert Kahn. Bob Kahn’s Problem Kahn completed a Ph.D. thesis on electronic signal processing at Princeton in 1964, in between rounds of golf on the links adjacent to the Graduate College. 
After a brief stint as a professor at MIT, he took a position nearby at BBN, initially intending a short leave of absence to immerse himself in industry and learn how practical men decided which research problems were worthy of investigation. His pursuits at BBN, fortuitously, included research into the possible behavior of computer networks, for it was just a short time later that BBN received the bid request for ARPANET. Kahn became absorbed in the project, providing the bulk of the design for the system’s network architecture. Kahn’s profile photo from a 1974 paper His short leave of absence became a six-year stint, with Kahn serving as the networking expert at BBN for the duration of the ramp up of ARPANET into its fully operational state. By 1972, however, he was tired of the topic, and, more importantly, tired of being buffeted by the constant politicking and jostling for advantage among the BBN division heads. So he accepted an offer from Larry Roberts (before Roberts himself had left for Telenet) to become a program manager at ARPA, heading a research program to develop automated manufacturing technology, with potentially hundreds of millions in funding at his command. He washed his hands of ARPANET and set off south for a clean start in a green field. Then, within months of his arrival in Washington D.C., Congress quashed the automated manufacturing project. Kahn wanted to pack up and return to Cambridge immediately, but Roberts convinced him to stay on to help develop new networking projects for ARPA. And so Kahn, unable to escape the bonds of his own expertise, found himself managing PRNET, a packet radio network intended to bring the benefits of packet-switched networks to the operational military in the field. The PRNET project, launched under the auspices of Stanford Research Institute (SRI), was intended to extend the basic technical kernel of packet broadcasting from ALOHANET to support repeaters and multiple stations, including mobile vans. However, it was obvious to Kahn early on that the network by itself would be sorely lacking in utility, for it was a computer network with scarcely any computers. When it became operational in 1975, it consisted of one computer at SRI and four repeater stations positioned around the San Francisco Bay. Mobile field stations could not economically support the size and power requirements of a 1970s mainframe. All of the significant computing resources available resided in the ARPANET, which used a totally different set of protocols and had no way of interpreting a message broadcast on PRNET. How, he began to wonder, could his infant network be interlinked with its far more mature cousin? Kahn turned to an old acquaintance from the early ARPANET days for help in crafting the answer. VintonCerf had gotten interested in computers as a math undergraduate at Stanford, and decided to go back to grad school in computer science at UCLA after a couple years at IBM’s Los Angeles office. He arrived in 1967, and, with his old high school friend Steve Crocker, joined Len Kleinrock’s Network Measurement Center, the UCLA branch of ARPANET. There he and Crocker became experts in protocol design, as leading voices in the Network Working Group, which developed both the base Network Control Program (NCP) for sending messages on ARPANET and the higher level file transfer and remote login protocols. 
Cerf’s profile photo as a Stanford professor, from a 1974 paper
Cerf met Kahn in early 1970, when the latter flew out to UCLA from BBN to put the network through its paces with some load testing. He generated congestion in the network with the help of software built by Cerf for generating artificial traffic. As Kahn had expected, the network collapsed under the stress, and he recommended changes to improve congestion control. In the ensuing years, Cerf continued on with what looked like a promising academic career. Around the same time that Kahn decamped from BBN for Washington D.C., Cerf traveled up the opposite coast, to take up an assistant professorship at Stanford. Kahn knew a lot about computer networks, but had no experience with the details of protocol design – he was a signals processing guy, not a computer scientist. He knew Cerf would be perfect to supply those skills, which would be crucial to any attempt to link ARPANET and PRNET. Kahn reached out to him about inter-networking, and they met several times throughout 1973 before holing up at the Cabana Hyatt in Palo Alto to produce their seminal paper, “A Protocol for Packet Network Intercommunication,” published in the May 1974 issue of IEEE Transactions on Communications. It presented the design for a Transmission Control Program (TCP) – the P would later come to stand for Protocol – the cornerstone for the software of the modern Internet.
Outside Influences
No two people or single moment are more closely identified with the invention of the Internet than Cerf and Kahn and this 1974 paper. Yet the creation of the Internet was not truly an event that happened at a point in time, but a process that unfolded over years of development. The initial protocol described in Cerf and Kahn’s 1974 paper was tweaked and revised numerous times over the ensuing years. Not until 1977 was the first cross-network link tested; the protocol was not split into two layers – the now-ubiquitous TCP and IP – until 1978; and ARPANET did not adopt it for its own use until 1982.1 The participants in that process of invention extended well beyond the two most well-known principals. In the early years, an organization called the International Packet Network Working Group (INWG) served as the main venue for their collaboration. ARPANET debuted to the wider technical world in October 1972, at the first International Conference on Computer Communications, amid the swooping curves of the modernist Washington Hilton. In addition to Americans like Cerf and Kahn, several prominent European network experts attended, among them Louis Pouzin of France and Donald Davies from the U.K. At the instigation of Larry Roberts, they decided to form an international working group to discuss packet-switching systems and protocols, modeled on the Network Working Group that established the protocols for ARPANET. Cerf, a newly minted Stanford professor, agreed to serve as chair. One of the first topics that this new International NWG took up was the problem of inter-networking. Among the important early contributors to this discussion was Robert Metcalfe, whom we previously met as the architect of Xerox PARC’s Ethernet. Though Metcalfe could not say so to any of his INWG colleagues, by the time of the publication of Cerf and Kahn’s paper, he and his co-workers at Xerox were already well underway with the design of their own internet protocol, the PARC Universal Packet, or PUP. The need for an internet at Xerox became pressing as soon as the Alto/Ethernet network became a success.
PARC had another local network, of Data General Nova minicomputers, and there was ARPANET, of course. Looking further into the future, the PARC leadership foresaw that every Xerox site would need its own Ethernet, and these would need to be connected in some fashion (probably via Xerox’s own internal ARPANET equivalent). To enable it to masquerade as an ordinary message, the PUP packet nestled within the outer packet of whatever host network it was travelling across – the PARC Ethernet, say. When the packet reached the gateway computer between Ethernet and another net (e.g., ARPANET), that computer would unwrap the PUP packet, read its address, and re-wrap it in an ARPANET packet with the appropriate headers to send it onward to its destination. Though Metcalfe could not directly disclose what Xerox was up to, the practical experience he had acquired there inevitably trickled back into INWG discussions, in filtered form. Evidence of his influence survives in the fact that Cerf and Kahn’s 1974 paper recognizes his contribution, and Metcalfe would later show a glimmer of resentment that he did not rate the recognition of co-authorship2. PUP likely affected the design of the modern Internet again later in the 1970s, when Jon Postel instigated the decision to split TCP and IP in order to avoid having to run the intricate TCP protocol on the gateways between networks. IP (Internet Protocol) was a simplified addressing protocol with none of TCP’s complex logic for ensuring the delivery of every bit. The Xerox networking protocol – by then publicly known and rechristened as Xerox Network Systems (XNS) – had already made the same division. Another source of influence on the early internet protocols came from Europe, especially from a network developed in the early 1970s as an offshoot of Plan Calcul, a program set in motion by Charles de Gaulle to nurture a native French computing industry. De Gaulle had long been concerned about America’s growing political, commercial, financial and cultural dominance of Western Europe. He aimed to re-establish France as an independent world power, rather than a pawn in the great Cold War game between the U.S. and the Soviet Union. Two events in the 1960s particularly threatened that independence with respect to the computer industry. First, the United States refused export licenses on its most powerful computers, which France intended to use to aid in the design of its own hydrogen bomb. Second, an American company, General Electric, became the majority owner of France’s only major manufacturer of computing machinery – Compagnie des Machines Bull3 – and then shortly thereafter discontinued several major Bull product lines. Hence the Plan Calcul, to ensure that France could provide for its own computing needs. To oversee Plan Calcul, De Gaulle created the délégation à l’informatique (roughly translated, the “delegation on computing”), reporting directly to his Prime Minister. In early 1971, that delegation selected an engineer by the name of Louis Pouzin to oversee the creation of a French ARPANET. The delegation believed that packet networks would play a crucial role in computing in the coming years, and so native technical expertise in that field would be essential to Plan Calcul’s success.
Pouzin at a conference in 1976.
Pouzin, a graduate of the École Polytechnique, the premier engineering school for all of France, had worked as a young engineer for France’s national telephone equipment manufacturer, and then moved to Bull.
There he convinced his employers that they needed to know more about the cutting-edge work happening in the United States. So he spent two-and-a-half years while still a Bull employee, from 1963 to 1965, helping to build the Compatible Time-Sharing System (CTSS) at MIT. This experience made him the foremost expert on time-shared, interactive computing in all of France – likely on the entire European continent.
Architecture of the Cyclades network
Pouzin called the network he was tasked to build Cyclades, after a constellation of Greek islands in the Aegean Sea. Like its namesake, each computer on the network was, to a large extent, an island entire of itself. For Cyclades’ primary contribution to networking technology was the concept of a datagram, the simplest possible variety of packet communication. The idea consisted of two complementary parts:
Datagrams are independent: Unlike the data in a telephone call or an ARPANET message, each datagram can be processed independently. There is no reliance on any prior messages, whether based on ordering or some protocol for establishing a connection (e.g. dialing a phone number).
Datagrams are host-to-host: All responsibility for ensuring a message is sent reliably to a destination rests with the sender and receiver, not with the network, which is merely a “dumb” pipe.
The datagram concept was anathema to Pouzin’s peers at the French post, telegraph and telephone authority (PTT), which was building its own network in the 1970s based on telephone-like circuit connections and terminal-to-computer (rather than computer-to-computer) communication, under the supervision of another Polytechnique grad, Rémi Després. Culturally, the idea of giving up on reliability within the network was repellent to the PTT mindset, molded by decades of experience in trying to make the telephone and telegraph systems as robust as possible. Economically and politically, meanwhile, the idea of surrendering control of all applications and services to host computers at the periphery of the network threatened to make the PTT into nothing but a fungible commodity. Nothing works better to deeply entrench one’s opinions than firm resistance to them, however, and so the needling presence of the PTT’s virtual circuits only helped to confirm Pouzin in the correctness of his datagram, host-to-host approach to protocols. Pouzin and his fellow Cyclades engineers participated actively in INWG and the various conferences where the ideas behind TCP were hashed out, and they were not shy about putting forth their opinions on how a network of networks should function. Like Metcalfe, both Pouzin and his colleague Hubert Zimmerman earned mentions in the 1974 TCP paper, and at least one other colleague, an engineer by the name of Gerard Le Lann, also helped Cerf with hashing out the protocols. Cerf later recalled that “the sliding window flow control for TCP came straight out of discussions with Louis Pouzin and his people… I remember Bob Metcalfe and Le Lann and I sort of lying down [on the floor] of the living room in my house in Palo Alto on this giant piece of paper, trying to sketch what the state diagrams were for these protocols.”4 The datagram concept mapped neatly onto the behavior of broadcast networks like Ethernet and ALOHANET, which sent their messages willy-nilly into a noisy, uncaring ether (in contrast to the more telephonic ARPANET, which required in-order delivery between IMPs across a reliable AT&T line to function).
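To make the datagram idea concrete, here is a minimal sketch in Python – purely illustrative, drawn from neither Cyclades nor any TCP implementation, with names like Datagram, dumb_network, and send_reliably invented for the example – of the two properties just described: each packet carries everything needed to handle it on its own, and reliability is the business of the hosts at either end, not of the network in between.

```python
# A toy illustration (not Cyclades or TCP code) of the datagram idea:
# packets are self-contained, the network is a dumb pipe that may lose them,
# and reliability is implemented by the sending and receiving hosts.
import random
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Datagram:
    src: str        # sending host
    dst: str        # destination host
    seq: int        # sequence number, meaningful only to the two hosts
    payload: bytes  # the data being carried

def dumb_network(dgram: Datagram) -> Optional[Datagram]:
    """Forward a single, self-contained packet; silently drop it now and then."""
    return None if random.random() < 0.2 else dgram

def send_reliably(chunks: list, deliver: Callable[[Datagram], bool]) -> None:
    """Host-to-host reliability: the sender retransmits until the receiver acknowledges."""
    for seq, chunk in enumerate(chunks):
        while True:
            arrived = dumb_network(Datagram("host-a", "host-b", seq, chunk))
            if arrived is not None and deliver(arrived):  # True plays the role of an ack
                break  # acknowledged; move on to the next chunk

# Example: the receiver simply collects payloads and acknowledges each one.
received = []
send_reliably([b"hello", b"world"], lambda d: received.append(d.payload) or True)
```

A real transport protocol adds acknowledgment packets, timeouts, and the sliding-window flow control that Cerf credited to Pouzin’s group, but the division of labor is the same: smart hosts at the edges, a dumb network in the middle.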
It made sense to align the protocols for inter-networking with the lowest-common-denominator datagram-like networks rather than their more elaborate cousins, and indeed that is just what Kahn and Cerf’s TCP did. I could continue still further in this vein, by describing the British role in the early inter-networking conversations, but I don’t wish to belabor the point – that the two names most closely tied to the invention of the Internet are not the only ones that mattered.
TCP Conquers All
What happened, then, to this early promise of inter-continental collaboration? How is it that Cerf and Kahn are hailed everywhere as the fathers of the Internet, yet we hear very little about Pouzin and Zimmerman? To understand this requires, for starters, getting down into the procedural weeds of the INWG’s early years. In keeping with the spirit established by the ARPA network working group and its Requests for Comment (RFCs), the INWG created its own system of “General Notes.” Following this practice, after about a year of collaborative work, Kahn and Cerf presented the preliminary version of TCP to the INWG as note 39 in September 1973. This was effectively the same document that they published in IEEE Transactions the following spring. In April 1974 the Cyclades team, under the authorship of Hubert Zimmerman and Michel Elie, published a counterproposal, designated INWG 61. The differences came down to divergent views on certain engineering trade-offs, mainly around how packets are subdivided and re-assembled when crossing networks with small maximum packet sizes. This rift was minor, but the need to settle on a consensus had acquired a sudden urgency due to the plans announced by the Comité Consultatif International Téléphonique et Télégraphique (CCITT) to consider packet networking standards. CCITT, the standardization body of the International Telecommunications Union, operated on a four-year cycle of Plenary Assemblies. Proposals for consideration in the 1976 assembly were due in the fall of 1975, and no further changes would be possible between then and the next assembly in 1980. A scramble of meetings within INWG led up to a final vote in favor of a new protocol drafted by the representatives of the most important institutions in the world of computer networking – Cerf from ARPANET, Zimmerman from Cyclades, Roger Scantlebury from the British National Physical Laboratory, and Alex McKenzie of BBN. The new proposal, INWG 96, split the difference between 39 and 61, and seemed likely to establish the direction for network interconnection for the foreseeable future. But in truth, the compromise proved the last gasp of international collaboration in inter-networking, a fact foreshadowed by the ominous abstention of Bob Kahn from the INWG vote on whether to accept it. As it happened, the vote came too late to make the CCITT deadline, and Cerf further undermined its standing at CCITT with a cover letter indicating that it lacked the full consensus support of the INWG. Any proposal from INWG was likely dead-on-arrival anyway, because the telecom authorities that dominated CCITT had no interest in the datagram networks being cooked up by computer researchers. They wanted to control the flow of traffic within the network, not delegate that power to host computers that they didn’t control.
Instead they ignored inter-networking altogether, and agreed on a single-network virtual circuit protocol designated X.25.5 The Europeans, led especially by Zimmerman, made another try via a different standards body, one less dominated by the power of the telecom authorities, the International Organization for Standardization (ISO). The Open Systems Interconnection (OSI) standard that resulted had some technical advantages over TCP/IP. Notably, it lacked IP’s limited, hierarchical addressing system, whose constraints required several cheap hacks to allow for the explosive growth of the Internet in the 1990s6. But for a number of reasons, the process dragged out interminably without producing working software. For one thing, ISO’s processes, well-suited to blessing already established technical practices, were not appropriate for still-nascent technology. Once the TCP/IP Internet took off in the early 1990s, OSI became irrelevant. So much for the arena of standards-setting, but what about the on-the-ground practicality of network-building? The Europeans began earnestly working on an implementation of INWG 96 to link Cyclades and the National Physical Laboratory, as part of the European Informatics Network. But Kahn and the other leaders of the ARPA Internet project did not really care to derail the TCP train for the sake of international collaboration. Kahn had already disbursed funds for TCP implementations on ARPANET and PRNET, and he didn’t want to start over. Cerf made an attempt to rally support in the U.S. for the compromise he had forged at the INWG, but finally gave up on it. He also gave up on the stresses of life as an assistant professor, following in Kahn’s footsteps to become a program manager at ARPA and withdrawing from active participation in the INWG. Why was the desire of the Europeans to establish a unified front and an official, global standard so weakly requited? The primary reason lay in the relative position of the American and European telecom authorities. The Europeans had to face constant pressure against the datagram model from the post and telecom authorities (the PTTs), which operated as administrative departments within their national governments. Because of these pressures, they had a much stronger incentive to care about building a consensus within the official standards-making processes. The rapid demise of Cyclades, which fell out of political favor in 1975 and lost all funding in 1978, provides a case study in the power of the PTTs. Pouzin blamed Cyclades’ death on the administration of Valéry Giscard d’Estaing, who came to power in 1974 and set up a government peopled with École nationale d’administration (ENA) types, whom Pouzin disdained – if Polytechnique was something like the MIT of France, ENA was its Harvard Business School. Giscard d’Estaing’s administration focused French information technology policy around the idea of “national champions,” and a national champion computer network required the backing of the PTT. Cyclades could never acquire that support; instead Pouzin’s rival Després led the construction of a virtual-circuit X.25 network called Transpac. The situation in the United States was quite different. AT&T did not have the political leverage of its international peers, not being part of the American administrative state. On the contrary, it was in fact in the process of being heavily constrained and weakened by that state, barred from interference in computer networking and computer services, and soon to be dismantled entirely.
ARPA could proceed with its Internet program under the umbrella of protection from the powerful Department of Defense, without any adverse political pressure. It funded TCP implementations on a variety of computers, and used its leverage to force all of the hosts on ARPANET itself to convert to the new protocol in 1983. The most influential computing network in the world, many of whose nodes happened to be the most influential academic computing institutions in the world, thus became a TCP/IP shop. TCP/IP became the foundation stone of the Internet, and not just an internet, because of the relative political and financial freedom of ARPA compared to any other computer networking organization. OSI notwithstanding, ARPA became the dog, and the rest of the network research community the indignant tail. From the perspective of 1974, one can clearly see the many lines of influence that led into Cerf and Kahn’s TCP paper, and the many potential avenues of international development that might have followed from it. But from the perspective of 1995, all roads led backward to one seminal moment, one American organization, and two revered names.
Further Reading
Janet Abbate, Inventing the Internet (1999)
John Day, “The Clamor Outside as INWG Debated,” IEEE Annals of the History of Computing (2016)
Andrew L. Russell, Open Standards and the Digital Age (2014)
Andrew L. Russell and Valérie Schafer, “In the Shadow of ARPANET and Internet: Louis Pouzin and the Cyclades Network in the 1970s,” Technology and Culture (2014)

The Unraveling, Part 2

After authorizing private microwave networks in the Above 890 decision, the FCC might have hoped that they could leave those networks penned in their quiet little corner of the market and forget about them. But this quickly proved impossible. New challengers continued to press against the existing regulatory framework. They proposed a variety of new ways to use or sell telecommunications services, and claimed that the telecommunications incumbents were obstructing their path. The FCC responded by steadily slicing away portions of AT&T’s monopoly, allowing competitors into various parts of the telecommunications market. AT&T responded with actions and rhetoric designed to counter, or at least mitigate the effects of, the new competition: publicly propounding their opposition to further FCC action, and setting new rates that sliced profits to the bone. From within the company, these seemed like natural responses to new competitive threats, but from the outside they only served as evidence that stronger measures would be needed to curb an insidious monopoly. When regulators pushed for telecom competition, they did not mean to encourage a struggle for dominance between contending parties, may the best company win. Their intent was to create and support lasting alternatives to AT&T. AT&T’s efforts to escape the net closing around it thus only served to ensnare it more deeply.1 The new threats came at both the edge and the center of AT&T’s network, tearing away AT&T’s control over the terminal equipment attached to its lines by customers, and over the long-distance lines that interlinked the whole United States into a single telephone system. Each of these threats started with lawsuits by two small, seemingly insignificant upstarts: Carter Electronics and Microwave Communications, Incorporated (MCI), respectively. But the FCC not only favored the upstarts, but chose to interpret their cases expansively, as representing the needs of a whole new class of competitor which AT&T would have to accept and respect. Yet, in terms of the legal framework of regulation, nothing had changed since the Hush-a-Phone case of the 1950s. At that time the FCC had firmly rejected the claims of a far more innocuous challenger than Carter or MCI. The same 1934 Communications Act that had created the FCC in the first place still governed its actions in the 1960s and 70s. The shift in FCC policy did not come from new congressional action, but from a change in political philosophy within the commission. That change, in turn, was prompted to a large degree by the rise of the electronic computer. The emerging hybridization between computers and communication networks helped to set the conditions of its own further development.
An Information Society
For decades, the FCC had seen its main responsibility as maximizing access and fairness within a relatively stable and uniform telecommunications system. From the mid-1960s, however, the FCC staff developed a different idea of their mission, and increasingly focused on maximizing innovation within a dynamic and diverse market. In large part this change can be attributed to the emergence of the new, though relatively tiny, market in data services. The data service industry originally had nothing to do with the telecommunications business at all. Its origins lay in service bureaus – companies that did data-processing on behalf of clients, then shipped them the results, a concept that predated the modern computer by decades.
IBM, for example, had since the 1930s offered on-demand processing for clients who couldn’t afford to lease their own mechanical tabulating equipment. In 1957, as part of an anti-trust deal with the U.S. Justice Department, they spun this business off into a separate subsidiary, the Service Bureau Corporation, by that time running on modern electronic computers. Likewise, Automatic Data Processing (ADP) began as a manual payroll processing business in the late 1940s before computerizing in the late 1950s. In the 1960s, however, the first on-line data services began to appear, which allowed users to interact with a remote computer by terminal over a private, leased telephone line. Most famous of these was SABRE, a derivative of SAGE, designed to handle reservations for American Airlines using IBM computers. Just as with the first time-sharing systems, however, once you had multiple users talking to the same computer, it was a small step to letting those users talk to each other. It was this new way of using computers, as a mailbox, that brought them to the attention of the FCC. In 1964, Bunker-Ramo2, a company best known as a defense contractor, decided to diversify into data services by acquiring Teleregister. Among Teleregister’s lines of business was a service called Telequote, which had provided stock information to brokers over telephone lines since 1928. Teleregister, however, was not itself a regulated common carrier. It relied on private lines leased from Western Union for communications between its users and its data center.
Bunker-Ramo Telequote III terminal. It could display information about requested stocks, as well as market summary data.
Telequote’s state-of-the-art system in the 1960s, Telequote III, allowed users to use a terminal with a tiny CRT screen to punch up the price of a stock stored on Telequote’s remote computer. In 1965, Bunker-Ramo proposed the next iteration, Telequote IV, with the additional feature of allowing brokers in different offices to submit buy or sell orders to one another via their terminals. Western Union, however, refused to allow their lines to be used for this purpose. They claimed that using the computer to transmit messages between users would turn a purported private line into a de facto common carrier message-switching service (not unlike Western Union’s own telegraph service), requiring the operator (Bunker-Ramo) to be regulated by the FCC. The FCC decided to turn this dispute into an opportunity to answer a broader question – how should the growing contingent of on-line data services be treated, vis-a-vis telecommunications regulation? The resulting investigation is now known simply as the Computer inquiry.
The ultimate conclusions of that inquiry are less important for us at this point than their effects on the mentality of the FCC staff. Long-established boundaries and definitions seemed liable to be redrawn or abandoned, and this shake-up conditioned the FCC’s mindset for the challenges to come. Every so often, over the previous decades, a new communications technology had emerged. Each developed independently and acquired its own distinct character and its own regulatory rules: telegraphy, telephony, radio, television. But with the emergence of the computers these distinct lines of development began to converge on the imagined horizon into an intertwined information society. Not just the FCC, but the intelligentsia in general anticipated major changes afoot. Sociologist Daniel Bell wrote of the coming “post-industrial society”, management expert Peter Drucker spoke of “knowledge workers” and the “age of discontinuity.” Books, papers, and conferences abounded in the second half of the 1960s on the topic of a coming world based in information or knowledge rather than material production. The authors of these works referred often to emergence of high-speed, general-purpose computers, and the new ways that they would allow data to be transmitted and processed within communications networks in the coming decades. Some of the newer FCC commissioners, appointed by Presidents Kennedy and Johnson, were themselves active in these intellectual circles. Kenneth Cox and Nicholas Johnson both participated in a Brookings Institute symposium on “Computers, Communications, and the Public Interest,” whose chair imagined “a national or regional communication network that connects video and computer facilities at universities to homes and classrooms in local communities …The citizenry could be students ‘from cradle to coffin…” Johnson later wrote a book on the prospect of using computers to transform broadcast TV into an interactive medium, entitled How to Talk Back to Your Television Set. Beyond these general intellectual currents that were pushing communications regulation in new directions, one man in particular had a particular interest in steering regulation onto a new course, and played a major role in shifting the FCC’s attitudes. Bernard Strassburg belonged to the layer of the FCC bureaucracy just below the seven politically-appointed commissioners. The career civil servants who populated most of the FCC were divided into bureaus based on the technological area that they regulated. The commissioners relied on the legal and technical expertise of the bureaus to guide them in the rulings process. The domain of the Common Carrier Bureau, to which Strassburg belonged, lay in the wireline telephone and telegraph industry, consisting primarily of AT&T and Western Union. Strassburg joined the Common Carrier Bureau during World War II, rose to become its Chairman by 1963, and played a major role in pushing the FCC to chip away at AT&T’s dominance over the following decade. His distrust of AT&T originated with the anti-trust suit that the Justice Department launched against the company in 1949. At issue, as we’ve mentioned before, was the question of whether Western Electric, AT&T’s manufacturing arm, inflated its prices in order to allow AT&T, in turn, to artificially inflate its profits. Strassburg became convinced during the investigation that it was simply impossible to answer the question, given AT&T’s near-total monopsony in telephone equipment. 
There was no telephone equipment market to compare against in order to determine what constituted reasonable prices. AT&T was simply too large and powerful to effectively regulate, he concluded3. Much of his advice to the Commission in the coming years could be traced to this belief that competition needed to be forced into AT&T’s world, to weaken it sufficiently to make it regulable.
Challenge at the Center: MCI
The first serious challenge to AT&T’s long-distance network, since its inception at the turn of the twentieth century, began with an unlikely man. John Goeken was a salesman and small businessman with at least as much enthusiasm as good sense. Like many boys of his time, he had developed an interest in radio equipment as a youth. He joined the Army out of high school as a microwave radio technician, and, after completing his active service, he went into radio sales for General Electric (GE) in Illinois. His day job didn’t fill his need for entrepreneurship, however, so he also developed a side business with a group of friends, selling more GE radios in other parts of Illinois outside of his assigned territory.
Goeken in the mid-90s, when he was working on an in-flight telephone
When GE got wind of the operation and shut it down in 1963, Goeken began to look for other ways to supplement his income. He decided to build a microwave line from Chicago to St. Louis, selling radio access to the line to truckers, bargemen, flower delivery vans, and other small businesses along the route with a need for inexpensive, mobile communications. He believed that AT&T’s private-line service was “gold-plated” – over-staffed and over-engineered – and that by being leaner and more cost-conscious he could provide lower prices and better service to the smaller users neglected by Ma Bell. Goeken’s concept did not conform to then-current FCC rules – the Above 890 ruling had authorized private companies to build microwave systems for their own use. Under pressure from smaller businesses without the wherewithal to build and maintain a whole system, a 1966 ruling had allowed multiple entities to share a single private microwave system. But this still did not authorize them to become common carriers themselves, retailing service to third parties. Moreover, the reason that AT&T’s prices appeared excessive was not gilded wastefulness, but regulated, cost-averaged rates. AT&T charged for private line service according to the distance and number of lines leased, whether those lines lay along the high-density Chicago-St. Louis route or a low-density route with little traffic across the Great Plains. Regulators and telephone companies had intentionally devised this structure to level the playing field between areas with differing population densities. MCI was thus proposing to engage in a form of arbitrage – taking advantage of the differential between the market and the regulated price on a high-traffic route to extract guaranteed profits. AT&T called this cream-skimming, a term that served as their primary rhetorical touchstone in the debates to come. It’s not clear whether Goeken did not initially know these facts, or chose blithely to ignore them. In any case, he went after his new idea with gusto, on a shoestring budget funded mainly by credit cards. He and his partners, all of similarly modest means, nonetheless dared to form a company to take on the over-mighty AT&T, which they called Microwave Communications, Inc.
Goeken flew around the country looking for investors with deeper pockets, with little success. He had better luck, however, arguing MCI’s case before the FCC. The first hearings on the case began in 1967. Strassburg was intrigued. He saw in MCI an opportunity to achieve his goal of weakening AT&T, by further prying open the market for private lines. But he wavered at first about whether to follow through. Goeken did not impress him as a serious and effective businessman. MCI, he worried, might not be the best test case. He was nudged off the fence by an economist from the University of New Hampshire named Manley Irwin. Irwin had a steady consulting gig with the Common Carrier Bureau, and had helped to formulate the terms of the Computer inquiry. He convinced Strassburg that the nascent on-line data service market revealed by that inquiry needed companies like MCI that would provide new offerings; that AT&T alone would never be able to fulfill all the potential of the coming information society. Strassburg later reflected that “the ‘fallout’ from the Computer Inquiry… substantiated MCI’s claim that its entry into the specialized intercity market would be in the public interest.”4 With the blessings of the Common Carrier Bureau in-hand, MCI breezed through the initial hearing, then squeaked by with approval before the full commission in 1968, which split 4-3 along party lines. All the Democrats (Cox and Johnson included) voted in favor of approving MCI’s license. The Republicans, led by the chair, Rosel Hyde, dissented. The Republicans did not want to disrupt a well-balanced regulatory system with a scheme concocted by fly-by-night operators of questionable technical and business savvy. They pointed out that the decision, though limited on its face to a single company and a single route, carried profound implications that would transform the telecommunications market. Strassburg and the pro-approval commissioners treated the MCI case as an experiment, to see if a business could successfully operate alongside AT&T in the private line services market. But in fact it was a precedent, and, once approved, dozens of other companies would immediately come out of the woodwork to file their own applications. Reversing the experiment, the Republicans saw, would effectively be impossible. Morever, MCI and similar specialized entrants could scarcely survive with just a scattering of disconnected routes like the one from Chicago to St. Louis. They would demand interconnection with AT&T, and force the FCC to continue making changes to the regulatory structure. The land rush predicted by Hyde and the other Republicans did indeed ensue, with thirty-one companies filing 1713 separate applications for a total of 40,000 miles of microwave network within two years of the MCI decision.5 The FCC lacked the capacity to carry out individual hearings on all of these applications, and so it gathered them all together as a single docket on Specialized Common Carrier Services. In May 1971, with Hyde out, they unanimously decided to open the market fully to competition. Meanwhile, MCI, still starved for money, found a new wealthy investor to set its finances in order, William C. McGowan. McGowan was virtually Goeken’s opposite, a sophisticated and established businessman with a degree from Harvard, who had built successful consulting and venture capital businesses in New York City.  Within a few years, McGowan took effective control of MCI and pushed Goeken out. 
He had a very different vision for the company from that of his predecessor. He had no intention of messing around with bargemen and florists, nibbling around the periphery of the telecommunications market wherever AT&T deigned not to notice him. Instead he would go right for the heart of the regulated network, competing directly in all forms of long-distance communications.
Bill McGowan in later years
The stakes and implications of the original experiment with MCI thus continued to ratchet upward. Having committed itself to seeing MCI succeed, the FCC now found itself taken for a ride, as McGowan’s demands continued to broaden. Arguing (again, as predicted) that MCI could not survive as a small collection of disconnected routes, he demanded a wide variety of interconnection rights into the AT&T network; for example, the right to connect to what was called a “foreign exchange,” allowing MCI’s network to connect directly into AT&T’s local telephone exchanges at the terminus of MCI private lines. AT&T’s responses to the new specialized common carriers did not help its cause. It answered the intrusion of competitors by introducing much lower rates on private lines along high-traffic routes, abandoning regulated, rate-averaged prices. If it thought this would appease the FCC by showing competitive spirit, it misconstrued the FCC’s purpose. Strassburg and his allies were not trying to help consumers by reducing communications prices, at least not directly. Instead they were trying to help new producers enter the market, thereby weakening AT&T’s power. Thus AT&T’s new competitive rates were seen by the FCC and other observers, especially at the Justice Department, as vindictive and anti-competitive, because they threatened the financial stability of new entrants like MCI. AT&T’s combative new president, John deButts, also did himself no favors with his aggressive rhetorical responses to competitive incursions. In a 1973 speech before the National Association of Regulatory Utility Commissioners, he belittled the FCC with his call for “a moratorium on further experiments in economics.” This kind of intransigence infuriated Strassburg, and further convinced him of the necessity of taking AT&T down a peg. The FCC duly ordered the interconnections requested by MCI in 1974. McGowan’s escalation climaxed with Execunet, launched the following year. Advertised as a new kind of metered service for sharing private lines among small businesses, Execunet, as gradually became apparent to both the FCC and AT&T, was in fact a competing long-distance phone network. It allowed a customer in one city to pick up a phone, dial a number, and reach arbitrary customers in another city (taking advantage of MCI’s foreign exchange connections) for a charge based on the distance and duration of the call. No dedicated point-to-point line came into the picture at all. Execunet connected MCI customers directly to any AT&T customer in any major city. At this point the FCC finally balked. It had intended to use MCI as a cudgel to beat back the complete dominance of AT&T, but this was a blow too far. By this time, however, MCI had other allies in the courts and the Justice Department, and continued to advance its case. The unraveling of the AT&T monopoly, once begun, was not easily stopped.
Challenge at the Periphery: Carterfone
While the MCI case was playing out, another threat approached. The similarities between the Carterfone and MCI stories are striking.
In both cases, an upstart entrepreneur – possessed of more gumption and grit than good business sense – brought a successful challenge against the largest corporation in the United States. Both men, however – Jack Goeken and our new protagonist, Tom Carter – were shortly thereafter eased out of their companies by sharper operators and then faded into obscurity. Both men began as protagonists, but ended as pawns. Tom Carter was born in 1924 in Mabank, Texas, southeast of Dallas. Another young radio enthusiast, he joined the Army at 19, becoming, like Goeken, a radio technician. He spent the latter years of World War II manning a broadcasting station in Juneau, providing news and entertainment to troops at far-flung outposts across Alaska. After the war he returned to Texas and formed Carter Electronics Corporation in Dallas, operating a two-way radio station that he leased out to other businesses – florists with delivery vans; oil companies with operators out at drilling rigs. Over and over, Carter heard requests from clients for a way to patch their mobile radios directly into the phone network, rather than having to relay messages to people in town through the base station operator. Carter devised an instrument to satisfy this need, which he called Carterfone. It consisted of a black plastic lozenge with a molded top designed to cradle a telephone handset, containing a microphone and a speaker, both wired to the radio transmitting/receiving station. To connect someone in the field with someone on the telephone, the base station operator still had to place a call manually, but then they could rest the handset in the cradle, and the two parties could converse uninterrupted. A voice-activated switch tripped the radio’s send/receive mode, sending when the person on the telephone was speaking and receiving otherwise. He began selling the device in 1959, with a manufacturing operation that consisted of a small brick building in Dallas where senior citizens assembled Carterfones on plain wooden tables.
A 1959 Carterfone. The phone handset would rest in the cradle and activate the device via the small switch at top.
Carter’s invention was not entirely novel. Bell had its own mobile radio telephone service, which it first offered in St. Louis, Missouri in 1946. Twenty years later it served 30,000 customers. But there was plenty of room for a competitor like Carter – AT&T only offered the service in about a third of the United States, and the waiting list could be years long. Moreover, Carter offered a significantly cheaper price, if (large caveat) one already had access to a radio tower: a one-time $248 purchase for the equipment, versus a $50-60 lease for a Bell mobile phone. Carterfone was, from AT&T’s point-of-view, a “foreign attachment”, a piece of third-party equipment attached to its network – a practice that it forbade. In the earlier Hush-a-Phone case, the courts had forced AT&T to allow simple mechanical attachments to a telephone, but Carterfone did not fall in that category, being acoustically-coupled to the network – that is, it transmitted and received sound over the telephone line. Due to the small scale of Carter’s operations, it was two years before AT&T took notice, and started to warn retailers carrying Carterfone that their customers risked having their telephone service shut off – the same angle used to attack Hush-a-Phone over a decade earlier. With these kinds of tactics, AT&T chased Carter out of one market after another.
Unable to reach any kind of deal with his antagonists, Carter decided to sue in 1965. None of the big Dallas firms would take the case, so Carter ended up at  the small office of Walter Steele, with only three lawyers to its name. One of them, Ray Besing, later painted this character portrait of the man who arrived in his office: He fancied himself a handsome man, with his side-combed white hair, which was all the whiter thanks to Grecian Formula, but his double-knit suit and cowboy boots presented a different kind of image. He was a self taught man, handy with any kind of electronic, radio, or telephone equipment. Not much of a business man. A strict family man with an equally strict wife. Yet he sought to appear a cool, successful businessman even though he was basically broke. The case came before the FCC’s preliminary examiner in 1967. AT&T and its allies (primarily the other, smaller telephone companies and the state telephone regulatory agencies) argued that Carterfone was not a simple attachment at all, but a piece of interconnection equipment, that unlawfully coupled AT&T’s network into local mobile radio networks. This violated the telephone company’s end-to-end responsibility for communications within its system. But as with MCI, the Common Carrier Bureau issued a statement decisively in favor of Carter. Once again the belief in a coming world of digital information services, simultaneously integrated and diversified, loomed in the background. How could a single monopoly supplier foresee and satisfy all the market needs for terminals and other equipment for all these coming applications? The final decision of the commission, on June 26, 1968, concurred with the CCB and found that AT&T’s foreign attachments rule was not only unlawful, but had been unlawful from its inception – therefore Carter stood eligible for back damages. AT&T, the FCC ruled, had failed to properly distinguish potentially harmful attachments (ones that might send errant control signals into the network, for example) from essentially harmless ones such as Carterfone. AT&T would have to allow Carterfones immediately, and devise technical standards for the safe interconnection of third-party devices. Shortly after the decision, Carter tried to exploit his success by going into business with two partners, including one of his lawyers, forming Carterfone Corporation. After pushing Carter out of the company, his partners made millions by selling to the British telecom giant Cable and Wireless. The Carterfone itself disappeared; the company continued on selling teletypewriters and computer terminals. Carter’s story has a curious epilogue. In 1974, he actually went into business with Jack Goeken, founding the Florist Transworld Delivery system to send flowers on demand. It was just the kind of market – using telecommunications to support small businesses – that both men had wanted to serve in the first place. Carter soon quit that company, too, however, and moved back to his roots southeast of Dallas, where, in the mid-80s, he operated a small radio telephone company called Carter Mobilefone. He remained there until his death in 1991.6 Unraveled Like Carter and Goeken, the FCC had set into motion forces it could not control or even fully understand. By the mid 1970s, Congress, the Justice Department and the courts took the debate over AT&T’s future out of the FCC’s hands. The climax of AT&T’s great unraveling, of course, came with final break-up of AT&T, carried out in 1984. 
But we have already gotten well ahead of the rest of our story. The world of computer networking did not feel the full implications of MCI’s victory, and the intrusion of competition into the long-distance market, until the 1990s, when private data networks began to proliferate. The decisions on terminal equipment had a more immediate effect. Acoustically coupled computer modems could now be manufactured by anyone and connected to the Bell system, under the sheltering hand of the Carterfone ruling, making them less expensive and easier to find. But the most important implication of AT&T’s unraveling lay in the big picture, rather than the particulars of individual rulings. Many of the early visionaries of the information age imagined a single, unified American computer-communications network, under the aegis of AT&T, or perhaps even the federal government itself. Instead computer networks developed piecemeal, in fragments, which were only gradually connected, or “inter-networked.” No single overarching corporation controlled the various sub-networks as had been the case with Bell and its local operating companies; they came to one another not as master and subordinate, but as peers. But that, too, is getting ahead of ourselves. To continue our story we must turn back to the mid-1960s, to see where computer networks came from in the first place.
Further Reading
Ray G. Besing, Who Broke Up AT&T? (2000)
Philip L. Cantelon, The History of MCI: The Early Years (1993)
Peter Temin with Louis Galambos, The Fall of the Bell System: A Study in Prices and Politics (1987)
Richard H. K. Vietor, Contrived Competition: Regulation and Deregulation in America (1994)

An Expeditious Method of Conveying Intelligence

To begin our story of the switch, we must seek the origins of the electric telegraph. From this device arose the telecommunications industry, which, in turn, was the wellspring of digital computing as we know it.  It came about only after many efforts over nearly a century to convey intelligence (intelligence meaning roughly what we mean by information) by electricity. One important caveat to keep in mind as we go along, is that the men described here used categories and concepts to think about electricity that are quite different from our own. Our physics textbooks have packaged up the messy past into a tidy collection of concepts and equations, eliding centuries of development and conflict between competing schools of thought. Ohm never wrote the formula V = IR, nor did Maxwell create Maxwell’s equations. Though I will not attempt to explore all the twists and turns of the intellectual history of electricity, I will do my best to present ideas as they existed at the time, not as we retrospectively fit them into our modern categories. The Electrical Fluid The phenomenon of electrical attraction was known since ancient times. In the 6th century B.C., Thales of Miletus recorded his observations on the effect of rubbing a piece of amber (elektron, in Greek) with cat fur, noting that feathers and other light objects were suddenly attracted to the amber. Little was made of this curiosity, however, for many centuries.  With the rise of the experimental natural philosophy in the 17th and 18th centuries, though, savants began paying much more attention to such oddities of nature. According to the Aristotelian worldview that dominated European philosophy through the Renaissance, only readily observable regularities provided insight into the truth of the natural world. Artificially produced phenomena could, by definition, have little to say about nature. The new experimentalists overturned this belief. Quite to the contrary, as Francis Bacon wrote in his Great Instauration of 1620:  I mean [the natural history I propose] to be a history not only of nature free and at large (when she is left to her own course and does her work her own way)… but much more of nature under constraint and vexed; that is to say, when by art and the hand of man she is forced out of her natural state, and squeezed and moulded. …the nature of things betrays itself more readily under the vexations of art than in its natural freedom. The English philosopher William Glibert was the first to coin a term recognizing that the ‘vexation’ of amber  was only part of a more general phenomenon. In his 1600 treatise On the Magnet, he called this phenomenon electricitus, “behaving like amber”. He expounded on many other substances that had the power of attraction when rubbed, including gemstones, glass, and sulfur.  Still hewing to the ancient model of matter as a composite of the four elements of fire, air, water, and earth, Gilbert believed it was the watery part, or “aqueous humor”, of these substances that gave them their electric power. 1 He did not imagine, though, that that power could ever be used as a means of communication. The attractive force worked only at extremely short distances. Gilbert gives an electrical demonstration to Queen Elizabeth’s court (detail from a 19th century painting by Alfred Acklund Hunt) By the start of the 18th century, others had figured out new ways to generate electricity. 
They discovered that by placing their hand on a spinning globe they could build up a powerful electrical force, and even transmit it through a piece of thread. Some years later, Stephen Gray found that he could extend this transmission up to several hundred yards. 2 Other similar electrical machines followed. The ‘sulfur globe’ generator of Otto von Guericke (ca. 1660) By this time, savants had begun to think of electricity as a fluid that was built up and then discharged, flowing from one place to another. Unlike Gilbert, they did not believe this fluid was ordinary water, but rather some immaterial substance. Some imagined several different subtle fluids were responsible for light, magnetism, electricity, even life. Others believed there was a single aetheric fluid behind all these phenomena, which manifested itself in different ways. 3 The greatest vessel yet known for this fluid was found in 1746, with the invention of the so-called Leyden Jar (named for the town where it first found fame). This apparatus, in its fully developed form, consisted of a glass jar coated inside and out with metal foil, with a metal terminal protruding from the top that connected to the inner foil. 4 The Leyden jar With an electrical machine connected to the terminal, it could store tremendous amounts of the electrical fluid, as if one could simply pour it into the jar. That fluid was then discharged in a tremendous shock when the terminal was linked to the outer foil. A whole scientific sub-culture of “electricians” had by this time emerged. With the electrical generator and Leyden jar in hand, electricity was easy to experiment with, amenable to mathematical subtlety but did not require it, and (not least) made for spectacular and exciting demonstrations. Ben Franklin, most renowned of the electricians, even proposed in a letter that several such devices wired together might be used to kill and cook a turkey for dinner. He called this multi-jar configuration a ‘battery’ (by analogy to a battery of guns): 5 …a turkey is to be killed for our dinner by the electrical shock, and roasted by the electrical jack, before a fire kindled by the electrified bottle, when the healths of all the famous electricians in England, Holland, France and Germany are to be drank in electrified bumpers, under the discharge of guns from an electrical battery. With the power of the jar at hand, it became obvious that the electrical fluid could be transmitted over longer distances, and seemingly instantaneously. Experimenters proved as much by sending shocks through a variety of media, including rivers and lakes. Especially famous were the Abbé Nollet’s demonstrations in France. He sent a shock through 180 soldiers of the royal guard; then a mile-long chain of Carthusian monks, each connected to the next by an iron wire in his hand.6 Experimentation by this time had shown that metal wires such as these provided the best medium of transmission — that they were “conductors” of electricity. The Projectors From these new tools – friction-based generators, Leyden jars, and conductive metal wire – arose the first attempts to communicate by electricity. In 1753, one “C.M.”, whose identity has never been conclusively determined, put forth in Scots Magazine his plan for “An Expeditious Method of Conveying Intelligence.” He described a system with one wire per letter, each wire ending in a ball of pith (a spongy plant material).
When a charge was sent through the wire, the electrified pith would lift a corresponding piece of paper, indicating the letter. Nothing more is known of C.M. or whether his device was ever built.7 A variety of others, however, followed his lead over the next century. In 1774, Swiss philosopher George-Louis LeSage proposed a 24-wire system, similar to the one described by the mysterious C.M., with the 24 corresponding letters arranged like the keys of a harpsichord. He contemplated presenting his design to Frederick the Great, “to judge for himself of its utility”, but if he ever did so the Prussian monarch was evidently unimpressed, since we hear nothing more of it. What LeSage’s telegraph might have looked like, from a 19th century engraving (note the electrical machine on the right) Twenty years later, a Spaniard, Don Francisco Salvá, proposed an approach based on the human body itself – a person would hold the far end of the wire and thus receive the message quite directly when a charged Leyden jar was applied to the near end. He did not say how he would find volunteers for the task of holding a wire all day in the hopes of receiving a shock. He later built a more humane system, based on generating sparks between foils of tin, which he demonstrated to the Spanish court. 8 Similar examples from the decades around 1800 could be multiplied, to the point of tedium. These telegraphic experimenters came from the periphery of electrical science. The Franklins, Voltas, Faradays, and others who probed deep into the nature of electricity did not busy themselves with schemes to convey intelligence. It was an age of “projectors”, men with grandiose plans, from establishing a Scottish colony on the isthmus of Panama, to realizing the ancient dream of alchemical transmutation. They were skewered by Jonathan Swift, who filled the Academy of Lagado in Gulliver’s Travels with men vainly striving to extract sunbeams from cucumbers, and other such nonsense. Blindfolding our hindsight, we could easily dismiss men like LeSage and Salvá as projectors of this sort.9 They faced a number of obstacles to a practical and efficient system:
- A reliable source. Electric machines and Leyden jars were finicky and potentially dangerous devices that could not provide a smooth flow of electrical fluid (what we would call a ‘steady current’). Moreover, in modern terms they produced very high voltage, which meant that they were very susceptible to losses on poorly insulated wire.10
- An effective means for detecting a signal and translating it into language. This was a dual problem of coming up with a sufficiently sensitive detector, and a way of encoding language in that detector. Most of the electrical projectors tried in some way to directly represent letters on the far end of the wire, whether with one wire per letter or contrivances such as synchronized wheels or multiple needles to indicate the desired letter.
- A conceptual framework to guide experimentation in fruitful directions. Ohm would not lay out his famous law until 1827, and it did not become known outside Germany until the 1840s. Until then it was hard to fathom why certain combinations of wire, electrical source, and detector worked wonderfully, while others failed utterly.11 (A rough illustration of this last obstacle follows below.)
In future installments, we shall see how these obstacles were overcome, over several decades, and mostly by-the-by — as a byproduct of efforts to solve entirely different problems.
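To make that last obstacle concrete, here is a deliberately anachronistic sketch that uses Ohm’s later formulation (V = IR), which none of these projectors had. All of the figures are invented for illustration, not historical measurements; the point is only that a source and detector that behaved vigorously across a short bench wire could deliver a uselessly feeble current over miles of thin, poorly insulated line.

```python
# A deliberately anachronistic sketch of the third obstacle, using Ohm's later law (V = I * R).
# Every number below is an invented, illustrative value, not a historical measurement.

def current_at_detector(source_volts: float, line_ohms: float, detector_ohms: float) -> float:
    """Current reaching a detector at the far end of a line, per Ohm's law."""
    return source_volts / (line_ohms + detector_ohms)

# The same source and detector, first across a short bench wire,
# then across miles of thin, leaky line.
bench_amps = current_at_detector(source_volts=10.0, line_ohms=5.0, detector_ohms=100.0)
field_amps = current_at_detector(source_volts=10.0, line_ohms=2000.0, detector_ohms=100.0)

print(f"bench wire: {bench_amps * 1000:.1f} mA")  # ~95 mA, a vigorous, easily detected effect
print(f"long line:  {field_amps * 1000:.1f} mA")  # ~4.8 mA, a far feebler effect at the far end
```

Without that framework, a failure on the long line looked like caprice rather than arithmetic.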
But first we must look at the incumbent against which all who tried to “convey intelligence” by electricity would be compared – the telegraph. For before the telegraph with which we are familiar, there was this: The telegraph, or ‘far writer’ [previous part] [next part] Further Reading John Joseph Fahie, A History of Electric Telegraphy, to the Year 1837 (1884) Thomas J. Hankins, Science and the Enlightenment (1985) J. L. Heilbron, Electricity in the 17th and 18th Centuries (1979) E.A. Marland, Early Electrical Communication (1964)

The Transistor, Part 2: Out Of The Crucible

The crucible of war prepared the ground for the transistor. The state of technical knowledge about semiconductor devices advanced enormously from 1939 to 1945. There was one simple reason: radar. The single most important technology of the war, radar found applications in detecting incoming air raids, locating submarines, guiding nightfighters to their targets, and aiming anti-aircraft guns and naval cannon. Engineers even managed to pack tiny radar sets inside artillery shells, to make them detonate when they passed near a target – the proximity fuse. The origins of this potent new military technology, however, lay in a much more peaceful domain: the scientific exploration of the upper atmosphere. Radar In 1901, the Marconi Wireless Telegraph Company succeeded in sending a wireless message across the Atlantic, from Cornwall to Newfoundland. The feat confounded contemporary science. If radio transmissions traveled in straight lines (as they certainly ought to), it should have been impossible. There is no straight line between England and Canada that does not cross through the Earth, so Marconi’s message should have continued right on into outer space. American engineer Arthur Kennelly and British physicist Oliver Heaviside both independently hypothesized that the explanation must lie in a layer of ionized gas in the upper atmosphere, capable of reflecting radio waves back down towards Earth.1 By the 1920s, scientists had developed new equipment first to prove the existence of this ionosphere, then to probe its structure. They used vacuum tubes to generate shortwave radio pulses, shaped antennas to direct them up into the atmosphere and register their echoes, and cathode ray tubes to display the results. The longer the delay until the echo returned, the farther away the ionosphere must be. The technique was known as ionospheric sounding, and it provided the basic technical infrastructure required for radar2. It was only a matter of time before those with the right knowledge, resources, and motivation realized the potential terrestrial applications of such equipment3. This became increasingly likely as radio grew more and more commonplace, and more and more people noticed signal interference from nearby ships, aircraft, and other large objects. Knowledge of the techniques for probing the upper atmosphere spread during the second International Polar Year (1932-1933), which included a project to map the ionosphere from various stations in the Arctic. Shortly thereafter teams in Britain, the U.S., Germany, Italy, the Soviet Union, and elsewhere all developed basic radar systems.4 Robert Watson-Watt with his 1935 radar apparatus Then came war, and with it the importance of radar to the state – and thus the resources available to develop it – increased sharply. In the United States, these resources coalesced around a new organization founded in 1940 at the Massachusetts Institute of Technology (MIT) known as the “Rad Lab”5. Though less famous in later years than the Manhattan Project, the Rad Lab recruited equally exceptional physics talents from across the United States. Five of the early recruits (who included Luis Alvarez and I. I. Rabi) would go on to win Nobel prizes.6 By the end of the war the lab had about 500 Ph.D. scientists and engineers, out of a total of almost 4,000 employees. Half a million dollars – as much as the entire budget for ENIAC – was spent on the “Radiation Laboratory Series” alone.
It consisted of twenty-seven volumes that captured and disseminated the knowledge gained at the lab during the war.7 MIT’s Building 20, home to the Rad Lab One major area of research for the Rad Lab was high-frequency radar. Early radars all transmitted in wavelengths measured in meters. But higher-frequency beams, with wavelengths measured in centimeters – microwaves – would have more compact antennas, and disperse less over long distances, offering huge advantages in range and precision. Microwave radars would be able to fit in the nose of a plane, and detect objects as small as a submarine periscope. First to crack the problem was a team of British physicists at the University of Birmingham. In 1940, they developed a device they called the cavity magnetron, which acted as a kind of electromagnetic “whistle,” turning an incoherent blast of electricity into a powerful and precisely tuned beam of microwaves8. This was a microwave transmitter a thousand times more powerful than its next closest competitor; it unlocked the door to practical high-frequency radar transmitters. But such a transmitter would need a companion, a receiver capable of registering such high frequencies. Here we rejoin the story of semiconductors. Cutaway view of a cavity magnetron The Second Coming of the Cat’s Whisker For it turned out that vacuum tubes were not at all well suited for receiving microwave radar signals. The gap between hot cathode and cold anode creates capacitance, which causes the circuit to fail at high frequencies. The best available technology for high-frequency radar was the old-fashioned cat’s whisker – a little twist of wire pressed into a semiconductor crystal. This fact was discovered independently in several places, but most relevant to our story were the events that took place in New Jersey. In 1938, Bell Labs received a contract from the Navy to develop a fire-control radar in the forty-centimeter range – a much shorter wavelength, and thus higher frequency, than the existing state of the art in this pre-cavity magnetron era. The main research work for the project was assigned to the Bell Labs branch in Holmdel, due south of Staten Island. It did not take the researchers there long to realize what they needed for their high-frequency receiver, and soon engineer George Southworth was rummaging through radio shops in Manhattan to find an old cat’s whisker detector. As expected, it worked much better than a tube-based receiver, but the results were inconsistent. So Southworth sought out an electrochemist named Russell Ohl, and asked him to try to improve the uniformity of response from point-contact crystal detectors. Ohl was an unusual character who believed himself a figure of technical destiny, receiving periodic flashes or visions of things to come. For example, he claimed to have known since 1939 that a silicon amplifier would be invented, but that he would not be fated to invent it. After studying dozens of options, he settled on silicon as the best material for Southworth’s receivers. The problem was to control its impurity content, and thereby its electrical properties. Industrial silicon ingots were commonplace, being used in steel mills to strengthen the metal, but the mills hardly cared if the silicon contained, say, 1% phosphorus. So, with the help of a pair of metallurgists, Ohl set out to render much purer ingots than anyone had ever before attempted.
As they worked, his team discovered that some of their crystal samples rectified in one direction, and some in the other: they dubbed these “n-type” and “p-type.” Further analysis suggested that different impurities were responsible for each. Silicon sits in the fourth column of the periodic table, meaning it has four electrons in its outermost shell. In an ingot of perfectly pure silicon, each of those four electrons would bond with a neighbor. Impurities from the third column, such as boron, which had one fewer electron to share, created a “hole,” an extra space where current could move within the crystalline structure. This resulted in a p-type semiconductor (with an excess of positive carriers). Elements from the fifth column, such as phosphorus, supplied extra free electrons for carrying current, creating an n-type semiconductor. The crystal structure of silicon All of this research was quite fascinating, yet by 1940 Southworth and Ohl were not much closer to a working high-frequency radar set than when they started. The British, meanwhile, pressed for immediate practical results by the looming threat of the Luftwaffe, had already created production-ready microwave-range detectors to pair with their magnetron-based transmitters. The balance of technical achievement would soon tip to the western side of the Atlantic, however. For Churchill decided to reveal all of Britain’s technical secrets to the Americans, even before they entered the war as full belligerents (as he assumed they, eventually, must do). He felt the risk of a leak worthwhile, balanced against the chance of bringing the full industrial might of the United States to bear on problems like atomic weaponry and radar. The British Technical and Scientific Mission (more popularly known as the Tizard Mission) arrived in Washington in September 1940, carrying among its baggage a bounty of technical wonders. The revelation of the extraordinary power of the cavity magnetron, and the effectiveness of British crystal detectors at receiving the resulting signals, galvanized the American effort on semiconductors, in support of high-frequency radar. There was a great deal of work to do, mostly in the realm of materials science. To meet the scale of demand, semiconductor crystals “had to be produced by the millions in far higher quality than previously available. Rectification needed to be improved, vulnerability to shock and burnout reduced, and variations between batches of crystal minimized.”9 A silicon point-contact rectifier The Rad Lab opened new research departments to investigate the properties of semiconductor crystals and how they could be altered to maximize their value as receivers. Silicon and germanium were the most promising materials, so the Rad Lab hedged its bets and launched an adjunct research program to study each: silicon at Penn, and germanium at Purdue.  Industrial giants Bell, Westinghouse, Du Pont, and Sylvania started their own semiconductor research programs, and also began working up new production lines for crystal detectors. These combined efforts brought the purity standard of silicon and germanium crystals from 99% at the start of the war up to 99.999% at the end – only one impurity in every 100,000 atoms. 
In the process, a substantial cadre of scientists and engineers acquired intimate familiarity with both the abstract properties of germanium and silicon and the concrete techniques needed to manipulate their composition: melting them, growing them into crystals, and doping them with tightly controlled doses of impurities (such as boron, which was shown to improve conductivity).10 Then the war ended. The demand for radar equipment disappeared, but the knowledge and skills acquired during the war did not, and the dream of a solid state amplifier had not been forgotten. The race would now be on to build one. At least three teams were well-positioned to claim the prize. West Lafayette First, at Purdue University, was the group under an Austrian-born physicist named Karl Lark-Horovitz. His talent and influence had single-handedly brought Purdue’s physics department out of obscurity, and accounted for the Rad Lab’s decision to center their germanium research at his lab. Karl Lark-Horovitz in 1947, center with pipe (at a meeting of American Association for the Advancement of Science officers) By the early 1940s, silicon was considered clearly the best material for radar rectifiers, but the material just below it on the periodic table also seemed worthy of further study. Germanium had a large practical advantage over its cousin simply because its lower melting point made it so much easier to work with: about 940 degrees (Celsius), versus about 1400 for silicon – roughly the same as steel. Because of silicon’s high melting point, it was extremely hard to make a crucible that did not bleed into the molten silicon and dirty it. So Lark-Horovitz and his fellow physicists spent the war studying the chemical, electrical, and physical properties of germanium. The most crucial problem to overcome was “back voltage”: germanium rectifiers would stop rectifying, allowing current to flow backwards, at very low voltages. This surge of back current would then fry other components of the radar. One of Lark-Horovitz’s graduate students, Seymour Benzer, spent over a year studying this problem, and finally came up with a tin doping mix that blocked back voltages of well over one hundred volts. Shortly thereafter, Bell’s manufacturing arm, Western Electric, began churning out rectifiers based on Benzer’s work for the military. Work on germanium at Purdue continued after the war. In June 1947, Benzer, now a professor, reported a strange anomaly: some experiments had generated high-frequency electrical oscillations within the germanium crystals. His colleague Ralph Bray, meanwhile, had continued studying “spreading resistance,” a project he began during the war. Spreading resistance described the way in which electricity flows within the germanium crystal under the point contact of the rectifier. Bray found that high-voltage pulses greatly diminished the resistance of n-type germanium to these flows. Without realizing it, he was witnessing the effect of so-called “minority” charge carriers. In n-type semiconductors, excess negative charges serve as the majority carriers, but positive “holes” could also carry current, and in this case the high-voltage pulses were tearing holes in the germanium structure, creating these minority carriers. Bray and Benzer were tantalizingly close to a germanium amplifier, but did not realize it. Benzer cornered Bell Labs scientist Walter Brattain at a conference in January 1948 to discuss spreading resistance.
He suggested to Brattain that maybe if they placed another point contact near the first to pick up the current, they might be able to grasp what was happening beneath the surface. Brattain quietly agreed with the suggestion and walked away. As we shall see, he knew all too well what such an experiment might uncover. Aulnay-sous-Bois The Purdue group had both the material techniques and the grounding in theoretical physics needed to make the leap to the transistor. But they could have stumbled on it only by chance. They were interested in the physics of a material, not in pursuing a new kind of device.11 Quite different conditions obtained in Aulnay-sous-Bois, France, where two former radar researchers from Germany, Heinrich Welker and Herbert Mataré, led a team focused entirely on building semiconductor devices for industry. Welker had first studied and then taught physics at the University of Munich, under the famed theorist Arnold Sommerfeld. From 1940 onward he abandoned the strictly theoretical and began working on radar for the Luftwaffe. Mataré (of Belgian descent) was raised in Aachen, where he studied physics. He joined the research arm of German radio giant Telefunken in 1939. During the war he moved his work east from Berlin to an abbey in Silesia to avoid Allied bombing raids, then back west to avoid the advancing Soviets, before finally falling into the hands of the American army. Just like their counterparts in the Allied powers, the Germans knew by the early 1940s that crystal detectors were ideal radar receivers and that silicon and germanium were the most promising materials from which to build them. Both Mataré and Welker spent their war years focused on improving the effective use of those materials in rectifiers. After the war, both underwent repeated interviews about their war work, before finally receiving an invitation from a French intelligence agent to come to Paris in 1946. The Compagnie des Freins & Signaux (Brake and Signal Company), a French subsidiary of Westinghouse, had received a contract from the French telephone authority to build solid-state rectifiers, and sought out, via their government contacts, German scientists to help them. This union of erstwhile enemies might appear awkward, but the arrangement was in fact quite congenial to both parties. The French, having been defeated in 1940, never had the chance to build up indigenous expertise in semiconductors, and desperately needed the skills of the Germans. For their part, the Germans could not effectively pursue any kind of high-tech research in their own occupied and war-torn country, and so leaped at the opportunity to continue their work. Welker and Mataré set up shop in a two-story house in the Paris suburb of Aulnay-sous-Bois, and, with the help of a team of technicians, established a successful production line for germanium rectifiers by the end of 1947. They then turned their attention to greater prizes: Welker returning to his prior interest in superconductivity, and Mataré to amplification. Herbert Mataré in 1950 Mataré had experimented during the war with rectifiers with two point contacts – duodiodes – in an attempt to reduce noise in the circuit. He now resumed those experiments, and soon found that the second cat’s whisker, when placed within 100 millionths of a meter of the first, could sometimes modulate the flow of current through it. He had built a solid-state amplifier, albeit a rather useless one.
To achieve a more consistent effect, he turned to Welker, who had developed expertise in germanium crystals during the war. Welker’s team grew larger and purer germanium crystal samples, and with that improvement in materials, by June 1948 Mataré’s point-contact amplifiers became reliable. X-ray image of a production Westinghouse transistron based on Mataré’s design, showing the two point contacts touching the germanium Mataré even had a theoretical model to explain what was happening: he believed that the second contact tore holes in the germanium, facilitating the flow of current through the first contact by providing minority carriers. (Welker did not agree, and thought the phenomenon was based on some kind of field effect). Before they had a chance to further develop either their device or their ideas, however, he and Welker learned that a group of Americans had developed exactly the same concept – a germanium amplifier with two point contacts – over six months earlier. Murray Hill At the end of the war, Mervin Kelly reformed the Bell Labs semiconductor research group, with Bill Shockley at its head. It would be a larger and better funded operation now, and relocated from the original Bell Labs building in Manhattan to the sprawling new suburban campus in Murray Hill, New Jersey. The Murray Hill campus ca. 1960 In order to become reacquainted with the state of the art in semiconductors (after spending the war doing operations research), Shockley visited Russell Ohl’s lab in Holmdel in the spring of 1945. Ohl had spent his war years working on silicon, and the time had not been idly spent. He was able to show Shockley a kind of crude amplifier he had built, which he called a “desister”. He took a point-contact silicon rectifier and sent a battery current through its bulk. The heat from the battery current seemed to reduce the resistance of the flow through the point contact, and turned the rectifier into an amplifier, capable of transferring incoming radio signals onto a circuit loud enough to power a speaker. The effect was crude and unreliable, not at all suitable for commercialization. But it was enough to confirm Shockley’s opinion that a semiconductor amplifier was indeed possible, and should be the first priority in solid-state research. The meeting with Ohl’s team also convinced Shockley that silicon and germanium should be the primary materials of interest. Not only did they offer attractive electrical properties, but men like Ohl’s metallurgist colleagues, Jack Scaff and Henry Theuerer, had made massive advances in growing, purifying and doping these crystals during the war, far exceeding the techniques available for any other semiconductor material.12 Shockley’s group would spend no more time on the copper-oxide rectifiers of the pre-war era. With Kelly’s help, Shockley began to gather his new team. Among the key members were Walter Brattain, who had helped Shockley with his first attempt at a semiconductor amplifier (in 1940), and John Bardeen, a younger physicist and a new recruit to Bell Labs. Bardeen had perhaps the deepest expertise in the physics of solids of anyone on the team – he had written his dissertation on electron energy levels within the structure of metallic sodium.13 Shockley’s first line of attack on the solid-state amplifier relied on what was later known as the “field effect.” He would suspend a metal plate just above an n-type semiconductor (which has a surplus of negative charge).
When he applied a positive charge to the plate, it would pull the crystal’s excess electrons up to the surface, creating a river of negative charge where current could flow easily. The signal to be amplified (represented by the level of charge on the plate) would thus be able to modulate the main circuit (flowing through the semiconductor surface). His theoretical understanding of the physics of solids told him this should work. But, despite repeated testing and experimentation, it never did. By March 1946, Bardeen had a well-developed theory to explain why: the surface of the semiconductor behaves differently than the interior, at the quantum level. The negative charges pulled to the surface get trapped there in “surface states”, and block the further penetration of the electric field from the plate into the material. The rest of the team found this analysis convincing, and thus set off on a new research program in three parts: first, prove that surface states exist; second, explore their properties; and third, figure out how to defeat them and make a working field-effect transistor. After a year and a half of study and experimentation, Brattain had a breakthrough on November 17, 1947. He found that when an ion-filled liquid like water was placed between the plate and the semiconductor, the electric field from the plate pushed ions down against the semiconductor, where they neutralized the charges trapped in the surface states. He could now control the electrical behavior of a hunk of silicon by varying the charge on the plate. This success gave Bardeen an idea for a new approach to building an amplifier: surround a rectifier’s point-contact with electrolytic water, then use a second wire in the water to manipulate the surface states, and thus control the level of conductivity through the primary contact. And so Bardeen and Brattain entered the home stretch. Bardeen’s idea worked, but the amplification was feeble, and came through only at very low frequencies, below the range audible to humans – and thus it was useless as a telephone or radio amplifier. So Bardeen suggested switching to the back-voltage-resistant germanium developed at Purdue, on the assumption that it had fewer charges trapped in its surface states. Suddenly they got massive amplification, though in the opposite direction from what they expected. They had discovered the minority-carrier effect – rather than the expected electrons, holes introduced through the electrolyte were boosting the current flow through the germanium. The current on the wire in the electrolyte had effectively created a p-type layer (a region of excess positive charge) at the surface of the n-type germanium. Further experimentation showed that the electrolyte was not needed: simply placing two point contacts very close together on the germanium surface sufficed to allow the current from one to modulate the current through the other. To achieve a very close spacing, Brattain wrapped a piece of thin gold foil around a triangular piece of plastic, then carefully slit the foil open at the tip. He then used a spring to press the whole triangle against the germanium, causing the two cut edges to touch the surface about two thousandths of an inch apart. This gave Bell Labs’ prototype transistor its signature look: Brattain and Bardeen’s prototype transistor Like Mataré and Welker’s device, this was more or less a classic “cat’s whisker”, but with two point contacts instead of one.
On December 16th, it generated significant power and voltage gain, at 1000 cycles, well within the audible range. A week later, with minor refinements, Bardeen and Brattain achieved 100x voltage gain and 40x power gain, and demonstrated to Bell executives that their device could reproduce audible speech.14 John Pierce, another member of the solid-state group, coined the term transistor as a riff on Bell’s name for the copper-oxide rectifier, the varistor.10 Bell kept their new creation under wraps for the next six months. They wanted to ensure they had a head start in realizing the commercial possibilities of the transistor before anyone else got their hands on it. The press conference was set for June 30, 1948, just in time to shatter any dreams of immortality that Welker and Mataré may have harbored. In the meantime the semiconductor group quietly tore itself apart. For as soon as he heard about Bardeen and Brattain’s achievement, their boss, Bill Shockley, began working to ensure that he would get the credit for it. Though he had played only a supervisory role, in all the announcement publicity Shockley got equal if not higher billing – as is clear from this publicity shot, which puts him in the center and at the lab bench: 1948 Bell Labs publicity photo of Bardeen, Shockley, and Brattain But equal credit was not good enough for Shockley. So, even before anyone outside Bell knew of the transistor, he set about reinventing it, to make it his own. It was only the first of many such reinventions. [Previous part] [Next part] Further Reading Robert Buderi, The Invention That Changed the World (1996) Michael Riordan, “How Europe Missed the Transistor,” IEEE Spectrum (Nov. 1, 2005) Michael Riordan and Lillian Hoddeson, Crystal Fire (1997) Armand Van Dormael, “The ‘French’ Transistor,” http://www.cdvandt.org/VanDormael.pdf (1994)

The Transistor, Part 1: Groping in the Dark

The road to a solid-state switch was a long and complex one. It began with the discovery that certain materials behaved oddly with respect to electricity – differently than any existing theory said they ought to. What followed is a story that reveals the “scienceification” and “institutionalization” of technology in the twentieth century. Dilettantes, amateurs and professional inventors with little or no formal scientific training made major contributions to telegraphy, telephony, and radio. As we shall see, however, almost every advance in the history of solid-state electronics came from a university-trained scientist (typically with a Ph.D. in physics) working at a major university or corporate research lab. Anyone with access to a machine shop and some basic mechanical skills could construct a relay from wire, metal, and wood. The vacuum tube required more specialized tools to make and evacuate a glass envelope. Solid-state devices, however, disappeared down a rabbit hole from which the digital switch has never returned, descending ever deeper into worlds comprehensible only by abstract mathematics, and accessible only by a panoply of tremendously expensive equipment. Galena In 1874, Ferdinand Braun, a 24-year-old physicist at the Thomas Gymnasium1 in Leipzig, produced the first of many important scientific publications in his long career. “On the Conduction of Electrical Currents through Metal Sulfides” was accepted by Poggendorff’s Annalen, the premier journal for work in the physical sciences. Despite its dull title, Braun’s paper described a series of fascinating and perplexing experimental results. Ferdinand Braun Braun became intrigued by the sulfides – mineral crystals consisting of sulfur bound to some metal – through the work of Johann Hittorf. As far back as 1833, Michael Faraday had noted that the conductivity of silver sulfide increased with temperature, the exact opposite of the behavior of metallic conductors. Hittorf had reported his meticulous quantitative measurements of this effect in the 1850s, in both silver and copper sulfides. Now Braun, using a clever experimental apparatus that pressed a metal wire into the sulfide crystal with a spring in order to ensure good contact, found something far stranger. The conductivity of his crystals was directional – that is to say, current would flow well in one direction, but if he reversed the polarity on the battery, suddenly the current dropped dramatically. The crystals acted more like conductors (such as normal metals) in one direction but more like insulators (such as glass or rubber) in the other. This property was known as rectification, for its ability to straighten (rectify) a “wiggly” alternating current into a “flat” direct current. Around the same time, researchers discovered other strange properties in materials such as selenium, which could be smelted out from some metal sulfide ores. Selenium increased in conductivity or even generated voltage when exposed to light, and could also be used to rectify current. Was there any connection to the sulfide crystals? Without any theoretical model to explain what was happening, confusion reigned. But the lack of a theory was no obstacle to practical applications. By the late 1890s, Braun had become a full professor at the University of Strasbourg – recently annexed from France in the Franco-Prussian War and rechristened the Kaiser-Wilhelm University. There he became enmeshed in the exciting new world of radiotelegraphy, or wireless.
Approached by a group of entrepreneurs, he agreed to join their venture to build a wireless system based on transmission through the water. But he and his partners soon abandoned their original idea in favor of the aerial transmission used by Marconi and others. Among the aspects of radio which Braun’s group sought to improve was the then-standard wireless receiver, the coherer. It relied on the fact that Hertzian waves would cause metal filings to cohere into a clump, allowing current from a battery to pass through to a signaling device. It worked, but responded only to relatively strong signals, and required constant tapping to decohere the filings. Braun remembered his old experiments with sulfide crystals, and in 1899, he reconstructed his old experimental apparatus with a new purpose – as a detector of wireless signals. It used the rectification effect to transform the tiny, oscillating current produced by passing radio waves into a direct current that could drive a small speaker, producing audible crackles with each dot and dash. This device later became known as the “cat’s whisker” detector, after the appearance of the twist of wire used to lightly touch the top of the crystal. In British India (modern Bangladesh), the scientist and inventor Jagadish Bose built a similar device, perhaps as early as 1894. Others soon followed with detectors based on silicon and carborundum (silicon carbide). But it was galena, or lead sulfide, which has been smelted for lead since ancient times, that became the preferred material for these crystal detectors. Cheap and easy to construct, they became wildly popular among early radio hobbyists. Moreover, unlike the strictly binary coherer (the filings were either cohered or not), the crystal rectifier could reproduce a continuous signal. And so it could render audible transmissions of voice and music, not just Morse code dots and dashes. A galena cat’s whisker detector. The small bit of wire on the left is the ‘whisker’ and chunk of silver material below it is the galena crystal. However, as many a frustrated hobbyist would learn, it could take minutes or even hours of tedious hunting around on the surface of the crystal to find that magic spot that would produce good rectification. And the unamplified sounds they produced were feeble and tinny. By the 1920s, vacuum tube receivers with triode amplifiers had made crystal detectors all but obsolete for most purposes. Only their low cost remained an attraction. This brief interlude as a radio receiver seemed to be the extent of the practical value of the curious electrical properties that Braun and others had uncovered. Copper Oxide Then, in 1920, another physicist, named Lars Grondahl,2 found something odd in his experimental apparatus. Grondahl, the first of several bright and restless men from the American West in our story, was the son of a civil engineer. His father, who immigrated from Norway in 1880, spent decades working on the new railroads of California, Oregon, and Washington. Grondahl seemed at first to leave his father’s world of engineering behind, pursuing a Ph.D. in physics at Johns Hopkins, and going into academia. But then he, too, found his way into the railroad business, taking a position as director of research for Union Switch and Signal, a subsidiary of industrial giant Westinghouse that supplied equipment to the railroad industry. 
Different accounts give contradictory explanations of what first motivated Grondahl’s investigations, but whatever the reason, he began experimenting with disks of copper that had been heated on one side to create an oxidized layer. While testing out the disks, he noticed an asymmetry in the current flows – the resistance was three times as great in one direction as in the other. The copper/copper-oxide disk was rectifying current, just like a sulfide crystal. Schematic of a copper-oxide rectifier Grondahl spent the next six years developing this phenomenon into a production-ready rectifier, with the help of another Union Switch researcher, Paul Geiger, before filing a patent and announcing his find to the American Physical Society in 1926. It was an immediate commercial hit. With no fragile filament, it was far more reliable than a vacuum tube rectifier based on Fleming’s “valve” principle, and could be made more cheaply. Unlike Braun’s crystal rectifiers, it worked on the first try, and, because of the much larger contact area between metal and oxide, it worked across a much greater range of currents and voltages. It could charge batteries, detect signals in a variety of electrical systems, and act as a safety bypass in high-power generators. When used as a photocell rather than a rectifier, the disks could act as light meters, and were especially useful in photography. Other researchers developed selenium rectifiers around the same time, which found similar applications. A copper-oxide rectifier stack. Putting multiple copper/copper-oxide discs in series increased their resistance to back voltage, making them suitable for higher-voltage applications. A few years later, two Bell Labs physicists, Joseph Becker and Walter Brattain, took up the topic of the copper rectifier – they wanted to know how it worked, and how it could be put to work for the Bell System. Brattain in later years – circa 1950 Brattain hailed from the same part of the country as Grondahl, the Pacific Northwest, where he grew up on a farm just miles from the Canadian border. In high school he developed an interest in physics, showed an aptitude for it, and eventually took his Ph.D. at the University of Minnesota in the late 1920s, arriving at Bell Labs in 1929. Among other coursework, he had studied the newest theoretical physics emerging from Europe, known as quantum mechanics.3 The Quantum Revolution This new theoretical armature had slowly evolved over the previous three decades, and would, in due time, help to explain all the strange phenomena that had been observed over the years in materials like galena, selenium and copper oxide. A cohort of mostly young scientists, mostly from Germany or its neighbors, had created a great quantum upheaval in physics. Everywhere they looked they found, not the smooth, continuous world that they had been taught to expect, but strange, discrete lumps. It began in the 1890s. Max Planck, a highly renowned full professor at the University of Berlin, had decided to tackle a well-known but still unsolved problem: how does a “black-body” (an ideal substance that absorbs all incoming energy without reflection) emit radiation across the electromagnetic spectrum? Various models had been tried, none of which matched the experimental results – they failed at either the low or the high end of the spectrum.
Planck found that if he assumed that energy was emitted from the body in little “packets” of discrete size, he could make a simple law for the relationship between frequency and energy that perfectly matched all the empirical results. Shortly thereafter, Einstein found the same to be true for the absorption of light (the first hint of the photon), and J.J. Thomson showed that electricity, too, was not carried by a continuous fluid or wave but by a discrete particle – the electron. Niels Bohr then created a model that explained the radiation given off by excited atoms by positing distinct electron orbits in the atom, each with its own energy level. The name is misleading, for they behaved nothing like macroscopic planetary orbits – in Bohr’s model electrons moved instantaneously from one orbit, or energy level, to the next, without passing through any intermediate point. Finally, in the 1920s, Erwin Schrödinger, Werner Heisenberg, Max Born, and others built a general mathematical framework known as quantum mechanics, which subsumed all the ad hoc quantum models that had been built over the previous two decades. By this time also, physicists had become fairly confident that materials like selenium and galena, which displayed rectifying and photovoltaic properties, belonged to a distinct class of materials that they dubbed semiconductors. This classification took so long for several reasons: First, insulators and conductors were themselves expansive categories. So-called “conductors” vary greatly in their conductivity, and likewise (to a lesser degree) with insulators, and it was not obvious that any given semiconductor could not be assigned to one class or the other. Moreover, until the middle of the twentieth century, it was not possible to obtain or create highly purified materials, and any strangeness in the conductive properties of a natural mineral could always be attributed to impurities. Now physicists had available both the mathematical tools of quantum mechanics and a new distinctive class of materials to apply them to. British theorist Alan Wilson was the first to effectively put these together to provide an overall model of what semiconductors are and how they work, in 1931. Wilson first argued that conducting materials are distinguished from insulating ones by the state of their energy bands. Quantum mechanics had posited that electrons can only exist in a finite number of energy levels, which in a single atom are carved into shells or orbitals. When those atoms are compressed together in a material structure, however, it is better to think of continuous energy bands that run through the material. In conductors, there are empty slots available in the material’s highest energy band, and an electrical force can easily jostle electrons up into these free spaces. In insulators, by contrast, the band is full, and it is a long climb up to the next band, the conduction band, where electricity could move freely. This led him to the conclusion that impurities – foreign atoms in the material’s structure – must contribute to semi-conduction. They could either contribute excess electrons to the material, electrons that could easily jump up into the conduction band, or contribute holes – a lack of electrons relative to the rest of the material – creating empty energy slots into which free electrons could move. The former later became known as n-type semiconductors (for their excess negative charge), and the latter p-type.
Wilson finally proposed that the rectification of current by semiconductors could be explained in terms of quantum tunneling, the sudden jump of electrons across a thin electrical barrier within the material. The theory seemed plausible, yet it predicted that current in a rectifier would flow from copper oxide to copper, when in fact it did just the opposite.4 Thus, despite the advances made by Wilson, semiconducting materials still proved extremely resistant to explanation. As was slowly becoming apparent, microscopic changes in their crystalline structure and the concentrations of impurities could have outsize effects on their macroscopic electrical behaviors. Undeterred by this lack of understanding – for indeed no one could yet explain the experimental phenomena observed by Braun some 60 years prior – Brattain and Becker developed an efficient production process for copper-oxide rectifiers for their employer. The Bell System quickly moved to replace vacuum tube rectifiers throughout its system with this new device, which its engineers dubbed the varistor, since its resistance varied with direction. The Golden Prize Mervin Kelly, a physicist and former head of Bell Labs’ vacuum tube department, was fired up by this accomplishment. Electronic vacuum tubes had proved invaluable to Bell over the previous twenty years or so, and could perform functions impossible for the earlier generation of mechanical and electro-mechanical components. But they ran hot, burned out regularly, consumed large amounts of power, and were a huge maintenance burden. Kelly meant to reconstruct the Bell system yet again on more reliable and durable electronic components – solid-state components like the varistor, which required no sealed gas or vacuum envelope to function, nor any heated filament. In 1936 he became head of research for Bell Labs as a whole, and began to redirect his organization towards this vision. With a solid-state rectifier in hand, the obvious next step for the field was a solid-state amplifier. Coincidentally, of course, just like the vacuum tube amplifier, such a device could also function as a digital switch. This was of particular interest to Bell, which still had vast numbers of electro-mechanical digital switches in its telephone exchanges. But more widely sought after was a more reliable and compact, less-power-hungry and cooler replacement for the vacuum tube in telephone systems, radios, radars and other analog equipment, where it was used to amplify feeble signals into something perceptible by human ears or eyes. In 1936, Bell Labs finally lifted the hiring freeze that it had imposed during the Great Depression. Kelly immediately began acquiring experts in quantum mechanics to help fuel his solid-state research program, among them William (Bill) Shockley, another Westerner, from Palo Alto, California. The topic of his freshly minted thesis from MIT could not have better suited Kelly’s needs: “Electronic Bands in Sodium Chloride.” At the same time, Brattain and Becker continued their investigation of the copper-oxide rectifier, in pursuit of the greater prize of a solid-state amplifier. The most obvious way to make one was by analogy to the vacuum tube. Just as Lee De Forest had taken a vacuum tube rectifier and placed an electrified grid between the source and the sink of the current, so did Brattain and Becker imagine inserting a grid into the interface between the copper and copper oxide, where the act of rectification was presumed to occur.
However, given the thinness of this layer, it seemed to them impossible to actually do this, and they made no real headway. Meanwhile, developments elsewhere showed that Bell Labs was not the only party interested in solid-state electronics. In 1938, Rudolf Hilsch and Robert Pohl published the results of their experiments at the University of Göttingen with a working solid-state amplifier, created by inserting a grid into a crystal of potassium bromide. It was a laboratory device of no practical value – most notably, it operated at frequencies of one hertz or less.5 Still, such a milestone could not fail to excite anyone interested in the world of solid-state. That same year, Kelly placed Shockley in a new independent solid-state research group, and gave him and his colleagues – Foster Nix and Dean Wooldridge – free rein to explore the possibilities of the medium. Shockley’s first major inspiration in this new role came from reading British physicist Nevill Mott’s 1938 “Theory of Crystal Rectifiers,” which finally explained how Grondahl’s copper-oxide rectifier worked. Mott used the mathematics of quantum mechanics to work out how an electrical field formed at the junction of conducting metal and semiconducting oxide, and how electrons ‘jump’ over this electric barrier, rather than tunneling through it as Wilson had proposed. Current flows more easily from metal to semi-conductor than vice-versa because the metal has many more free electrons available.6 This led Shockley to exactly the same idea that Brattain and Becker had considered and rejected years earlier – to make a solid-state amplifier by inserting a piece of oxidized copper mesh into the copper-oxide interface. He hoped that applying current to the mesh would grow the barrier, constricting the flow of current from copper to oxide and thus creating an inverted, amplified version of the signal on the mesh. His first, crude effort was an utter failure, so he went for help to someone with more polished laboratory skills who was very familiar with rectifiers – Walter Brattain. Though he had no doubts about the outcome, Brattain agreed to humor Shockley, and built a much more sophisticated version of the ‘mesh’ amplifier. It, too, failed utterly. Then war intervened, leaving Kelly’s new research program in disarray. Kelly himself took charge of the radar working group at Bell Labs, under the auspices of the main American radar research center at MIT. Brattain worked under him for a short time before moving on to study the magnetic detection of submarines for the Navy. Wooldridge worked on fire-control systems7, Nix on gaseous diffusion for the Manhattan Project, and Shockley went into operations research, in support first of the antisubmarine campaign in the Atlantic, then the strategic bombing campaign in the Pacific. Despite this short-term disruption, however, the war proved no impediment to the growth of solid-state electronics. On the contrary, it brought a massive new influx of resources to the field, and a new focus on two materials in particular: germanium and silicon. [Previous part] [Next part] Further Reading Ernest Braun and Stuart MacDonald, Revolution in Miniature (1978) Friedrich Kurylo and Charles Susskind, Ferdinand Braun (1981) G. L. Pearson and W. H. Brattain, “History of Semiconductor Research,” Proceedings of the IRE (December 1955). Michael Riordan and Lillian Hoddeson, Crystal Fire (1997)

The Era of Fragmentation, Part 1: Load Factor

By the early 1980s, the roots of what we know now as the Internet had been established – its basic protocols designed and battle-tested in real use – but it remained a closed system almost entirely under the control of a single entity, the U.S. Department of Defense. Soon that would change, as it expanded to academic computer science departments across the U.S. with CSNET. It would continue to grow from there within academia, before finally opening to general commercial use in the 1990s. But that the Internet would become central to the coming digital world, the much touted “information society,” was by no means obvious circa 1980. Even for those who had heard of it, it remained little more than a very promising academic experiment. The rest of the world did not stand still, waiting with bated breath for its arrival. Instead, many different visions for bringing online services to the masses competed for money and attention. Personal Computing By about 1975, advances in semiconductor manufacturing had made possible a new kind of computer. A few years prior, engineers had figured out how to pack the core processing logic of a computer onto a single microchip – a microprocessor. Companies such as Intel began to offer high-speed short-term memory on chips as well, to replace the magnetic core memory of previous generations of computers. This brought the most central and expensive parts of the computer under the sway of Moore’s Law, which, in turn, drove the unit price of chip-based computing and memory relentlessly downward for decades to come. By the middle of the decade, this process had already brought the price of these components low enough that a reasonably comfortable middle-class American might consider buying and building a computer of his or her own. Such machines were called microcomputers (or, sometimes, personal computers). The claim to the title of the first personal computer has been fiercely contested, with some looking back as far as Wes Clark’s LINC or the Lincoln Labs TX-0, which, after all, were wielded interactively by a single user at a time. Putting aside strict questions of precedence, any claimant to significance based on historical causality must concede to one obvious champion. No other machine had the catalytic effect that the MITS Altair 8800 had, in bringing about the explosion of microcomputing in the late 1970s. The Altair 8800, atop an optional 8-inch floppy disk unit The Altair fell into the electronic hobbyist community like a seed crystal. It convinced hobbyists that it was possible for a person to build and own their own computer at a reasonable price, and they coalesced into communities to discuss their new machines, like the Homebrew Computer Club in Menlo Park. Those hobbyist cells then launched the much wider wave of commercial microcomputing based on mass-produced machines that required no hardware skills to bring to life, such as the Apple II and Radio Shack TRS-80. By 1984, 8% of U.S. households had their own computer, a total of some seven million machines1. Meanwhile, businesses were acquiring their own fleets of personal computers at the rate of hundreds of thousands per year, mostly the IBM 5150 and its clones2.
At the higher end of the price range for single-user computers, a growing market had also appeared for workstations from the likes of Silicon Graphics and Sun Microsystems – beefier computers equipped standard with high-end graphical displays and networking hardware, intended for use by scientists, engineers and other technical specialists.None of these machines would be invited to play in the rarefied world of ARPANET. Yet many of their users wanted access to the promised fusion of computers and communications that academic theorists had been talking up in the popular press since Taylor and Licklider’s 1968 “Computer As a Communication Device,” and even before. As far back as 1966, computer scientist John McCarthy had promised in Scientific American that “[n]o stretching of the demonstrated technology is required to envision computer consoles installed in every home and connected to public-utility computers through the telephone system.”  The range of services such a system could offer, he averred, would be impossible to enumerate, but he put forth a few examples: “Everyone will have better access to the Library of Congress than the librarian himself now has. …Full reports on current events, whether baseball scores, the smog index in Los Angeles or the minutes of the 178th meeting of the Korean Truce Commission, will be available for the asking. Income tax returns will be automatically prepared on the basis of continuous, cumulative annual records of income, deductions, contributions and expenses.”Articles in the popular press described the possibilities for electronic mail, digital games, services of all kinds from legal and medical advice to online shopping. But how, practically, would all these imaginings take shape? Many answers were in the offing. In hindsight, this era bears the aspect of a broken mirror. All of the services and concepts that would characterize the commercial internet of the 1990s – and then some – were manifest in the 1980s, but in fragments, scattered piecemeal across dozens of different systems. With a few exceptions3, these systems did not interconnect, each stood isolated from the others, a “walled garden,” in later terminology. Users on one system had no way to communicate or interact with those on another, and the quest to attract more users was thus for the most part a zero-sum game.In this installment, we’ll consider one set of participants in this new digital land grab, time-sharing companies looking to diversity into a new market with attractive characteristics.Load FactorIn 1892, Samuel Insull, a protégé of Thomas Edison, headed west and to lead a new  branch of Edison’s electrical empire, the Chicago Edison Company. There he consolidated many of the core principles of modern utility management, among them the concept of the load factor – the average load on the electrical system divided by its highest load. The higher the load factor the better, because any deviation below 1/1 represents waste – expensive capital capacity that’s needed to handle the peak of demand, but left idle in the troughs. Insull therefore set out to fill in the troughs in the demand curve by developing new classes of customers that would use electricity at different times of day (or even in different seasons), even if it meant offering them discounted rates. In the early years of electrical power, the primary demand came from domestic lighting, with most demand in the evening. So Insull promoted its use for industrial machinery to increase daytime use. 
This still left dips in the morning and evening rush, so he convinced the Chicago streetcar systems convert to electrical traction. And so Insull maximized the value of his capital investments, even though it often meant offering lower prices[^hughes].Insull in 1926, when he was pictured on the cover of Time magazine.[^hughes]: Thomas P. Hughes, Networks of Power (1983), 216-225. The same principles still applied to capital investments in computers nearly a century later, and it was exactly the desirability of a balanced load factor and the incentive for offering lower off-peak prices that made possible two new online services for microcomputers that launched nearly simultaneously in the summer of 1979: CompuServe and The Source.CompuServeIn 1969, the newly-formed Golden United Life Insurance company of Columbus, Ohio created a subsidiary called the Compu-Serv Network. The founder of Golden United wanted to be a cutting-edge, high-tech company with computerized records, and so he had hired a young computer science grad named John Goltz to lead the effort. Goltz, however, was gulled by a DEC salesman into buying a PDP-10, an expensive machine with far more computer power than Golden United currently needed. The idea behind Compu-Serv was to turn that error into an opportunity, by selling the excess computer power to paying customers who would dial into the Compu-Serv PDP-10 via a remote terminal. In the late 1960s this time-sharing model for selling computer service was spreading rapidly, and Golden United wanted to get its own cut of the action. In the 1970s the time-sharing subsidiary spun off to operate independently, re-branded itself as CompuServe, and built its own packet-switching network in order to be able to offer affordable, nationwide access to its computer centers in Columbus.A national market not only gave the company access to more potential customers, it also extended the demand curve for computer time, by spreading it across four time zones. Nonetheless, there were still a large gulf of time between the end of business hours in California and the start of business on the East Coast, not to mention the weekends. CompuServe CEO Jeff Wilkins saw an opportunity in the growing fleet of home computers, many of whose owners whiled away their evening and weekend hours on their electronic hobby. What if they were offered access to email, message boards, and games on CompuServe computers, at discounted rates for evening and weekend access ($5 an hour, versus $12 during the work day4)?So Wilkins launched a trial of a service he called MicroNET (intentionally held at arms length from the main CompuServe brand) and after a slow start it gradually proved a resounding success. Because of CompuServe’s national data network, most users only had to dial a local number to reach MicroNET, and thus avoided long-distance telephone charges, despite the fact that the actual computers they were connecting to resided in Ohio. His experiment having proved itself, Wilkins dropped the MicroNET name and folded the service under the CompuServe brand. Soon the company began to offer services tailored to the needs of microcomputer users, such as games and other software available for sale on-line.But by far the most popular services were the communications platforms. For long-lived public content and discussions there were the forums, ranging across every topic from literature to medicine, from woodworking to pop music. 
Forums were generally left to their own devices by CompuServe, being administered and moderated by ordinary users who took on the role of “sysops” for each forum. The other main communications platform was the “CB Simulator”, coded up over the weekend by Sandy Trevor, a CompuServe executive. Named after citizen band (CB) radio, a popular hobby at the time, it allowed users to have text-based chats in real-time in dedicated channels, a similar model to the ‘talk’ programs offered on many time-sharing systems. Many dedicated users would hang out for hours on CB Simulator, shooting the breeze, making friends, or even finding lovers.The SourceHot on the heels of MicroNET – launching just eight days later in July of 1979 – came another on-line service for microcomputers that arrived at essentially the same place as Jeff Wilkins, despite starting from a very different angle. William (Bill) Von Meister, a son of German immigrants, whose father had helped establish zeppelin service between Germany and the U.S., was a serial enterpreneur. He no sooner got some new enterprise off the ground than he lost interest, or was forced out by disgruntled financial backers. He could not have been more different than the steady Wilkins. As of the mid-1970s, his greatest successes to date were in electronic communications – Telepost, a service which sent messages across the country electronically to the switching center nearest its recipient, and then covered the last mile via next-day mail; and TDX, which used computers to optimize the routing of telephone calls, reducing the cost of long-distance telephone service within large businesses.Having, predictably, lost interest in TDX, Von Meister’s newest enthusiasm in the late 1970s was Infocast, which he planned to launch in McClean, Virginia. In effect, it was an extension of the Telepost concept, except instead of using mail for the last mile delivery, he would use the FM radio sideband (basically the same mechanism that’s used to transmit station identification, artist, and song title to the screens of modern radios) to deliver digital data to computer terminals. In particular, he planned to target highly distributed business with lots of locations that needed regular information updates from their central office, such as banks, insurance companies, and grocery stores.Bill Von MeisterBut what Von Meister really wanted to build was a national network to deliver data into homes, to terminals by the millions, not thousands.  Convincing a business to spend $1000 on a special FM receiver and terminal was one thing, however, to ask the same of consumers was quite another matter. So Von Meister went casting about for another means to deliver news, weather, and other information into homes; and he found it, in the hundreds of thousands of microcomputers that were sprouting like mushrooms in american offices and dens, in homes ready-equipped with telephone connections. He partnered with Jack Taub, a deep-pocketed and well-connected businessman who loved the concept and wanted to invest. Taub and Von Meister initially called the new service CompuCom, a mix of truncation and compounding typical for a computer company of the day, but later settled on a much more abstract and visionary name – The Source.The main problem they faced was a lack of any technical infrastructure with which to deliver this vision. 
To get it they partnered with two companies with, collectively, the same resources as CompuServe – time-shared computers and a national data communications network, both of which sat mostly idle on evenings and weekends. Dialcom, headquartered across the Potomac in Silver Springs, Maryland, provided the computing muscle. Like CompuServe, it had begun in 1970 as a time-sharing service5, though by the end of the decade it offered many other digital services. Telenet, the packet-switched network spun off by Bolt, Beranek and Newman earlier in the decade, provided the communications infrastructure. By paying discounted rates to Dialcom and Telenet for off-peak service, Taub and Von Meister were able to offer access to The Source for $2.75 an hour on nights and weekends, after an initial $100 membership fee6Other than the pricing structure, the biggest difference between The Source and CompuServe was how they expected people to use their systems. The early services that CompuServe offered, such as email, the forums, CB, and the software exchange, generally assumed that users would form their own communities and build their own superstructures atop a basic hardware and software foundation, much like corporate users of time-sharing systems. Taub and Von Meister, however, had no cultural background in time-sharing. Their business plan centered around providing large amounts of information for the upscale, professional consumer: a New York Times database, United Press International news wires, stock information from Dow Jones, airline pricing, local restaurant guides, wine lists. Perhaps the single most telling detail was that Source users were welcomed by a menu of service options on log-in, CompuServe users by a command line.In keeping with the personality differences between Wilkins and Von Meister, the launch of The Source was as grandiose as MicroNET’s was subtle, including a guest appearance by Isaac Asimov to announce the arrival of science fiction become science fact. Likewise in keeping with Von Meister’s personality and his past, his tenure at The Source would not be lengthy. The company immediately ran into financial difficulties due to his massive overspending. Taub and his brother had a large enough ownership share to oust Von Meister, and they did just that in October of 1979, just a few months after the launch party.The Decline of Time-SharingThe last company to enter the microcomputing market due to the logic of load factor was General Electric Information Services (GEIS), a division of the electrical engineering giant. Founded in the mid-1960s, when GE was still trying to compete in the computer manufacturing business, GEIS was conceived as a way to try to outflank IBM’s dominant position in computer sales. Why buy from them, GE pitched, when you can rent from us? The effort made little dent in IBM’s market share, but made enough money to receive continued investment into the 1980s, by which point GEIS owned a worldwide data network and two major computing centers one of them in Cleveland, Ohio and the other in Europe.In 1984, someone at GEIS noticed the growth of The Source and CompuServe (the latter had, by that time, over 100,000 users), and saw a way to put their computing centers to work in off-peak hours. To build their own consumer offering they recruited a CompuServe veteran, Bill Louden. 
Louden, disgruntled with managers from the corporate sales side who began muscling in on the increasingly lucrative consumer business, had jumped ship with a group of fellow defectors to try to build their own online service in Atlanta, called Georgia OnLine. They tried to turn the lack of access to a national data network into a virtue, by offering services tailored for the local market, such as an events guide and classified ads, but the company went bust, so Louden was very receptive to the offer from GEIS.Louden called the new service GEnie, a backronym for General Electric Network for Information Exchange. It offered all of the services that The Source and CompuServe had by now made table stakes in the market – a chat application (CB simulator), bulletin boards, news, weather, and sports information.GEnie was the last personal computing service born out of the time-sharing industry and the logic of the load factor. By the mid-1980s, the entire economic balance of power had begun to shift. As small computers proliferated in the millions, offering digital services to the mass market became a more and more enticing business in its own right, rather than simply a way to leverage existing capital. In the early days, The Source and CompuServe were tiny, with only a few thousand subscribers each in 1980. A decade later, millions of subscribers paid monthly for on-line services in the U.S. – with CompuServe at the forefront of the market, having absorbed its erstwhile rival, The Source. The same process also made time-sharing less attractive to businesses – why pay all the telecommunications costs and overhead of accessing a remote computer owned by someone else, when it was becoming so easy to equip your own office with powerful machines? Not until fiber optics drove the unit cost of communications into the ground would this logic reverse direction again.Time-sharing companies were not the only route to the consumer market, however. Rather than starting with mainframe computers and looking for places to put them to work, others started from the appliance that millions already had in their homes, and looked for ways to connect it to a computer.

Read more
The Rail Revolution

As we noted last time, twenty years elapsed from the time when Trevithick gave up on the steam locomotive before rails would begin to seriously challenge canals as major transport arteries for Britain, not mere peripheral capillaries. To complete that revolution required improvements in locomotives, better rails, and a new way of thinking about the comparative economics of transportation. Locomotives: The Trevithick Tradition The evolution of locomotive technology in the 1810s and 1820s took place entirely in the coal-mining regions of the north, and almost entirely along the River Tyne near Newcastle, into whose waters a torrent of coal flowed over a of tangle of railways. Because of this, Trevithick’s most lasting impact on history did not come from Penydarren, nor the “dragon,” nor Catch-me-who-Can, but an engine built for Christopher Blackett, proprietor of the Tyneside colliery of Wylam. Blackett’s colliery would become the most prolific locomotive-building center of the 1810s. In 1804, Blackett had learned of Trevithick’s locomotive, and had a skilled workman who had been at Penydarren reproduce the design for him in Northumberland. Nothing came of this first attempt, as Blackett realized that the five miles of wooden rails at his colliery would never survive the attentions of the five-ton locomotive. He put it to use as a stationary engine instead. After relaying his tracks in cast iron, he wrote to Trevithick in 1808 about trying again, but by that time the disillusioned inventor had already given up on locomotives for other schemes.[2] The story of exactly what happened at Wylam next is not entirely clear, and is further muddied by competing claims for precedence as the key figure in the construction of the first reallocomotive, claims pursued with a partiality verging on mendacity by the protagonists and their descendants well into the twentieth century.[3] But sometime in the 1810s, Blackett decided to try again, and shift for himself this time, having the locomotive construction done at his own works under the direction of his own “viewer” (the title for the general manager of a coal mine), William Hedley, with consultation from his smith foreman, Timothy Hackworth.[4] It may be that Blackett was stimulated to action by the activities of John Blenkinsop at the Middleton Colliery Railway near Leeds. The belief that a smooth wheel could not drive a vehicle on a smooth track still had currency, and inventors continued to look for alternative forms of steam traction: in 1813 one inventor, William Brunton, constructed a literal translation of a horse into mechanical form that would pull a vehicle along with metal legs.[5] Blenkinsop’s solution was a cog railway engine, built by the mechanic Matthew Murray, with a toothed drive wheel running in a rack set on the outside edge of the track. This Middletown engine ran consistently for years afterward, hauling up to thirty wagons at a leisurely three miles an hour.[6] The Blenkinsop-Murray rack locomotive Salamanca, named after a victorious Anglo-Portuguese battle against Napoleonic forces. Whether influenced by Blenkinsop or not, Blackett (like Trevithick) used a hand-powered truck to convince himself that a smooth-wheeled vehicle could in fact work, then had Hedley and Hackworth construct his first real locomotive. They clearly modeled their design on Trevithick’s Penydarren, with a return flue boiler and a flywheel. This first engine was too feeble. Nothing deterred, Blackett tried again. 
This second engine, known to history as Puffing Billy (it was originally named after Blackett’s daughter Jane), made considerable advances on Trevithick’s plan: it had two alternating pistons, which eliminated the need for a flywheel to sustain the vehicle’s momentum through the dead zones in the stroke. This change also made it easy to supply power to wheels on both sides, which avoided heavily wearing one side of the rail. Rather than direct gearing, vertical rods connected to small geared spur wheels brought power from the engine down to the wheels. However, Billy was too heavy even for the cast iron track, and consistently broke the rails. So, Blackett tried a third time. This time the builders placed the engine on two four-wheeled trucks, spreading the weight over twice as many wheels. This did the trick. Finally, Wylam had a usable steam locomotive.[7] The eight-wheeled Wylam locomotive design. One might wonder why Blackett persisted through so many failures. What we might see in retrospect as determination appeared to most contemporaries as folly, if not madness. Although the steam locomotive concept had a certain romantic appeal to nineteenth-century gearheads, economic forces also made it worthwhile to seek out any possible replacement for horse-power at exactly this time. Since the beginning of the Napoleonic Wars, Britain had been cut off from European trade and had been supplying its own armies overseas, and the price of horses and the grain to feed them rose accordingly. Oat prices in the 1810s were 50% or more higher than they had been in the 1790s, and the demands of the army’s operations also made the horses themselves dear. So, it is no coincidence that multiple steam locomotive experiments sprung up in this period.[8] George Stephenson had the same cost-cutting reason in mind when he built his first locomotive in 1814. Stephenson, like his father before him, became a steam engine minder in the Newcastle coal district, working his way up from assistant fireman (responsible for stoking the furnace) to brakeman (responsible for regulating the speed of the machinery that lifted cages of coal out of the mine).[9] But he was not an ordinary sort of workman: when his colleagues went to drink and bet on dogfights, he instead disassembled his engine to better understand its workings, cleaned it, and put it together again.[10]  In 1806, his young wife and infant daughter died, leaving him alone with a three-year-old son and infirm parents to care for. He considered leaving for a fresh start in the United States, but lacked he money. Nonetheless, he scraped together the funds to ensure that his son Robert would benefit from a more formal education than he did, and Robert tutored his father in turn, advancing the elder Stephenson’s mechanical and scientific knowledge. A turn of fortune finally came in 1810, when George repaired a faulty pumping engine that had defied all the attempts to its operators to make it run well enough to drain the pit. Stephenson thus gained a reputation as an “engine-doctor,” a kind of consulting engineer for problem engines in the region. This led to a position as “engine-wright” at the Killingworth High Pit colliery in 1812, with a salary of one hundred pounds a year, marking a permanent departure from the laboring class.[11] Stephenson, with the support of Killingworth’s owner, Thomas Liddell, was determined to bring down the cost of transporting coal from the mine to the river. 
He added inclines in several sections with a rope pull that used the weight of descending wagons to drag returning wagons up the incline. But he believed still more savings could be found with a steam locomotive. He and the workmen at Killingworth completed their first attempt, the Blücher, in July 1814. It was named in honor of the Prussian general who had helped to secure the defeat of Napoleonic France just a few months before.  Stephenson had learned, and borrowed, from the work at Middleton and at Wylam, but introduced one major improvement: the so-called “steam blast,” a suction force created by releasing the spent steam from the cylinders into the furnace exhaust pipe, rather than into the open air. His initial motivation for redirecting the steam may have been to serve as a muffler: neighbors complained consistently of the loud noise created by the squeal of steam from early locomotives. But the ultimate value of this change came from the fact that it acted like a bellows, drawing air through the furnace and thus combusting the coal more vigorously, delivering more power to the wheels. With the enhanced power from the steam blast, Stephenson had an economically sound engine, but it still ran in an unsatisfactory, jerky fashion. Stephenson identified the problem as the gears used to deliver power to the wheels in all locomotives since Trevithick’s. So, in 1815 he had a secondlocomotive constructed, which dispensed with the gearing by sending power from the piston through a rigid connecting rod directly to a pin on the wheel: the engine could thus work the wheel like a crank. This was trickier than it sounds, because he could not rely on the left and right rails running totally even. The connecting rod therefore required a ball-and-socket joint so each side could move up and down with the axle as it tilted one way or the other.[12] Stephenson’s Killingworth engine. Rails: A Materials Revolution So, the locomotive advanced bit by bit, becoming ever more powerful, reliable, and efficient. But the iron beast strode on feet of clay – its rails. Well, in fact, the rails were made of iron, too. But they did keep breaking. The traditional railway had to be, in effect, reinvented to serve as a suitable substructure for the locomotive. This created something of a catch-22, since to prove the value of the locomotive required first adopting rail designs that were themselves unproven and more costly than the status quo. Promoters of the locomotive would have to sell the capitalists building new railways on the rail and the machine to run upon it at the same time. In the first decades of the nineteenth century, vertical, flat-topped rails replaced the L-shaped plateway rails that were common around 1800 in new railway construction. Flanges on the inner lip of the wheel kept the vehicle on course. This approach reduced friction and used less metal per yard of track. In the 1820s locomotive makers also began to use coned wheels, with a narrower radius at the outside than at the inside, which greatly improved their ability to hold a consistent line on the track, especially around corners. So far, all of this was in effect a rediscovery of what had been standard practice on wooden railways in the eighteenth century.[13] A joint patent between George Stephenson and the chemist and engineer Wiliam Losh made some minor improvements to the design of cast iron rails, but the necessary improvements in rail design to make the steam locomotive a success appeared in 1820 in the work of John Birkinshaw. 
Birkinshaw introduced a whole host of innovations all at once. Most importantly, he had figured out how to roll sections of wrought iron rail that would be far tougher than the cast iron equivalent, allowing locomotives to swell in size and weight without concern for breaking the rails. He also replaced the traditional flat top for the rail with a convex curve, which would provide a smooth surface to ride on even if (as was often the case) the rail was not installed perfectly vertically. He realized that the sides of the rail were not needed for strength, and proposed the T-shaped rail cross-section that is still familiar today, saving on weight and cost. Finally, he found that he could produce rail in up to eighteen-foot-long sections, six times the standard for cast-iron rails, reducing the number of          joints that tended to jostle the machinery and the load.[14] Rail cross-sections from Birkinshaw’s patent. Note the curved top surface and the now-common T-shape of the left- and right-most designs. The basic design of railways for the steam age was now in place, in a form that would not change much until the Bessemer process made steel rails practical decades later. Stephenson recognized the superiority of Birkinshaw’s rails to such an extent that he jilted his own erstwhile partner, Losh, and chose wrought-iron rails for the first new railway for which he served as chief engineer, the Stockton and Darlington. This railway, opened in 1825, represented the emergence of the steam locomotive from colliery experiments and curiosities into the field of general public economic interest. Economics: The Virtue of Speed You’ll recall that the motivation for the various experiments with steam locomotives in the 1810s was to save money on horses – the steam engine was seen as a potentially cheaper source of traction within the framework of the existing system of colliery railways. However, there was a grander vision for rail transport that had been percolating in the background since as early as 1800, when William Thomas, a colliery engineer, proposed to the Newcastle Literary and Philosophical Society that the horse-drawn railway could serve as a general replacement for road transport, carrying goods and passengers between cities. A fellow visionary proposed that costs could be further reduced with supplementary steam engines along the way to pull the carriages along with chains. James Anderson, , a member of various philosophical and agricultural societies, wrote with enthusiasm of this proposal: “Around every market you may suppose a number of concentric circles drawn, within each of which certain articles are marketable, which were not so before, and thus become the source of wealth and prosperity to many individuals. Diminish the expence of carriage but one farthing, and you widen the circle ; you form, as it were, a new creation, not only of stones, and earth, and trees, and plants, but of men also, and, what is more, of industry, of happiness, and joy.”[15] An expression became commonplace that the railway would “annihilate space and time.” It seems to have originated in a couplet from the 1720s as a hyperbolic declaration of the despair of parted lovers: “Ye gods! annihilate but space and time, And make two lovers happy.”[16] But railroad visionaries would deploy it again and again in the decades to come in an economic and technological sense. 
William James, a lawyer and land agent born in 1771, was not the first railroad visionary, but he was the first to match such dreams with realistic means for achieving them. He became involved with railroads in 1801, when he helped fund the first one opened to public custom, the Surrey Iron Railway. In 1821, after surveying the various locomotive builders, he was most impressed with Stephenson, and penned a deal to promote his locomotives and railways. James connected Stephenson to the partners of the Stockton and Darlington Railway, a group of colliers who needed a link to the River Tees for their coal. With Stephenson as their chief engineer, they built the first public steam railway, twenty-five miles of rail open to anyone willing to pay to transport their cargo (or passengers). It was through speed that the locomotive would prove its worth as a form of general communication, not a mere adjunct to colliers and canals, and it was at Stockton and Darlington that the locomotive first proved it could be significantly faster than a team of horses: when the railway first opened on September 27, 1825, the Stephenson locomotive pulled its hundred-ton load on the downhill run at a brisk pace of ten-to-twelve miles-per-hour. Horsemen attempting to follow the locomotive were unable to keep pace as they attempted to follow it through the wall- and hedge-strewn terrain alongside the railroad.[17] This speed was anticipated by an anonymous 1824 Mechanics Magazine article on the economic advantages railways. The author pointed out that a horse pulled at its maximum power only at low speeds (say, two-and-a-half miles-per-hour). At higher speeds more and more of its power went to moving its own body, until at twelve miles-per-hour it could pull no load at all. Moreover, speed served even more of a handicap for the horse on a canal, because the friction of the water on the barge rose with the square of the speed. Neither disadvantage applied to a steam locomotive on rails, which could pull at ever higher speeds while losing relatively little power to air resistance. At two-and-a-half miles per hour, a given force would pull almost four times the weight in a canal barge than it would on rails, but at thirteen-and-a-half miles-per-hour the advantage was more than reversed: the rail’s power was undiminished but the canal load was reduced by a factor of almost thirty.[18] This doctrine of speed was a new idea in the world of transportation. For millennia, bulk transport on land had depended on animals and barges plodding along at a couple of miles per hour. Economizing on transportation costs meant assuming low speeds as a given, and focusing on lowering the cost of pulling a single load, just as the locomotive builders of the 1810s had tried to do. But with higher speeds, more loads could be pulled with the same capital investment in a given time period. What’s more, entirely new markets could be opened up: delivery of fresh produce to urban markets, and rapid inter-urban passenger service. The Mechanics Magazine article made an immediate impression and the doctrine of speed quickly became the dogma of the rail promoters. Speed would make the echoing refrain of “the annihilation of space and time” a reality. Settling the Question But the promoters of the steam locomotive had not yet settled the question of what the future of land transportation would look like. 
The creators of the Stockton and Darlington line hedged their bets, including two stationary engines for pulling trains up steep sections and using horses for much of the cargo.[19] Skeptics and critics of the steam locomotive could still readily be found. Much of the landed gentry worried about the effect of screeching locomotives on their livestock and their land values. Canal and turnpike operators, of course, feared the competition.  Other critics worried that locomotives would exhaust the country’s coal reserves, while still others questioned the safety of operating a vehicle at such high speeds.[20] One commentator on a proposed railroad at Woolwich wrote that …we should as soon expect the people of Woolwich to suffer themselves to be fired off upon one of Congreve’s ricochet rockets, as trust themselves to the mercy of such a machine, going at such a rate… if ponderous bodies, moving with a velocity of ten or twelve miles an hour, were to impinge on any sudden obstruction, or a wheel break, they would be shattered like glass bottles dashed on a pavement ; then what would become of the Woolwich rail-road passengers, in such a case, whirling along at sixteen or eighteen miles an hour…? We trust, however, that Parliament will, in all the rail-roads it may sanction, limit the speed to eight or nine miles an hour, which… is as great as can be ventured upon with safety.[21] Stephenson’s next project, the Liverpool and Manchester Railway, had to fight past these critics for Parliamentary approval. It was a landmark railway in two respects: first, by building an inter-urban link, its shareholders were committing to the railroad as a general form of transportation; this was not only or even primarily a means to bring coal to market. Second, those same shareholders committed wholeheartedly to steam traction; the traditional option of the horse was right out. Steam would pull their trains, the question was how: stationary engines or locomotives, and if a locomotive, of what design? To decide, they held a competition with a five-hundred-pound prize for the best engine, known as the Rainhill trials. One of the directors of the railway entered the Cycyloped, a carriage driven by a treadmill that was driven in turn by a horse walking atop it. More plausible entries included Sans Pareil, a locomotive design by former Wylam locomotive mechanic Timothy Hackworth, and Novelty, built by two London engineers.[22] The winning entry, however, came from George’s son, Robert. After returning from his mining ventures in the New World in 1827, he had apprenticed in locomotive construction under his father. But he built his own masterwork, Rocket, for the Liverpool and Manchester. Its great design advance lay in its multi-tubular boiler: rather than a single return flue pipe, it had twenty-five separate copper tubes to carry the hot gases from the firebox through the boiler. This greatly increased the surface area to transfer to the boiler. The narrower tubes also eliminated a serious problem with the steam blast: its tendency to suck burning embers straight out of the firebox along with the exhaust, wasting fuel. The new boiler design made the Rocket the most powerful locomotive built to date, capable of speeds of thirty miles-per-hour, on a par with the highest speeds humans had ever experienced (on the back of a galloping horse). 
A London reporter who witnessed the unladen Rocket whizzing by wrote that “[s]o astonishing was the celerity with which the engine, with its apparatus, darted past the spectators, that it could be compared to nothing but the rapidity with which the swallow darts through the air. Their astonishment was complete, every one exclaiming involuntarily, ‘The power of steam is unlimited!’”[23] Stephenson’s Rocket [National Railway Museum, UK / CCA 4.0]. Despite Rocket’s success, the centrality of the Stephensons to the history of the locomotive was more contingent than necessary, resulting from George’s central place in the development of two of the most important early lines (the Stockton and Darlington and Liverpool and Manchester). Ever since the burst of new designs in the 1810s, stimulated by the high price of horse feed, Britain had sustained multiple lines of locomotive development, and the basic skills required were familiar to anyone with experience in boiler and steam engine design. Hackworth’s Sans Pareil was almost as good as Rocket and also saw service on the Liverpool and Manchester line. In 1831, the Liverpool and Manchester carried 445,000 passengers and 54,000 tons of cargo. The turnpike roads and canals along the line suffered a sharp decline in revenue and had to lower their charges. The former stagecoach lines between the cities became instantly defunct. The steam railway had proved its economic worth, and by 1837 Britain could boast eighty railway companies and a thousand miles of track.[24] A train on the Liverpool and Manchester railway, crossing the peat bog of the Chat Moss. Still, the question was not altogether settled. For another fifteen years or so, entrepreneurs put forward a variety of alternative means of transport: several tried to revive the idea of steam road carriages, others promoted atmospheric railways that would operate by creating a vacuum on one side of the carriage. Canal owners were especially assiduous in searching for some other way forward that would not obviate their investments: barges pulled by locomotives on the tow path, barges pulled by paddle or screw steamboats, a tug that pulled itself along rails attached to either side of the canal. None of these could match the speed of the railway locomotive, and all struggled with the problem of locks.[25] By the early 1850s, railways carried more cargo in Britain than the canal system. Steam railways had spread across the United States and much of continental Europe, though European rails tended to follow a state-led development model, in contrast to the helter-skelter private buildout in the Anglo-American sphere. Despite talk among railway visionaries of unifying city and countryside, the railway tended to strengthen the cultural and economic centrality of the urban centers. Traffic between cities increased rapidly: that between Liverpool and Manchester quadrupled. Horse travel did not disappear, but was repurposed: local coaches and omnibuses multiplied to serve the flood of urban visitors. The products of the country became more readily available to the city than ever before: cows arrived in cattle cars on the hoof, to be butchered on site for urban middle- and upper-class customers; fresh milk, once a dubious prospect within a place like Paris, now arrived daily by railcar. 
Long-distance journeys across the whole of Britain became possible within a single day: in 1763 the stagecoach from London to Edinburgh took two weeks; by 1835 the roads and coaches had improved enough to do it in forty-eight hours; but in 1849 a rail passenger could make the journey in just twelve hours. [26] Neither canals nor turnpikes, important as they were to the development of Europe’s economy, had transformed everyday life to the same degree as the steam locomotive. The revolution was closed. Rails had won.

Read more
Steamships, Part 2: The Further Adventures of Isambard Kingdom Brunel

Iron Empire As far back as 1832, Macgregor Laird had taken the iron ship Alburkah to Africa and up the Niger, making it among the first ship of such construction to take the open sea. But the use of iron hulls in British inland navigation can be traced decades earlier, beginning with river barges in the 1780s. An iron plate had far more tensile strength than even an oaken board of the same thickness. This made an iron-hulled ship stronger, lighter, and more spacious inside than an equivalent wooden vessel: a two-inch thickness of iron might replace two-foot’s thickness of timber.[1]  The downsides included susceptibility to corrosion and barnacles, interference with compasses, and, at least at first, the expense of the material. As we have already seen, the larger the ship, the smaller the proportion of its cargo space that it would need for fuel; but the Great Western and British Queen pushed the limits of the practical size of a wooden ship (in fact, Brunel had bound Great Western’s hull with iron straps to bolster its longitudinal strength and prevent it from breaking in heavy seas).[2] The price of wood in Britain grew ever more dear as her ancient forests disappeared, but to build more massive ships economically also required iron prices to fall: and they did just that, starting in the 1830s, because of a surprisingly simple change in technique. Ironmongers had noticed long ago that their furnaces produce more metal from the same amount of fuel in the winter months. They assumed that the cooler air produced this result, and so by the nineteenth century it had become a basic tenet of the iron-making business that one should blast cool air into the furnace with the bellows to maximize its efficiency.[3] This common wisdom was mistaken; entirely backwards, in fact. In 1825, a Glasgow colliery engineer named James Neilson found that a hotter blast made the furnaces more efficient (it was the dryness, not the coolness, of the winter air that had made the difference). Neilson was asked to consult at an ironworks in the village of Muirkirk which was having difficulty with its furnace. He realized that heating the blast air would expand it, and thus increase the pressure of the air flowing into the furnace, strengthening the blast. In 1828 he patented the method of using a stove to heat the blast air. He convinced the Clyde Ironworks to adopt it, and together they perfected the method over the following few years. The results were astounding. A 600° F blast reduced coal consumption of the furnace by two-thirds and increased output from about five-and-a-half tons of pig iron per day to over eight.[4] On top of all that, this simple innovation allowed the use of plain coal as fuel in lieu of (more expensive) refined coke. Ironmakers had adopted coke in the 1750s because when iron was smelted with raw coal the impurities (especially sulfur) in the fuel made the resulting metal too brittle. But the hot blast sent the temperature inside the furnace so high that it drove the sulfur out in the slag waste rather than baking it into the iron. During the 1830s and 40s, Neilson’s hot blast technique spread from Scotland across all of Great Britain, and drove a rapid increase in iron production, from 0.7 million tons in 1830 to over two million in 1850. 
This cut the market price per ton of pig iron in half.[5] With its vast reserves of coal and iron, made accessible with the power of steam pumps (themselves made in Britain of British iron and fueled by British coal), Britain was perfectly placed to supply the demand induced by this decline in price. Much of the growth in iron output went to exports, strengthening the commercial sinews of the British empire while providing the raw material of industrialization to the rest of the world. The frenzies of railroad building in the United States and continental Europe in the middle of the nineteenth century relied heavily on British rails made from British iron: in 1849, for example, the Baltimore and Ohio railroad secured 22,000 tons of rails from a Welsh trading concern.[6] The hunger of the rapidly growing United States for iron proved insatiable; circa 1850 the young nation imported about 450,000 tons of British iron per year.[7] Good Engineering Makes Bad Business The virtues of iron were also soon on the brain of Isambard Kingdom Brunel. The Great Western Steam Ship Company’s plan for a successor to Great Western began sensibly enough; they would build a slightly improved sister ship of similar design. But Brunel and his partners were seduced, in the fall of 1838, by the appearance in Bristol harbor of an all-iron channel steamer called Rainbow, the largest such ship yet built. Brunel’s associates Claxton and Patterson took a reconnaissance voyage on her to Antwerp and upon their return all three men became convinced that they should build in iron.[8] As if that were not enough novelty to take on in one design, in May 1840 another innovative ship steamed into Bristol harbor, leaving Brunel and his associates swooning one more. The aptly named Archimedes, designed by Francis Petit Smith, swam through the water with unprecedented smoothness and efficiency, powered by a screw propeller rather than paddle wheels.[9] Any well-educated nineteenth-century engineer knew that paddles wasted a huge amount of energy pushing water down at the front of the wheel and lifting it up at the back. Nor was screw propulsion a surprising new idea in 1840. As we have seen, early steamboat inventors tried out just about every imaginable means of pushing or pulling a ship. In his very thorough Treatise on the Screw Propeller, the engineer John Bourne cites fifty some-odd proposals, patents, or practical attempts at screw propulsion prior toSmith’s.[10] After so many failures, most practical engineers assumed (reasonably enough) that the screw could never replace the proven (albeit wasteful) paddlewheel. The difficulties were numerous, including reducing vibration, transmitting power effectively to the screw, and choosing its shape, size, and angle among many potential alternatives. Most fundamental though, was producing sufficient thrust: early steam engines operated at modest speed, cycling every three seconds or so. At twenty revolutions per minute, a screw would have to be of an impractical diameter to actually push a ship forward rapidly. Smith overcame this last problem with a gearing system to allow the propeller shaft to turn 140 times per minute. His propeller design at first consisted of a true helical screw, of two turns (which created excessive friction), then later a single turn. 
Then, in 1840 he refitted Archimedes with a more recognizably modern propeller with two blades (each of half a turn).[11] Even with these design improvements, Brunel found that noise and vibration made the Archimedes of 1840 “uninhabitable” for passengers.[12]  But he had unshakeable faith in its potential. No doubt, advocates of the screw could tout many potential advantages over the paddlewheel: a lower center of gravity, a more spacious interior, more maneuverability in narrow channels, and more efficient use of fuel  (especially in headwinds, which caught the paddles full on, and rolling sidelong waves, which would lift one paddlewheel or the other out of the water).[13]  So, the weary investors of the Great Western Steam Ship Company saw the timetable of the  Great Britain’s construction set back once more, in order to incorporate a screw. As steamship historian Stephen Fox put it, “[i]n commercial terms, what the Great Western company needed in that fall of 1840 was a second ship, as soon as possible, to compete with the newly established Cunard line,” but that is not what they would get.[14] The completed ship finally launched in 1843, but did not take to sea for a transatlantic voyage until July 1845, having already cost the company some £200,000 pounds in total. With 322 feet of black iron hull driven by a 1000 horsepower Maudslay engine and a massive 36-ton propeller shaft, she dwarfed Great Western. Her all-iron construction gave an impression of gossamer lightness that fascinated a public used to burly wood.[15] The Launching of the Great Britain. But if her appearance impressed, her performance at sea did not. Her propeller fell apart, her engine failed to achieve the expected speed and she rolled badly in a swell. After major, expensive renovations in the winter of 1845, she ran aground at the end of the 1846 sailing season at Dundrum Bay off Ireland. Her iron hull proved sturdier than the organization that had constructed it: by the time she was at last floated free in August 1847, the Great Western Steam Company had already sunk. Another concern bought Great Britain for £25,000, and she ended up plying the route to Australia, operating mostly by sail.[16] In the long run, Brunel and his partners were right that iron hulls and screw propulsion would surpass wood and paddles, but Great Britain failed to prove it. The upstart Inman steamer line launched the iron-hulled, screw-powered City of Glasgow in 1850, which did prove that the ideas behind Great Britain could be turned to commercial success. But the more conservative Cunard line did not dispatch its first iron-hulled ship on its maiden voyage until 1856. Though even larger than Great Britain, at 376 feet and 3600 tons, the Persia still sported paddlewheels. This did not prevent her from booking more passengers than any other steamship to date, nor from setting a transatlantic speed record.[17] Not until the end of the 1860s did oceanic paddle steamers become obsolete. The Archimedes. Without any visible wheels, she looked deceptively like a typical sailing schooner, but for the telltale smokestack. A Glorious Folly For a time, Brunel walked away from shipbuilding. Then, late in 1851, he began crafting plans for a new liner to far surpass even Great Britain, one large enough to ply the routes to Indian and Australia without coaling stops on the African coast. 
Stopping to refuel wasted time but also quite a lot of money: coal in Africa cost far more than in Europe, because another ship had to bring it there in the first place.[18]    Because it would sail around Africa, not towards America, the new ship was christened Great Eastern. Monstrous in all its dimensions, the Great Eastern, can only be regarded as a monster in truth, in the archaic sense of “a prodigy birthed outside the natural order of things”; it was without precedent and without issue.[19] Given the total failure of Brunel’s last steam liner company, not to mention other examples of excessive exuberance in his past, such as an atmospheric railway project that shut down within a year, it is hard to conceive of how he was able to convince new backers to finance this wild new idea. He did have the help of one new ally, an ambitious Scottish shipbuilder named John Russell, who was also wracked by career disappointment and eager for a comeback. Together they built an astonishing vessel: at 690 feet long and over 22,000 tons, it exceeded in size every other ship built to its time, and also every other ship built in the balance of the nineteenth century. It would carry (in theory) 4,000 passengers and 18,000 tons of coal or cargo, and mount both paddlewheels and a propeller, the latter powered by the largest steam engine ever built, of 1600 horsepower. Brunel died of a stroke in 1859, and never saw the ship take to sea. That is just as well, for it failed even more brutally than the Great Britain. It was slow, rolled badly, maneuvered poorly, and demanded prodigious quantities of labor and fuel.[20] Like Great Britain, after a brief service its owners auctioned it off to new buyers at a crushing loss. Great Eastern did, however, have still in its future a key role to play in the extension of British imperial and commercial power, as we shall see. The Great Eastern in harbor in Wales in 1860. Note the ‘normal-size’ three-masted ship in the foreground for scale. I have lingered on Brunel’s career for so long not because he was of unparalleled import to the history of the age of steam (he was not), but because his character and his ambition fascinate me. He innovated boldly, but rarely as effectively as his more circumspect peers, such as Samuel Cunard. Much—though certainly not all—of his career consists of glorious failure. Whether you, dear reader, emphasize the glory or the failure, may depend on the width of the romantic streak that runs through your soul.

Read more
The Computer as a Communication Device

Over the first half of the 1970s, the ecology of computer networking diversified from its original ARPANET ancestry along several dimensions. ARPANET users discovered a new application, electronic mail, which became the dominant activity on the network. Entrepreneurs spun-off their own ARPANET variants to serve commercial customers. And researchers from Hawaii to l’Hexagone developed new types of network to serve needs or rectify problems not addressed by ARPANET. Almost everyone involved in this process abandoned the ARPANET’s original stated goal of allowing computing hardware and software to be shared among a diverse range of research sites, each with its own specialized resources. Computer networks became primarily a means for people to connect to one another, or to remote systems that acted as sources or sinks for human-readable information, i.e. information databases and printers. This was a possibility foreseen by Licklider and Robert Taylor, though not what they had intended when they launched their first network experiments. Their 1968 article,”The Computer as a Communication Device” lacks the verve and timeless quality of visionary landmarks in the history of computing such as Vannevar Bush’s “As We May Think” or Turing’s “Computing Machinery and Intelligence.” Nonetheless, it provides a rather prescient glimpse of a social fabric woven together by computer systems. Licklider and Taylor described a not-to-distant future in which1 You will not send a letter or a telegram; you will simply identify the people whose files should be linked to yours and the parts to which they should be linked-and perhaps specify a coefficient of urgency. You will seldom make a telephone call; you will ask the network to link your consoles together. …Available within the network will be functions and services to which you subscribe on a regular basis and others that you call for when you need them. In the former group will be investment guidance, tax counseling, selective dissemination of information in your field of specialization, announcement of cultural, sport, and entertainment events that fit your interests, etc. The first and most important component of this computer-mediated future – electronic mail – spread like a virus across ARPANET in the 1970s, on its way to taking over the world. Email To understand how electronic mail developed on ARPANET, you need to first understand an important change that overtook the network’s computer systems in the early 1970s.  When ARPANET was first conceived in the mid-1960s, there was almost no commonality among the hardware and operating software running at each ARPA site. Many sites centered on custom, one-off research systems, such Multics at MIT, the TX-2 at Lincoln Labs, and the ILLIAC IV, under construction at the University of Illinois. By 1973, on the other hand, the landscape of computer systems connected to the network had acquired a great deal of uniformity, thanks to the wild success of Digital Equipment Corporation (DEC) in penetrating the academic computing market.2 DEC designed the PDP-10, released in 1968, to provide a rock-solid time-sharing experience for a small organization, with an array of tools and programming languages built-in to aid in customization. This was exactly what academic computing centers and research labs were looking for at the time. Look at all the PDPs! 
BBN, the company responsible for overseeing the ARPANET, then made the package even more attractive by creating the Tenex operating system, which added paged virtual memory to the PDP-10. This greatly simplified the management and use of the system, by making it less important to exactly match the set of running programs to the available memory space. BBN supplied the Tenex software free-of-charge to other ARPA sites, and it soon became the dominant operating system on the network. But what does all of this have to do with email? Electronic messaging was already familiar to users of time-sharing systems, most of which offered some kind of mailbox program by the late 1960s. They provided a form of digital inter-office mail; their reach extended only to other users of the same computer system. The first person to take advantage of the network to transfer mail from one machine to another was Ray Tomlinson, a BBN engineer and one of the authors of the Tenex software. He had already written a SNDMSG program for sending mail to other users on a single Tenex system, and a CPYNET program for sending files across the network. It required only a leap of imagination for him to see that he could combine the two to create a networked mail program. Previous mail programs had only required a user name to indicate the recipient, so Tomlinson came up with the idea of combining that local user name and the (local or remote) host name with an @ symbol3, to create an email address that was unique across the entire network. Ray Tomlinson in later years, with his signature “at” sign Tomlinson began testing his new program locally in 1971, and in 1972 his networked version of SNDMSG was bundled into the Tenex release, allowing Tenex mail to break the bonds of a single site and spread across the network. The plurality of machines running Tenex made Tomlinson’s hybrid program available instantly to a large proportion of ARPANET users, and it became an immediate success. It did not take long for ARPA’s leaders to integrate email into the core of their working life. Stephen Lukasik, director of ARPA, was an early adopter, as was Larry Roberts, still head of the agency’s computer science office. The habit inevitably spread to their subordinates, and soon email became a basic fact of life of the culture of ARPANET. Tomlinson’s mail software spawned a variety of imitations and elaborations from other users looking to improve on its rudimentary functionality. Most of the early innovation focused on the defects of the mail reading program. As email spread beyond a single computer, the volume of mail received by heavy users scaled with the size of the network, and the traditional approach of treating the mailbox as a raw text file was no longer effective. Larry Roberts himself, unable to deal effectively with the deluge of incoming messages, wrote his own software to manage his inbox called RD. By the mid-1970s, however, the most popular program by far was MSG, written by John Vittal of USC. We take for granted the ability to press a single button to fill out the title and recipient of outgoing message based on an incoming one. But it was Vittal’s MSG that first provided this killer “answer” feature in 1975; and it, too, was a Tenex program. The diversity of efforts led to a need for standards. This marked the first, but far from the last, time that the computer networking community would have to develop ex post facto standards. 
Unlike the basic protocols for ARPANET, a variety of email practices already existed in the wild prior to any standard setting. The inevitable result was controversy and political struggle, centering around the main email standard documents, RFC 680 and 720. In particular, non-Tenex users expressed a certain prickly resentment about the Tenex-centric assumptions built into the proposals. The conflict never grew terribly hot – everyone on ARPANET in the 1970s was still part of the same, relatively small, academic community and the differences to be reconciled were not large. But it provided a taste of larger struggles to come. The sudden success of email represented the most important development of the 1970s in the application layer of the network, the level most abstracted from the physical details of the network’s layout. At the same time, however, others had set out to redefine the foundational “link” layer, where bits flowed from machine to machine. ALOHA In 1968, Norman Abramson arrived at the University of Hawaii from California to serve a combined appointment as electrical engineering and computer science professor. The University he joined consisted of a main campus on Oahu as well as a secondary Hilo campus, and several other community colleges and research sites spread across Oahu, Kauai, Maui, and Hawaii. In between lay hundreds of miles of water and mountainous terrain. A brawny IBM 360/65 powered computer operations at the main campus, but ordering up an AT&T dedicated line to link a terminal to it from one of the community colleges was not so simple a matter as on the mainland. Abramson was an expert in radar systems and information theory who had done a stint as an engineer for Hughes Aircraft in Los Angeles. This new environment, with all the physical challenges it presented to wireline communications, seems to have inspired Abramson to a new idea – what if radio were actually a better way of connecting computers than the phone system, which after all was designed with the needs of voice, not data, in mind? Abramson secured funding from Bob Taylor at ARPA to test this idea, with a system he called ALOHAnet. In its initial incarnation, it was not a computer network at all, but rather a medium for connecting remote terminals to a single time-sharing system, designed for the IBM machine at the Oahu campus. Like ARPANET, it had a dedicated minicomputer for processing packets sent and received by the 360/65 – Menehune, the Hawaiian equivalent of the IMP. ALOHAnet, however, did away with all the intermediate point-to-point routing used by ARPANET to get packets from one place to another. Instead any terminal wishing to send a message simply broadcast it into the ether on the allotted transmission frequency. ALOHAnet in its full state of development later in the 1970s, with multiple computers The traditional way for a radio engineer to handle a shared transmission band like this would have been to carve it up into time- or frequency-based slots, and assign each terminal to its own slot. But to handle hundreds of terminals in such a scheme would mean limiting each to a small fraction of the available bandwidth, even though only a few might be in active use at any given moment. Instead, Abramson decided to do nothing to prevent more than one terminal from sending at the same time. If two or more messages overlapped they would become garbled, but the central computer would detect this via error-correcting codes, and would not acknowledge those packets.
Failing to receive their acknowledgement, the sender(s) would try again after some random interval. Abramson calculated that this simple protocol could sustain up to a few hundred simultaneously active terminals, whose numerous collisions would still leave about 15% of the usable bandwidth. Beyond that, though, his calculations showed that the whole thing would collapse into a chaos of noise. The Office Of The Future Abramson’s “packet broadcasting” concept did not make a huge splash, at first. But it found new life a few years later, back on the mainland. The context was Xerox’s new Palo Alto Research Center (PARC), opened in 1970 just across from Stanford University, in a region recently dubbed “Silicon Valley.” Some of Xerox’s core xerography patents stood on the verge of expiration, and the company risked being trapped by its own success, unable or unwilling to adapt to the rise of computing and integrated circuits. Jack Goldman, head of research for Xerox, had convinced the bigwigs back East that a new lab – distanced from the influence of HQ, nestled in an attractive climate, and with premium salaries on offer – would attract the talent needed to keep Xerox’s edge, by designing the information architecture of the future. PARC certainly succeeded in attracting top computer science talent, due not only to the environment and the generous pay, but also the presence of Robert Taylor, who had set the ARPANET into motion as head of ARPA’s Information Processing Techniques Office in 1966. Robert Metcalfe, a prickly and ambitious young engineer and computer scientist from Brooklyn, was one of many wooed to PARC via an ARPA connection. He joined the lab in June 1972 after working part-time for ARPA as a Harvard graduate student, building the interface to connect MIT to the network. Even after joining PARC, he continued to work as an ARPANET ‘facilitator’, traveling around the country to help new sites get started on the network, and on the preparations for ARPA’s coming out party at the 1972 International Conference on Computer Communications. Among the projects percolating at PARC when Metcalfe arrived was a plan by Taylor to link dozens, or even hundreds, of small computers via a local network. Year after year, computers continued to decrease in price and size, as if bending to the indomitable will of Gordon Moore. The forward-looking engineers at PARC foresaw a not-far-distant future when every office worker would have his own computer. To that end, they designed and built a personal computer called the Alto, a copy of which would be supplied to every researcher in the lab. Taylor, who had only become more convinced of the value of networking over the previous half-decade, also wanted these computers to be interconnected. The Alto. The computer per se was housed in the cabinet at bottom, about the size of a mini-fridge. On arriving at PARC, Metcalfe took over the task of connecting up the lab’s PDP-10 clone to ARPANET, and quickly acquired a reputation as the “networking guy”. Therefore when Taylor asked for an Alto network, his peers turned to Metcalfe. Much like the computers on ARPANET, the Altos at PARC didn’t have much to say to one another. The compelling application for the network, once again, was in enabling human communication – in this case in the form of words and images printed by laser. The core idea behind the laser printer did not originate at PARC, but back East, at the original Xerox research lab in Webster, New York.
There a physicist named Gary Starkweather proved that the coherent beam of a laser could be used to deactivate the electrical charge of a xerographic drum, just like the diffuse light used in photocopying up to that point. Properly modulated, the beam could paint an image of arbitrary detail onto the drum, and thus onto paper (since only the uncharged areas of the drum picked up toner). Controlled by a computer, such a machine could produce any combination of images and text that a person might conceive, rather than merely reproducing existing documents like the photocopier. Starkweather received no support for these wild ideas from his colleagues or management in Webster, however, so he got himself transferred to PARC in 1971, where he found a far more receptive audience. The laser printer’s ability to render arbitrary images dot-by-dot provided the perfect mate for the Alto workstation, with its bit-mapped monochrome graphics. With a laser printer, the half-million pixels on a user’s display could be directly rendered onto paper with perfect fidelity. The bit-mapped graphics experience on the Alto. Nothing like this had been seen on a computer display before. Within about a year Starkweather, with the help of several other PARC engineers, had overcome the main technical challenges and built a working prototype of a laser printer, based on the chassis of the workhorse Xerox 7000 printer. It produced pages at the same rate – one per second – at 500 dots per linear inch. A character generator attached to the printer crafted text from pre-defined fonts. Free-form imagery (other than what could be generated with custom fonts) was not yet supported, so the network did not need to carry the full 25 million bits-per-second or so required to feed the laser; nonetheless, a tremendous amount of bandwidth would be needed to keep the printer busy at a time when the 50,000 bits-per-second ARPANET represented the state-of-the-art. PARC’s second generation “Dover” laser printer, from 1976 The Alto Aloha Network How would Metcalfe bridge this huge gap in speed? Finally, we come back to ALOHAnet, for it turns out that Metcalfe knew packet broadcasting better than anyone. The previous summer, while staying in Washington with Steve Crocker on ARPA business, Metcalfe had pulled down a volume of the proceedings of the Fall Joint Computer Conference, and came across Abramson’s ALOHAnet paper. He immediately realized that the basic idea was brilliant, but the implementation under-baked. With a few tweaks in the algorithm and assumptions – notably having senders listen for a clear channel before trying to broadcast, and exponentially increasing the re-transmission interval in response to congestion – he could achieve a bandwidth utilization of 90%, rather than the 15% calculated by Abramson. Metcalfe took a short leave from PARC to visit Hawaii, where he integrated his ideas about ALOHAnet into a revised version of his PhD thesis, after Harvard had rejected the original due to a lack of theoretical grounding. Metcalfe originally called his plan to bring packet broadcasting to PARC the “ALTO ALOHA network”. Then, in a memo in May 1973, he rechristened it as Ether Net, invoking the luminiferous ether which nineteenth-century physicists had supposed to carry all electromagnetic radiation.
“This will keep things general,” he wrote, “and who knows what other media will prove better than cable for a broadcast network; maybe radio or telephone circuits, or power wiring or frequency-multi-plexed CATV, or microwave environments, or even combinations thereof.” A sketch from Metcalfe’s 1973 Ether Net memo. Starting in June 1973, Metcalfe worked with another PARC engineer, David Boggs, to turn his theoretical concepts for a new high-speed network into a working system. Rather than sending signals over the air like ALOHA, they would bind the radio spectrum within the confines of a coaxial cable, which offered far more bandwidth than the limited radio band allocated to the Menehune. The transmission medium itself was entirely passive, requiring no switching equipment at all for routing messages. It was cheap and easy to connect hundreds of workstations to it – PARC engineers just ran coax cable through the building and added taps as needed – and it could handle three million bits per second. Robert Metcalfe and David Boggs in the 1980s, several years after Metcalfe founded 3Com to sell Ethernet technology By the fall of 1974, the complete prototype of the office of the future was up and running in Palo Alto, California – the initial batch of thirty Altos with drawing, email, and word processing software, Starkweather’s prototype printer, and Ethernet to connect it all together. A central file server for storing data too large for the Alto’s local disk provided the only other shared resource. PARC originally offered the Ethernet controller as an optional accessory on the Alto, but once the system went live it became clear that it was essential, as the coax coursed with a steady flow of messages, many of them emerging from the printer as technical reports, memos, or academic papers. Simultaneously with the development of the Alto, another PARC project attempted to carry the resource-sharing vision forward in a new direction. The PARC On Line Office System (POLOS), designed and implemented by Bill English and other refugees from Doug Engelbart’s oN-Line System (NLS) project at Stanford Research Institute, consisted of a network of Data General Nova minicomputers. Rather than dedicating each machine to a particular user’s needs, however, POLOS would shuttle work around among them, in order to serve the needs of the system as a whole as efficiently as possible. One machine might be rendering displays for several users, while another handled ARPANET traffic, and yet another ran word processing software. The complexity and coordination overhead of this approach proved unmanageable, and the scheme collapsed under its own weight. Meanwhile, nothing more clearly showed Taylor’s emphatic rejection of the resource-sharing approach to networking than his embrace of the Alto. Alan Kay, Butler Lampson, and the other minds behind the Alto had brought all the computational power a user might need onto an independent computer at their desk, intended to be shared with no one. The function of the network was not to provide access to a heterogeneous set of computer resources, but to carry messages among these islands, each entire of itself, or perhaps to deposit them on some distant shore – for printing or long-term storage.
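The retransmission discipline at the heart of Metcalfe’s refinement – listen before sending, and back off for a randomly chosen, exponentially growing interval after each collision – can be illustrated with a short sketch. This is a toy model in Python, not PARC’s controller logic; the channel probabilities and constants are invented for the example.

```python
import random

# Toy model of one sender on a shared broadcast medium: carrier sense plus
# binary exponential backoff. The channel is simulated with made-up odds.

MAX_ATTEMPTS = 16    # real Ethernet controllers also give up eventually

def send_frame(channel_busy, transmit, attempt_limit=MAX_ATTEMPTS):
    """channel_busy() -> True while another sender is talking;
    transmit() -> True if the frame went out without a collision."""
    for attempt in range(attempt_limit):
        while channel_busy():              # carrier sense: wait for quiet
            pass
        if transmit():                     # no collision detected
            return True
        # Collision: pick a random delay from a window that doubles with each
        # failure (capped), so a congested cable calms itself down.
        window = 2 ** min(attempt + 1, 10)
        slots = random.randrange(window)   # a real sender would idle this long
        print(f"collision on attempt {attempt + 1}, backing off {slots} slots")
    return False                           # too many collisions: give up

# Simulated channel: busy 30% of the time, collisions on 20% of transmissions.
ok = send_frame(channel_busy=lambda: random.random() < 0.3,
                transmit=lambda: random.random() > 0.2)
print("frame delivered" if ok else "frame dropped")
```

Pure ALOHA corresponds to the case in which the sender never listens first and the retry window never grows, which is one way to see why its usable capacity topped out so much lower.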
While both email and ALOHA developed under the umbrella of ARPA, the emergence of Ethernet was one of several signs in the first half of the 1970s that computer networking had become something too large and diverse for a single organization to dominate, a trend that we’ll continue to follow next time. Further Reading Michael Hiltzik, Dealers of Lightning (1999) James Pelkey, The History of Computer Communications, 1968-1988 (2007) [http://www.historyofcomputercommunications.info/] M. Mitchell Waldrop, The Dream Machine (2001)

The Backbone: Introduction

In the early 1970s, Larry Roberts approached AT&T, the vast American telecommunications monopoly, with an intriguing offer. At the time, Roberts was director of  the computing division of the Advanced Research Projects Agency (ARPA), a relatively young organization within the Department of Defense that was dedicated to long-term, blue-sky research. Over the previous five years, Roberts had overseen the creation of ARPANET, the first computer network of any significance, which now linked computers at about twenty-five sites across the country. The network was a success, but its long-term operation, and all of the bureaucratic work which that entailed, did not fall within ARPA’s mandate. Roberts was now looking to offload the task to someone else. And so, he contacted executives at AT&T to offer them the keys to the system.1 After mulling the matter over, AT&T ultimately rejected the offer. Its business and engineering leadership believed the fundamental technology upon which ARPANET operated was impractical and unstable, and had no place in a system designed for reliable, universal service. ARPANET, of course, was the seed around which the Internet crystallized, the prototype of a vast, world-circling information system, whose kaleidoscopic capabilities defy enumeration. How could AT&T have been so blind to this potential, so mired in the past, we are led to wonder? Bob Taylor, who had recruited Roberts to oversee the building of ARPANET back in 1966, later put it quite bluntly: “Working with AT&T would be like working with Cro-Magnon man,” he recalled.2 However, before we raise our hackles too sharply at the brutish ignorance of these anonymous corporate bureaucrats, let us step back a moment. The history of the Internet will be the subject of our story, so it would be good to first take a broad view of what it is we are talking about. Of all the technological systems constructed in the latter half of the twentieth century, the Internet is surely of the most profound significance to the society, culture, and economy of our contemporary world. Perhaps jet travel is its closest rival. Using the Internet, individuals can instantly share pictures, videos, and thoughts, welcome or unwelcome, with friends and family across the world. Young people a thousand miles apart now regularly fall in love, and even marry, within the confines of a virtual world. A never-ending shopping mall is instantly accessible at all hours of day and night from the comfort of millions of homes. Most of this is trite and familiar enough. But, as the author can attest, the Internet has also proved perhaps the greatest distracter, time-waster, and general source of mind-rot in human history, surpassing television – no mean feat.3 It has made it possible for fanatics, zealots and conspiracy theorists to spread their nonsense across the globe at the speed of light – some of it harmless, some much less so. It has enabled many organizations, public and private, to slowly amass, and, in some cases, quickly and embarrassingly lose, draconic hoards of data.4 It is, in short, a vast amplifier of human wisdom and folly, and the latter is dismayingly abundant. But what exactly is the object in question, the physical structure, the machinery that has made all this social and cultural change possible? What exactly is the Internet, anyway? If we could somehow decant its substance into a glass vessel, we would see it gradually settle into three strata. At bottom rests the global communications network. 
This substrate predates the Internet itself by roughly a century, built first in copper or iron wire, but since overlaid with coaxial cables, microwave relays, optical fibers, and cellular radio. The next layer up consists of computers communicating over that global telecommunications system using a set of shared languages, or protocols. Among the most fundamental are the Internet Protocol (IP), Transmission Control Protocol (TCP), and the Border Gateway Protocol (BGP). This is the core of the Internet per se, and its concrete expression is the network of specialized computers called routers, responsible for finding a path for a message to travel from its source computer to its destination. Finally, at the top, are the various applications which people and machines actually use to work and play over the Internet, many of them with their own specialized languages: web browsers, chat apps, video games, day trading applications, etc. To use the Internet, each application need only embed a message in a format which the routers can understand. That message could be a move in a game of chess, a tiny fragment of a movie, or a request to transfer money from one bank account to another – the routers don’t care and will treat them all just the same.5 Our story will weave these three threads together to tell the story of the Internet. First, the global telecommunications network. Last, the whole panoply of different software applications that allow users of computers to do fun or useful things over the network. Binding them together, the techniques and protocols for getting diverse computers to talk to one another. The creators of those techniques and protocols built on the achievements of the past (the network) and drew on a dimly imagined vision of the future towards which they were groping (the applications to come). In addition to these creators themselves, the state will be ever present as an actor in our story.  Most especially at the level of the telecommunications networks, which were all either themselves government-operated or subject to strict regulatory oversight. Which brings us back to AT&T. Distasteful as they may have found it, Taylor, Roberts and their ARPA colleagues were hopelessly entangled with the incumbent telecommunications operators, the foundational stratum of the future Internet. The functioning of their network depended entirely on such services. How then, to account for their hostility, for their belief that ARPANET represented a new world fundamentally at odds with the backwards-looking incumbents of telecommunications? In truth, philosophical, not temporal, distance separated the two groups. AT&T executives and engineers saw themselves as the custodians of a vast, complex machine that provided reliable, universal communications services from one person to another. All the equipment in between was the responsibility of the Bell System. The architects of the ARPANET, however, saw that system as a simple conduit for arbitrary bits of data, and believed its operators had no business meddling in questions of how that data was generated and used at each end of the wire. We must begin, then, with the story of how this philosophical impasse over the nature of American telecommunications was resolved, by the might of the United States government.
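As an aside, the layered picture sketched above can be made concrete with a schematic example: an application hands the network an opaque payload plus a destination, and a router chooses an outgoing link by consulting only the header. This is an illustration in Python, not real router code; the addresses and routing table are invented, and real forwarding uses proper longest-prefix matching.

```python
# Schematic illustration of payload-agnostic forwarding.

def encapsulate(src: str, dst: str, payload: bytes) -> dict:
    """An application wraps its message in a header; the bytes stay opaque."""
    return {"src": src, "dst": dst, "payload": payload}

ROUTING_TABLE = {              # hypothetical next-hop table for one router
    "10.0.1.0/24": "link-A",
    "10.0.2.0/24": "link-B",
}

def route(packet: dict) -> str:
    """Pick an outgoing link using the header alone; the payload is never read."""
    prefix = ".".join(packet["dst"].split(".")[:3]) + ".0/24"   # crude /24 match
    return ROUTING_TABLE.get(prefix, "default-link")

chess_move = encapsulate("10.0.1.5", "10.0.2.9", b"e2e4")
bank_order = encapsulate("10.0.1.5", "10.0.2.9", b"TRANSFER 100")
print(route(chess_move), route(bank_order))   # same treatment for both payloads
```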

The Era of Fragmentation, Part 3: The Statists

In the spring of 1981, after several smaller trials, the French telecommunications administration (Direction générale des Télécommunications, or DGT) began a large-scale videotex experiment in a region of Brittany called Ille-et-Vilaine, named after its two main rivers. This was the prelude to the full launch of the system across l’Hexagone in the following year. The DGT called their new system Télétel, but before long everyone was calling it Minitel, a synecdoche that derived from the name of the lovable little terminals that were distributed free of charge, by the hundreds of thousands, to French telephone subscribers. Among all the consumer-facing information service systems in this “era of fragmentation,” Minitel deserves our special attention, and thus its own chapter in this series, for three particular reasons. First, the motive for its creation. Other post, telephone, and telegraph authorities (PTTs) built videotex systems, but no other state invested as heavily in making it a success, nor gave so much strategic weight to that success. Entangled with hopes for a French economic and strategic renaissance, Minitel was meant not just to produce new telecom revenues or generate more network traffic, but to prime the pump for the entire French technology sector. Second, the extent of its reach. The DGT provided Minitel terminals to subscribers free of charge, and levied all charges at time of use rather than requiring an up-front subscription. This meant that, although many of them used the system infrequently, more people had access to Minitel than to even the largest American on-line services of the 1980s, despite France’s much smaller population. The comparison to its nearest direct equivalent, Britain’s Prestel, which never broke 100,000 subscribers, is even more stark. Finally, there is the architecture of its backend systems. Every other commercial purveyor of digital services was a monolith, with all services hosted on their own machines. While they may have collectively formed a competitive market, each of their systems was structured internally as a command economy. Minitel, despite being the product of a state monopoly, was ironically the only system of the 1980s that created a free market for information services. The DGT, acting as an information broker rather than information supplier, provided one possible model for exiting the era of fragmentation. Playing Catch Up It was not by happenstance that the Minitel experiments began in Brittany. In the decades after World War II, the French government had deliberately seeded the region, whose economy still relied heavily upon agriculture and fishing, with an electronics and telecommunications industry. This included two major telecom research labs: the Centre Commun d’Études de Télévision et Télécommunications (CCETT) in Rennes, the region’s capital, and a branch of the Centre National d’Études des Télécommunications (CNET) in Lannion, on the northern coast. The CCETT lab in Rennes Themselves a product of an effort to bring a lagging region into the modern era, by the late 1960s and early 1970s these research departments found themselves playing catch up with their peers in other countries. The French phone network of the late 1960s was an embarrassment for a country that, under de Gaulle, wished to see itself as a resurgent world power. It still relied heavily on switching infrastructure built in the first decades of the century, and only 75% of the network was automated by 1967.
The rest still depended on manual operators, which had been all but eliminated in the U.S. and the rest of Western Europe. There were only thirteen phones for every 100 inhabitants of France, compared to twenty-one in neighboring Britain, and nearly fifty in the countries with the most advanced telecommunications systems, Sweden and the U.S. France therefore began a massive investment program of rattrapage, or “catch up,” in the 1970s. Rattrapage ramped up steeply after the 1974 election of Valéry Giscard d’Estaing to the presidency of France, and his appointment of a new director for the DGT, Gérard Théry. Both were graduates of France’s top engineering school, l’École Polytechnique, and both believed in the power of technology to improve society. Théry set about making the DGT’s bureaucracy more flexible and responsive, and Giscard secured 100 billion francs in funding from Parliament for modernizing the telephone network, money that paid for the installation of millions more phones and the replacement of old hardware with computerized digital switches. Thus France dispelled its reputation as a sad laggard in telephony. But in the meantime new technologies had appeared in other nations that took telecommunications in new directions – videophone, fax, and the fusion of computer services with communication networks. The DGT wanted to ride the crest of this new wave, rather than having to play catch up again. In the early 1970s, Britain announced two separate teletext systems, which would deliver rotating screens of data to television sets in the blanking intervals of television broadcasts. CCETT, DGT’s joint venture with France’s television broadcaster, the Office de radiodiffusion-télévision française (ORTF), launched two projects in response. DIDON1 was modeled closely on the British television broadcasting model, but ANTIOPE2 took a more ambitious tack, to investigate the delivery of screens of text independently of the communications channel. Bernard Marti in 2007 Bernard Marti headed the ANTIOPE team in Rennes. He was yet another polytechnicien (class of 1963), and had joined CCETT from ORTF, where he specialized in computer animation and digital television. In 1977, Marti’s team merged the ANTIOPE display technology with ideas borrowed from CNET’s TIC-TAC3, a system for delivering interactive digital services over telephone. This fusion, dubbed TITAN4, was basically equivalent to the British Viewdata system that later evolved into Prestel. Like ANTIOPE it used a television to display screens of digital information, but it allowed users to interact with the computer rather than merely receiving data passively. Moreover, both the commands to the computer and the screen data it returned passed over a telephone line, not over the air. Unlike Viewdata, TITAN supported a full alphabetic keyboard, not just a telephone keypad. In order to demonstrate the system at a Berlin trade fair, the team used France’s Transpac packet-switching network to mediate between the terminals and the CCETT computer in Rennes. Théry’s lab had assembled an impressive tech demo, but as of yet none of it had left the lab, and it had no obvious path to public use. Télématique In the fall of 1977, DGT director Gérard Théry, satisfied with how the modernization of the phone network was progressing, turned his attention to the British challenge in videotex. To develop a strategic response, he first looked to CCETT and CNET, where he found TITAN and TIC-TAC prototypes ready to be put to use.
He turned these experimental raw materials over to his development office (the DAII) to be molded into products with a clear path to market and business strategy. The DAII recommended pursuing two projects: first, a videotex experiment to test out a variety of services in a town near Versailles, and second, investment in an electronic phone directory, intended to replace the paper phone book. Both would use Transpac as the networking backbone, and TITAN technology for the frontend, with color imagery, character-based graphics, and a full keyboard for input. An early experimental Télétel setup, before the idea of using the TV as the display was abandoned. The strategy the DAII devised for videotex differed from Britain’s in three important ways. First, whereas Prestel hosted all of the videotex content itself, the DGT planned to serve only as a switchboard from which users could reach any number of different privately-hosted service providers, running any type of computer that could connect to Transpac and serve valid ANTIOPE data. Second, they decided to abandon the television as the display unit and go with custom, all-in-one terminals. People bought TVs to watch TV, the DGT leadership reasoned, and would not want to tie up their screen with new services like the electronic phone book. Moreover, cutting the TV set out of the picture meant that the DGT would not have to negotiate over the launch with their counterparts at Télédiffusion de France (TDF), the successor to the ORTF5. Finally, and most audaciously, France cracked the chicken-and-egg problem (that a network without users was unattractive to service providers and vice versa) by planning to lend those all-in-one videotex terminals to subscribers free of charge. Despite these bold plans, however, videotex remained a second-tier priority for Théry. When it came to ensuring DGT’s place at the forefront of communications technology, his focus was on developing the fax into a nationwide consumer service. He believed that fax messaging could take over a huge portion of the market for written communication from the post office, whose bureaucrats the DGT looked upon as hidebound fuddy-duddies. Théry’s priorities changed within months, however, with the completion of a government report in early 1978 entitled The Computerization of Society. Released to bookstores in a paperback edition in May, it sold 13,500 copies in its first month, and a total of 125,000 copies over the following decade, quite a blockbuster for a government report.6 How did such a seemingly recondite topic engender such excitement? The authors, Simon Nora and Alain Minc, officers in the General Inspectorate of Finance, had been asked to write the report by the Giscard government in order to consider the threat and the opportunity presented by the growing economic and cultural significance of the computer. By the mid-1970s, it was becoming clear to most technically-minded intellectuals that computing power could and likely would be democratized, brought to the masses in the form of new computer-mediated services. Yet for decades, the United States had led the way in all forms of digital technology, and American firms held a seemingly unassailable grip on the market for computer hardware. The leaders of France considered the democratization of computers a huge opportunity for French society, yet they did not want to see France become a dependent satellite of a dominating foreign power.
Nora and Minc’s report presented a synthesis that resolved this tension, proposing a project that would catapult France into the post-modern age of information. The nation would go directly from trailing the pack in computing to leading it, by building the first national infrastructure for digital services – computing centers, databases, standardized networks – all of which would serve as the substrate for an open, democratic marketplace in digital services. This would, in turn, stimulate native French expertise and industrial capacity in computer hardware, software, and networking. Nora and Minc called this confluence of computers and communications télématique, a fusion of telecommunications and informatique (the French word for computing or computer science). “Until recently,” they wrote, computing… remained the privilege of the large and the powerful. It is mass computing that will come to the fore from now on, irrigating society, as electricity did. La télématique, however, in contrast to electricity, will not transmit an inert current, but information, that is to say, power. The Nora-Minc report, and the resonance it had within the Giscard government, put the effort to commercialize TITAN in a whole new light. Before the report, the DGT’s videotex strategy had been a response to their British rivals, intended to avoid being caught unprepared and forced to operate under a British technical standard for videotex. Had it remained only that, France’s videotex efforts might well have languished, ending up much like Prestel, a niche service for a few curious early adopters and a handful of business sectors that found it useful. After Nora-Minc, however, videotex could only be construed as a central component of télématique, the basis for building a new future for the whole French nation, and it would receive more attention and investment than it might otherwise ever have hoped for. The effort to launch Minitel on a grand scale gained backing from the French state that might otherwise have failed to materialize, as it did for Théry’s plans for a national fax service, which dwindled to a mere Minitel printer accessory. This support included the funding to provide millions of terminals to the populace, free of charge. The DGT argued that the cost of the terminals would be offset by the savings from no longer printing and distributing the phone book, and from new network traffic stimulated by the Minitel service. Whether they sincerely believed this or not, it provided at least a fig leaf of commercial rationale for a massive industrial stimulus program, starting with Alcatel (paid billions of francs to manufacture terminals) and running downstream to the Transpac network, Minitel service providers, the computers purchased by those providers, and the software services required to run an on-line business. Man in the Middle In purely commercial terms, Minitel did not in fact contribute much to the DGT’s bottom line. It first achieved profitability on an annual basis in 1989, and if it ever achieved overall net profitability, it was not until well into its slow but terminal decline in the later 1990s. Nor did it achieve Nora and Minc’s aspiration to create an information-driven renaissance of French industry and society.
Alcatel and other makers of telecom equipment did benefit from the contracts to build terminals, and the French Transpac network benefited from a large increase in traffic – though, unfortunately, with the X.25 protocol they turned out to have bet on the wrong packet-switching technology in the long term. The thousands of Minitel service providers, however, mostly got their hardware and systems software from American providers. The techies who set up their own online services eschewed both the French national champion, Bull, and the dreaded giant of enterprise sales, IBM, in favor of scrappy Unix boxes from the likes of Texas Instruments and Hewlett-Packard. So much for Minitel as industrial policy; what about its role in energizing French society with new information services, which would reach democratically into both the most elite arrondissements of Paris and the plus petit village of Picardy? Here it achieved rather more, though still mixed, success. The Minitel system grew rapidly, from about 120,000 terminals at its initial large-scale deployment in 1983, to over 3 million in 1987 and 5.6 million in 1990.7 However, with the exception of the first few minutes of the electronic phonebook, actually using those terminals cost money on a minute-by-minute basis, and there’s no doubt that usage was distributed much more unequally than the equipment. The most heavily used services, the online chat rooms, could easily burn hours of call time in an evening, at a base rate of 60 francs per hour (equivalent to about $8, more than double the U.S. minimum wage at the time). Nonetheless, nearly 30 percent of French citizens had access to a Minitel terminal at home or work in 1990. France was undoubtedly the most online country (if I may use that awkward adjective) in the world at that time. In that same year, the largest two online services in the United States, that colossus of computer technology, totaled just over a million subscribers, in a population of 250 million8. And the catalog of services that one could dial into grew as rapidly as the number of terminals – from 142 in 1983 to 7,000 in 1987 and nearly 15,000 in 1990. Ironically, a paper directory was needed to index all of the services available on this terminal that was intended to supplant the phone book. By the late 1980s that directory, Listel, ran to 650 pages.9 A man using a Minitel terminal Beyond the DGT-provided phone directory, services ran the gamut from commercial to social, and covered many of the major categories we still associate today with being online – shopping and banking, travel booking, chat rooms, message boards, games. To connect to a service, a Minitel user would dial an access number, most often 3615, which connected his phone line to a special computer in his local telephone switching office called a point d’accès vidéotexte, or PAVI. Once connected to the PAVI, the user could then enter a further code to indicate which Minitel service they wished to connect to. Companies plastered their access code in a mnemonic alphabetic form onto posters and billboards, much as they would do with website URLs in later decades: 3615 TMK, 3615 SM, 3615 ULLA. The 3615 code connected users into the PAVI’s “kiosk” billing system, introduced in 1984, which allowed Minitel to operate much like a news kiosk, offering a variety of wares for sale from different vendors, all from a single convenient location.
Of the sixty francs charged per hour for basic kiosk services, forty went to the service itself, and twenty to the DGT to pay for the use of the PAVI and the Transpac network. All of this was entirely transparent to the user; the charges would appear automatically on their next telephone bill, and they never needed to provide payment information to establish a financial relationship with the service provider. As access to the open internet began to spread in the 1990s, it became popular for the cognoscenti to retrospectively deprecate the online services of the era of fragmentation – the CompuServes, the AOLs – as “walled gardens”10. The implied contrast in the metaphor is to the freedom of the open wilderness. If CompuServe is a carefully cultivated plot of land, the internet, from this point of view, is Nature itself. Of course the internet is no more natural than CompuServe, or Minitel. There is more than one way to architect an online service, and all of them are based on human choices. But if we stick to this metaphor of the natural versus the cultivated, Minitel sits somewhere in between. We might compare it to a national park. Its boundaries are controlled, regulated, and tolled, but within them one can wander freely and visit whichever wonders might strike one’s interest. DGT’s position in the middle of the market between user and service, with a monopoly on the user’s entry point and the entire communications pathway between the two parties, offered advantages over both the monolithic, all-inclusive service providers like CompuServe and the more open architecture of the later Internet. Unlike the former, once past the initial choke point, the system opened out into a free market of services unlike anything else available at the time. Unlike the latter, there was no monetization problem. The user paid automatically for computer time used, avoiding the need for the bloated and intrusive edifice of ad-tech that supports the bulk of the modern Internet. Minitel also offered a secure end-to-end connection. Every bit traveled only over DGT hardware, so as long as you trusted both the DGT and the service to which you were connected, your communications were safe from attackers. This system also had some obvious disadvantages compared to the Internet that succeeded it, however. For all its relative openness, one could not just turn on a server, connect it to the net, and be open for business. It required government pre-approval to make your server accessible via a PAVI. More fatally, the Minitel’s technical structure was terribly rigid, tied to a videotex protocol that, while advanced for the mid-1980s, appeared dated and extremely restrictive within a decade.11 It supported pages of text, in twenty-four rows of forty characters each (with primitive character-based graphics) and nothing more. None of the characteristic features of the mid-1990s World Wide Web – free-scrolling text, GIFs and JPEGs, streaming audio, etc. – were possible on Minitel. Minitel offered a potential road out of the era of fragmentation, but, outside of France, it was a road not taken. The DGT, privatized as France Télécom in 1988, made a number of efforts to export the Minitel technology, to Belgium, Ireland, and even the U.S. (via a system in San Francisco called 101 Online). But without the state-funded stimulus of free terminals, none of them had anything like the success of the original.
And, with France Télécom, and most other PTTs around the world, now expected to fend for themselves as lean businesses in a competitive international market, the era when such a stimulus was politically viable had passed. Though the Minitel system did not finally cease operation until 2012, usage went into decline from the mid-1990s onward. In its twilight years it still remained relatively popular for banking and financial services, due to the security of the network and the availability of terminals with an accessory that could securely read and transmit data from banking and credit cards. Otherwise, French online enthusiasts increasingly turned to the Internet. But before we return to that system’s story, we have one last stop to visit on our tour of the era of fragmentation. Further Reading Julien Mailland and Kevin Driscoll, Minitel: Welcome to the Internet (2017) Marie Marchand, The Minitel Saga (1988)

The Electronic Computers, Part 3: ENIAC

The second electronic computing project to emerge from the war, like Colossus, required many minds (and hands) to bring it to fruition. But, also like Colossus, it would have never come about but for one man’s fascination with electronics. In this case, the man’s name was John Mauchly. John Mauchly Mauchly’s story intertwines in curious (and, to some, suspicious) ways with that of John Atanasoff. As you will recall, we last left Atanasoff and his assistant, Clifford Berry, in 1942, having abandoned their own electronic computer to take on other work for the war. Mauchly had quite a bit in common with Atanasoff: both were physics professors at lesser-known institutions, with no prestige or authority in the wider academic community. Mauchly languished in particular obscurity, as a teacher at little Ursinus College outside Philadelphia, which lacked even the modest prestige of Atanasoff’s Iowa State. Neither had done anything to merit the notice of their elite brethren at, say, the University of Chicago. Yet both were taken with the same eccentric idea: to build a computing machine from electronic components, the same parts used to make radios or telephone amplifiers. Predicting the Weather For a time, these two like-minded men formed a bond, of sorts. They met in late 1940, at a conference of the American Association for the Advancement of Science (AAAS) in Philadelphia. There, Mauchly presented a paper on his study of cyclical patterns in weather data, using an electronic harmonic analyzer he had designed. It was an analog computer1 similar in function to the mechanical tide predictor devised by William Thomson (later Lord Kelvin) in the 1870s. Atanasoff, sitting in the audience, knew he had found a fellow-traveler on the lonely road to electronic computing, and he did not hesitate to approach Mauchly after the talk to tell him about the machine he was building in Ames. But to understand how Mauchly ended up presenting a paper on an electronic weather computer in the first place, we must go back to his roots. Mauchly was born in 1907 as the son of another physicist, Sebastian Mauchly. Like many of his contemporaries, he developed an interest as a boy in radios and vacuum tubes, and he vacillated between electrical engineering and physics before deciding to focus on meteorology at Johns Hopkins. Unfortunately, he graduated from his Ph.D. program into the teeth of the Great Depression, and felt lucky to land a position at Ursinus in 1934, as the solitary member of its physics department. Aerial view of Ursinus College in 1930 At Ursinus, he began his dream project – to unveil the hidden cycles of the global weather machine, and thus learn to predict the weather not days, but months or years in advance. He was convinced that the sun drove multi-year weather patterns related to levels of sunspot activity. He would extract these patterns of solar gold from the vast silt of data at the U.S. Weather Bureau, with the help of a team of students and a bank of desk calculators acquired at a cut rate from collapsed banks. It soon became apparent, though, that there was simply too much data – too much silt to pan through. The machines could not compute fast enough, and moreover human error was introduced by the constant need to copy intermediate results from machine to paper. Mauchly began to think about another way. He knew about the vacuum tube counters pioneered by Charles Wynn-Williams that his fellow physicists used to count sub-atomic particles.
Given that electronic devices could clearly record and accumulate numbers, Mauchly wondered, why could they not perform more complex calculations? He spent several years of his spare time fiddling with electronic components: flip-flops, counters, a substitution cipher machine that used a mix of electronic and mechanical parts, and finally the harmonic analyzer, which he applied to his weather-prediction project, extracting what looked like multi-week patterns of rainfall variation in the U.S weather data. This is the finding that brought Mauchly to the AAAS in 1940, and the finding that brought Atanasoff to Mauchly. The Visit The pivotal event of Mauchly and Atanasoff’s relationship came about six months later, early in the summer of 1941. In Philadelphia, Atanasoff had told Mauchly about the electronic computer he was building in Iowa, and mentioned how cheaply he had managed to build it. In their correspondence afterward, he continued to make teasing hints about how he had built his computer at less than $2 per digit in hardware cost. Mauchly was intrigued, amazed even, by this achievement. By this time, he was entertaining serious plans for building an electronic calculator, but with no support from his college, he would have to pay for all the equipment out of his own pocket. A single tube typically cost $4, and it would take two tubes to store even a single binary digit in a typical flip-flop circuit. How, he wondered, could Atanasoff have possibly achieved such economy? Six months later, he was finally able to find the time to head west to satisfy his curiosity.  After a thousand mile cross-country drive, Mauchly and his son arrived at Atanasoff’s home in Ames in June 1941. Mauchly would later say that he came away disappointed. Atanasoff’s inexpensive storage was not electronic at all, but held in electro-static charges on a mechanical drum. Because of this and other mechanical parts, as we saw earlier, it could not compute at nearly the speed Mauchly dreamed of. He later called it “a mechanical gadget that uses some electronic tubes.”2 Yet shortly after the visit he wrote a letter praising Atanasoff’s machine, writing that it was “electronic in operation, and will solve within a very few minutes any system of linear equations involving no more than thirty variables.”  He claimed it would be both faster and cheaper than the mechanical Bush differential analyzer.3 Some three decades on, Mauchly and Atanasoff’s relationship would become pivotal to the arguments in Honeywell v. Sperry Rand, the court case whose ultimate resolution invalidated Mauchly’s patent claims to an electronic computer. Without commenting on the inherent merits of the patent itself, and allowing for the fact that Atanasoff was surely the more accomplished engineer, and granting that Mauchly’s retrospective opinions of Atanasoff and his computer are deeply suspect, still there is no reason to believe that Mauchly learned or copied anything of significance from Atanasoff. Clearly Mauchly’s general idea for an electronic computer did not come from Atanasoff. But more importantly, in no point of detail does the design of the later ENIAC have anything in common with the Atanasoff-Berry Computer. At most one could say that Atanasoff bolstered Mauchly’s confidence, providing an existence proof that electronic computing could work. The Moore School and Aberdeen Meanwhile, Mauchly was left where he had started. 
There was no magic trick for cheap electronic storage, and as long as he remained at Ursinus, he lacked the means to make his electronic dream a reality. Then came his lucky break. That same summer of 1941, he attended a summer course on electronics at the University of Pennsylvania’s Moore School of Engineering. By this time France was subjugated and Britain under siege, U-boats prowled the Atlantic, and American relations with an aggressive, expansionist Japan were deteriorating rapidly. Despite the isolationist bent of the populace as a whole, American intervention seemed probable, if not inevitable, to the elite at places like the University of Pennsylvania. The Moore School thus offered a course to bring scientists and engineers up to speed in preparation for possible wartime work, especially on the topic of radar technology.4 The Moore School of Engineering building in Philadelphia This course had two crucial consequences for Mauchly: first, it brought him in contact with Presper (Pres) Eckert, scion of a local real-estate empire and a young electronics whiz who had whiled away his teenage afternoons in the lab of television pioneer Philo Farnsworth. Eckert would later share authorship of the (ultimately invalidated) ENIAC patent with Mauchly. Second, it landed him a position in the Moore School faculty, ending his long academic isolation at the backwater of Ursinus College. This, it seems, was not due to any special merit on Mauchly’s part, but simply because the school was desperate for bodies to replace the academics that were being pulled into full-time war work. By 1942, however, a large part of the Moore School itself was taken over by a war project of its own: the computing of ballistic firing trajectories by mechanical and manual means. This project grew organically out of the school’s pre-existing relationship with the Aberdeen Proving Ground, about eighty miles down the coast in Maryland. The Proving Ground was created during the First World War to provide gunnery testing services to the Army, replacing the previous testing ground at Sandy Hook, New Jersey. In addition to actually test-firing weapons, it had the task of computing firing tables for use by artillery units in the field. Due to the complicating factor of air resistance, it was not possible to determine where a shell would land when fired from a gun by simply solving a quadratic equation. Nonetheless, high precision and accuracy were extremely important to artillery fire given that the initial shells were the most likely to kill and maim – after that the targeted men would go to ground as quickly as possible. To achieve such precision, modern armies built extensive tables which told gunners precisely how far away their shell would land when fired at a given angle. The tabulators used the shell’s initial velocity and position to compute the position and velocity a short time interval later, and then repeated the same computation for the next time step, and so on for hundreds or thousands of repetitions. For each combination of gun and shell, the same computations had to be made for every possible angle of fire, and accounting for a variety of different atmospheric conditions. The load of calculation was so massive that it took Aberdeen until 1936 to finish out the firing tables that it began at the conclusion of the First World War. Obviously, Aberdeen was in the market for a better solution. 
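Before moving on to that solution, the step-by-step method described above can be made concrete with a minimal sketch in Python, assuming a simple quadratic air-drag model. The real firing-table equations and drag functions were considerably more elaborate; every constant here is invented for illustration.

```python
import math

# March a shell forward in small time steps, as a human computer did with a
# desk calculator: from the current position and velocity, compute the next.

G = 9.81        # gravity, m/s^2
K = 5e-5        # invented drag coefficient per unit mass, 1/m
DT = 0.1        # time step, seconds

def shell_range(v0: float, elevation_deg: float) -> float:
    """Return the horizontal distance (meters) at which the shell comes down."""
    angle = math.radians(elevation_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        ax = -K * speed * vx            # drag opposes the velocity
        ay = -G - K * speed * vy        # gravity plus drag
        x, y = x + vx * DT, y + vy * DT
        vx, vy = vx + ax * DT, vy + ay * DT
    return x

# One toy row of a firing table: range at several elevations for a 450 m/s shell.
for elevation in (15, 30, 45, 60):
    print(f"{elevation:2d} degrees -> {shell_range(450, elevation):7.0f} m")
```

Repeating this march for every combination of gun, shell, elevation, and atmospheric condition is what made the workload so crushing, and so attractive a target for automation.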
In 1933, it made a deal with the Moore School: the Army would provide the money to build two differential analyzers, analog computers modeled on the MIT design overseen by Vannevar Bush. One would be shipped down to Aberdeen, but the other would remain on loan at the Moore School for whatever use its professors saw fit. The analyzer could plot a trajectory in fifteen minutes that would have taken a human computer multiple days, albeit with somewhat less precision. Demonstration of a howitzer at Aberdeen, ca. 1942 In 1940, however, the research division, now called the Ballistic Research Laboratory (BRL), called in its loan, and took over the Moore School machine to begin plotting artillery tables for the looming war. The school’s computing group was also pulled in to supplement the machine with human calculation. By 1942, 100 woman computers at the school were working six days a week to churn out computations for the war – among them was Mauchly’s wife, Mary, who worked on Aberdeen’s firing tables. Mauchly himself was put in charge of another group of computers working on radar antenna calculations. Ever since arriving at the Moore School, Mauchly had been shopping his idea for an electronic computer around among the faculty. Already he had some significant allies, including Presper Eckert and John Brainerd, a more senior faculty member. Mauchly provided the vision, Eckert the engineering chops, and Brainerd the credibility and legitimacy. In the spring of 1943, the three decided the time was right to pitch Mauchly’s long-simmering idea directly to the Army. But the mysteries of the climate that he had long hoped to unveil would have to wait. His new computer would serve the needs of a new master: tracing not the eternal sinusoids of global temperature cycles, but the all-too-mortal ballistic arcs of artillery shells. ENIAC In April 1943, Mauchly, Eckert, and Brainerd drafted a “Report on an Electronic Diff. Analyzer.” It quickly acquired them another ally, Herman Goldstine, a mathematician and Army officer who served as the liaison between Aberdeen and the Moore School. With Goldstine’s help, the group pitched their idea to a committee at the BRL, and got an Army grant to fund their project, with Brainerd as the principal investigator. They were to finish the machine by September 1944, with a budget of $150,000. The team dubbed their project ENIAC: Electronic Numerical Integrator, Analyzer and Computer. Goldstine (2nd from left) after the war with a later electronic computer As with Colossus in the U.K., the established engineering authorities in the U.S., such as the National Defense Research Committee (NDRC), received the ENIAC proposal with skepticism. The Moore School did not have the reputation of an elite institution, yet it proposed to build something unheard of. Even industrial giants like RCA had struggled to build relatively simple electronic counting circuits, much less a highly configurable electronic computer.5 George Stibitz, architect of the Bell relay computers and now part of the NDRC computing projects committee, believed that ENIAC would take far too long to build to be useful to the war. In that he proved correct. ENIAC would take more than twice as long and cost three times as much as originally planned. It sucked up a huge portion of the human resources of the Moore School. The design work alone required the help of seven others in addition to the initial group of Mauchly, Eckert, and Brainerd. 
As with Colossus, the ENIAC project co-opted many of the school’s human computers to help configure their electronic replacement, among them Herman Goldstine’s wife Adele, and Jean Jennings (later Bartik), both of whom later did important work of their own in computer design. As the “NI” in ENIAC suggests, the Moore School team sold the Army on a digital, electronic, version of the differential analyzer, which would solve integrations for trajectories far faster and more precisely than its analog, mechanical counterpart.6 But what they delivered was rather more than that. The core of the ENIAC’s capabilities, again as with Colossus, came from its variety of functional units. The most used were accumulators for adding and counting. Their design derived directly from the Wynn-Williams-style electronic counters used by physicists, and they literally added by counting, in the way that a preschooler might add on his fingers. Other functional units included multipliers and function generators for doing table look-ups to shortcut more complex functions (such as sine and cosine). Each functional unit had local program controls to set up short sequences of operations. As with Colossus, programming was done with a combination of panel switches and telephone-style plugboards. ENIAC also had some electro-mechanical parts, notably the relay register that served as a buffer between the electronic accumulators and the IBM punch card machines used for input and output. Again this was much the same architecture as Colossus. Sam Williams of Bell Labs, who had collaborated with George Stibitz on the construction of the Bell relay computers, also built the register for ENIAC. But one key difference from Colossus made ENIAC a far more flexible machine: its programmable central control. The master programmer unit sent pulses to the functional units to trigger a pre-programmed sequence, and received a return pulse when the unit finished. It then went on to the next operation in its master coordination sequence, producing the overall desired computation as a function of many of these small sequences. The master programmer could also make decisions, or branches, by using a stepper: a ring counter that determined to which of six output lines an input pulse would be forwarded. Thus the master could execute up to six different functional sequences depending on the current state of the stepper. This flexibility would enable ENIAC to solve problems far removed from its initial bailiwick of ballistic computations. Configuring ENIAC with switches and plugboards Eckert was responsible for keeping all the electronics in this monstrosity humming, and independently came up with the same basic tricks as Flowers at Bletchley: run the filaments at well below their rated current, and never turn the machine off. Because of the huge number of tubes involved, however, another trick was required: plug-in units holding several dozen tubes that could be quickly swapped out in case of failure. Maintenance workers would then find and replace the precise failed tube at leisure while ENIAC returned to work immediately. Even with all these measures, given the vast number of tubes in ENIAC, it could not crank away on a problem over the weekend or even overnight, as relay computers routinely did. Invariably a tube would fail. A close up of some of ENIAC’s many tubes. Because of the slottable tube racks (visible here), the caption is not quite accurate. Accounts of the ENIAC often emphasize its tremendous size.
Configuring ENIAC with switches and plugboards Eckert was responsible for keeping all the electronics in this monstrosity humming, and independently came up with the same basic tricks as Flowers at Bletchley: run the filaments at well below their rated current, and never turn the machine off. Because of the huge number of tubes involved, however, another trick was required: plug-in units holding several dozen tubes that could be quickly swapped out in case of failure. Maintenance workers would then find and replace the precise failed tube at leisure while ENIAC returned to work immediately. Even with all these measures, given the vast number of tubes in ENIAC, it could not crank away on a problem over the weekend or even overnight, as relay computers routinely did. Invariably a tube would fail. A close-up of some of ENIAC’s many tubes. Because of the slottable tube racks (visible here), the caption is not quite accurate. Accounts of the ENIAC often emphasize its tremendous size. Its many racks of tubes – 18,000 of them in total – switches, and plugboards would fill a typical ranch house and overflow into the front yard. Its size was a product not merely of its components (tubes were relatively large) but of its peculiar architecture. Though all mid-century computing machines appear gigantic by modern standards, the next iteration of electronic computers was far smaller than ENIAC, providing greater capability with one-tenth the number of electronic components. Wide shot of ENIAC at the Moore School ENIAC’s grotesque size proceeded directly from two major design choices. The first traded off cost and complexity for potential speed. Virtually all later computers instead stored numbers in registers and then processed them through a separate arithmetic unit, storing the result back into a register. ENIAC, however, had no separation between storage and processing units: each number storage location was also a processing unit that could add and subtract, and thus required many extra tubes. It could be seen as simply a massively accelerated version of the Moore School’s human computing division, for it “had the computational architecture of a roomful of twenty computers working with ten-place desk calculators and passing results back and forth.”7 In theory this allowed ENIAC to compute in parallel across multiple accumulators, but that capability was little used, and was removed in 1948. The second design choice is harder to defend on any grounds. Unlike the ABC or the Bell relay machines, ENIAC did not store numbers in a binary representation. Instead it translated a decimal mechanical counting wheel directly into electronics, with ten flip-flops per digit – if the first was lit up, that was a 0, the second a 1, the third a 2, etc. This was hugely wasteful of expensive electronic components,8 and seems to have been done solely out of fear of the complexity of binary-decimal conversion during input and output. Yet the Atanasoff-Berry Computer, Colossus, and the Bell and Zuse relay machines all used binary internally and their designers had no great difficulty with converting between bases.
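To get a feel for just how wasteful the scheme was, here is a quick back-of-the-envelope comparison. The arithmetic is my own and counts only the storage flip-flops, not the further tubes each approach needs for carries, programming, and pulse shaping:

```python
# Storage cost of ENIAC's one-hot decimal ring counters versus a plain binary
# register covering the same range of values. Rough illustration only.

import math

def ring_counter_flipflops(digits: int) -> int:
    """ENIAC-style one-hot decimal: ten flip-flops per digit."""
    return 10 * digits

def binary_flipflops(digits: int) -> int:
    """Smallest binary register covering the same range of values."""
    return math.ceil(digits * math.log2(10))

# A ten-place number, like the desk calculators the accumulators mimicked:
print(ring_counter_flipflops(10))  # 100
print(binary_flipflops(10))        # 34
```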
These design choices would not be repeated. In this sense, ENIAC was like the ABC – a one-off oddity rather than the template for all modern computing machines. However, it was different in one very important sense – it proved beyond a shadow of a doubt the viability of electronic computing, by doing useful work, solving real problems with incredible speed, in the public eye. Vindication By November 1945, ENIAC was fully functioning. It could not boast the same robustness as its electromechanical cousins, but it was reliable enough for its several-hundred-fold speed advantage to tell. Whereas the differential analyzer took fifteen minutes, ENIAC could compute a ballistic trajectory in twenty seconds – less than the actual flying time of the shell.9 And, unlike the analyzer, it did so with the same precision as a human computer using a mechanical calculator. As Stibitz had predicted, however, ENIAC arrived too late to contribute to the war effort, and there was no longer a pressing need for artillery tables. But there was a secret weapons project operated out of Los Alamos, New Mexico, that had acquired a momentum of its own, and continued to operate after the war. It also continued to hunger for calculation. One of the Manhattan Project physicists, Edward Teller, had become taken as early as 1942 with the idea of a “Super”: a far more powerful weapon than those later dropped on Japan, whose explosive energy would come from atomic fusion, rather than fission. Teller believed he could ignite a fusion chain reaction in a mixture of deuterium (normal hydrogen plus one neutron) and tritium (normal hydrogen plus two neutrons). But for that to be feasible, the proportion of tritium required had to be low, for it was extremely rare. The Los Alamos scientists therefore brought to the Moore School a calculation to test the feasibility of the Super, by computing differential equations that modeled the ignition of a deuterium-tritium mix at various concentrations of tritium. No one at the Moore School had the clearance to know what the calculation was actually for, but they dutifully set up the data and equations provided by the Los Alamos scientists. In fact, the details of the calculation remain classified to this day (as does the entire program to build the “Super,” now known more commonly as the hydrogen bomb), though we know that Teller considered the result, rendered in February 1946, a vindication of his design.10 That same month, the Moore School gave ENIAC its public debut. In a ritual unveiling before assembled VIPs and the press, operators pretended to turn on the machine (though of course it was always on), then ran it through some ceremonial calculations, computing a ballistic trajectory to show off the unprecedented speed of its electronic components. Afterward, the staff distributed punched card outputs from the calculation to each attendee. ENIAC went on to solve several more real problems during the remainder of 1946: a series of calculations on the flow of fluids (such as airflow over a wing) by British physicist Douglas Hartree, another Los Alamos calculation for modeling the implosion of a fission weapon, and some trajectory calculations for a new ninety-millimeter cannon for Aberdeen. Then it fell silent for a year and a half. At the end of 1946, as part of the Moore School’s agreement with the Army, BRL packed up the machine and moved it down to the proving ground. Once there it suffered from persistent reliability problems, and the BRL team did not get it working smoothly enough to do any useful work until after a major overhaul, completed in March 1948.11 It didn’t matter. No one was paying attention to ENIAC any more. The race was already on to build its successor. Further Reading: Paul Ceruzzi, Reckoners (1983); Thomas Haigh et al., ENIAC in Action (2016); David Ritchie, The Computer Pioneers (1986)

Read more
High Pressure, Part 2: The First Steam Railway

Railways long predate the steam locomotive. Trackways with grooves to keep a wheeled cart on a fixed path date back to antiquity (such as the Diolkos, which could carry a naval vessel across the Isthmus of Corinth on a wheeled truck). The earliest evidence for carts running atop wooden rails, though, comes from the mining districts of sixteenth century Europe. Agricola describes a kind of primitive railway used by German miners in his 1556 treatise De Re Metallica. Agricola reports that the miners ran trucks called Hunds (“dogs”) (supposedly because of the barking noise they made while in motion) over two parallel wooden planks. A metal pin protruding down from the truck into the gap between the planks kept it from rolling off the track.[1] This system allowed a laborer to carry far more material out of the mine in a single trip than they could by carrying it themselves. British Railways Wooden railways called “waggon ways” are first attested in the coal-mining areas of Britain around 1600. These differed in two important ways from earlier mining carts: first, they ran outside the mine, carrying coal a short distance (perhaps a mile or two) to the nearest high-quality road or navigable waterway from which it could be brough to market. Second, they were drawn by horses, at least on the uphill courses—on some eighteenth-century wagon ways, the horse actually caught a ride downhill, standing on a flat carriage behind the cart. Flanged wheels to keep the wagon on the track were also probably introduced around this time. Both wheels and rails were still constructed of wood, however, which limited the load the wagons could carry.[2] By the middle of the eighteenth century, waggon ways crisscrossed the mining districts of northern England, especially around the coalfields, creating a substantial trade in birch wheels and rails of beech or ash from the South. They were called by many different names, such as “gangways,” “plateways,” “tramways,” or “tramroads.” Colliers invested sophisticated engineering into their design, using bridges, causeways, and tunnels to create a smooth grade from the pithead to the point of embarkation (such as the Tyne or the Severn rivers).[3] Most were no more than a mile or two long, but some ran as far as ten miles. They were smooth enough that a single horse could haul several times on rails what it could on an ordinary eighteenth-century road: the figures given by various sources for the load of a horse-drawn rail carriage range from two to ten tons, likely depending on the grade of the railway and the material composition of the rails and wheels.[4] The Little Eaton Gangway, a railway built in the 1790s, that, incredibly, continued to operate until 1908, when this photo was taken. It carried coal five miles down to the Derby Canal. This close-up of the Little Eaton Gangway shows clearly the design of the railbed, with L-shaped rails to hold the wagon on the track, and stone blocks underneath to which they were nailed. The Penydarren railway, discussed below, had the same design. This may seem prologue enough, but two further milestones in the development of railways still intervened before the steam locomotive came into the picture. Around the late 1760s, the Darbys of Coalbrookdale step into our history once more. 
They are reputed to have been the first to introduce durable cast iron plates to strengthen the rails that they used to carry materials among their various Shropshire properties.[5] Later the Darbys and others introduced fully cast-iron rails, doing away with wood altogether. With this change in material the railways of England (already intimately linked with coal mining) now became fully enmeshed in the cycle of the triumvirate—coal, iron, and steam—well before they became steam-powered. Then, in 1799, came the first public horse-drawn railway. Up to this time, all railways  served the needs of a single owner (though some required an easement across neighboring properties), typically a mining concern. But the Surrey Iron Railway, which ran from Croydon (south of London) up to the Thames at Wandsworth, was open to any paying cargo, much like a turnpike road or a canal. Among the backers of the Surrey Iron Railway was a Midlands colliery owner, William James, who will have an important part to play later in our story.[6] So, although we think of them now as two components of a single technological system, the locomotive and the railway did not start out that way. Instead, the locomotive appeared on the scene as an alternative way of hauling freight over an already familiar and well-established transportation medium. Trevithick Richard Trevithick was the first Englishman to attempt this substitution. He was born in 1771, in the heart of the copper-mining region of Cornwall. His birthplace, the village of Illogan, sat beneath the weathered hill of Carn Brea, said to be the ancient dwelling place of a giant.[7] But the only giants still found upon the landscape of eighteenth-century Cornwall breathed steam. They sheltered in the stone engine houses that still dot the countryside today, and raised water from the bottom of the mine, allowing the proprietors to delve ever deeper into the earth. Trevithick’s father was a mine “captain,” a high-status position with the responsibilities of a general manager and some of the same cachet among the mining community as a sea captain would have in a nautical community. This included the privilege of an honorific title: he was “Captain Trevithick” to his neighbors. The elder Trevithick’s work included serving as mine engineer and assayer, and he would have been familiar with all the technical workings of the mine, from the digging equipment to the pumping engine. The younger Trevithick must have learned well from his father. At fifteen, he was employed by his father at Dolcoath, the most lucrative copper mine of the region. 
By age 21, Trevithick had grown into something of a giant himself—standing a burly six feet two, he was said to amuse himself by hurling sledgehammers over buildings—and the miners of Cornwall already consulted him for his expertise on steam engines.[8] A portrait of Trevithick painted in 1816, when he was 45. He gestures to the Andes of Peru in the background, where Trevithick intended, at the time, to make his fortune in silver mining. By the 1790s, Boulton and Watt were about as popular in Cornwall as Fulton and Livingston were in the American West, and for the same reason: they were seen as grasping monopolists who kept the miners of Cornwall, who depended on effective pumps for their livelihood, in thrall to the Watt patent. Fifteen years earlier, Watt’s efficient engines had appeared as a lifeline to copper mines suffering under competition from the prodigious Parys Mountain in Anglesey, whose ample ores could be cheaply mined directly from the surface.[9] But as the mines continued to struggle, Boulton and Watt began to take shares in mines in lieu of payment, and set up a headquarters at Cusgarne, right in the copper district, to oversee their investments. One of their most skilled mechanics, William Murdoch, moved to Cornwall and acted as their local agent. To the copper miners, Boulton and Watt began to look like meddlers as well as leeches. Then, in the 1790s, Anglesey ran out of easy-to-reach ore, and the fortunes of the Cornwall copper mines began to look up. With their mutual enemy gone, the grudging partnership between the Cornish miners and Boulton and Watt soured rapidly. An 1831 engraving of Dolcoath copper mine, in Cornwall. Trevithick, a hot-headed young man, took up the banner of revolution against the Boulton and Watt regime in 1792, fighting a series of legal battles on behalf of the competing engine design of Edward Bull.
By 1796 every battle had been lost—Bull and Trevithick’s attempt to defy the Watt patent had failed, and there seemed to be nothing for the Cornwall interests to do but wait for the expiration of its term, in 1800.[10] But Trevithick found another way forward: strong steam. More than any other element, the separate condenser distinguished Watt’s patent engine from its predecessors. By shedding the condenser and operating well above atmospheric pressure instead, Trevithick could avoid claims of infringement. Concerned that releasing uncondensed steam would waste all the power of the engine, he consulted Cornwall’s resident mathematician, Davies Giddy. Giddy reassured him that he would waste a fixed amount of power equal to the weight of the atmosphere, and would gain some compensation in return by saving the power required to work an air pump and lift water into the condenser.[11]
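Giddy’s reasoning can be made concrete with a rough calculation. The sketch below is my own gloss on the argument, ignoring expansive working and every other refinement:

```python
# Per stroke, a piston of swept volume V delivers roughly
# (boiler pressure - back pressure) * V of work. With Watt's condenser the
# back pressure is near zero (a vacuum); exhausting to the open air it is one
# atmosphere. The penalty is therefore a fixed p_atm * V per stroke -- "the
# weight of the atmosphere" -- and it matters less the higher the boiler
# pressure, with the condenser's air pump work saved in partial compensation.

P_ATM = 1.0  # atmospheres

def fraction_of_work_lost(boiler_pressure_atm: float) -> float:
    """Share of gross work per stroke given up by exhausting to the air."""
    return P_ATM / boiler_pressure_atm

for p in (1.5, 3.0, 6.0, 10.0):
    print(f"boiler at {p:>4} atm: {fraction_of_work_lost(p):.0%} of gross work lost")
# boiler at  3.0 atm: 33% of gross work lost
# boiler at 10.0 atm: 10% of gross work lost
```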
As in the U.S., then, the socioeconomic environment pushed steam engine users on the periphery toward high-pressure, though in this case it was the presence of a rival patent rather than an absence of capital resources. Trevithick saw an immediate application for high-pressure steam as a replacement for the horse whim, an animal-powered lift which worked alongside the pumping engine in many Cornish mines, usually in the same vertical shaft, to raise ore and dross from below. A few whims had been installed with Watt engines, but Trevithick’s “puffers” (so called for the visible puff of exhaust steam they released) cost less to build and transport. The compact high-pressure engine also fit much more comfortably in the engine house alongside the pumping engine than a second Watt behemoth would. An 1806 Trevithick stationary steam engine, minus the flywheel it would have had at the time to maintain a steady motion. Note how the exhaust flue comes out of the middle of the cylindrical boiler, the same return-flue design used by Evans to extract additional heat from the hot gases of the furnace. Trevithick’s engines thus began replacing horse whims in engine houses across Cornwall in the early 1800s.[12] The Watt interests were not happy: much later in life Trevithick claimed that Watt (probably referring in this case to the belligerent James Watt, Jr., the inventor’s son) “said to an eminent scientific character still living that I deserved hanging for bringing into use the high pressure,” presumably because of the danger of explosion.[13] One of Trevithick’s boilers, installed to drain the foundation for a corn mill in Greenwich, did in fact explode in 1803 when left unattended, and the Watts did not miss the opportunity to get in their “I told you sos” in the press.[14] In future engines Trevithick would include two safety valves, plus a plug soldered with lead as a final safety measure: if the water level fell too low, the heat would melt the solder and blow out the plug, relieving excess pressure. But Trevithick’s interest had by this time already wandered from staid industrial applications to the more romantic dream of a steam carriage. Steam Carriage As we have seen already several times in this story, many inventors and philosophers had dreamed the same dream, dating back well over a century. To realize how readily available the idea of a steam carriage was, we must remember that steam power’s job, in a sense, had always been to replace either horse- or water-power, and that carriages were the most ubiquitous piece of horse-powered machinery around in early modern Europe. The first person we know of to successfully build a steam carriage (if we construe success loosely) was a French army officer named Nicolas-Joseph Cugnot. More specifically, he built a steam fardier, a cart for pulling cannon. It was a curious-looking tricycle with the boiler hanging off the front like an elephantine proboscis. Cugnot carried out some trial runs of his vehicle in 1769, but with no way to refill the boiler while in use, it had to stop every fifteen minutes to let the boiler cool, refill it, and work up steam once more. This was a curiosity without real practical value.[15] Cugnot’s Fardier à Vapeur, preserved at the Musée des Arts et Métiers in Paris. Trevithick probably never heard of Cugnot, but he certainly knew William Murdoch, Watt’s representative in Cornwall. Murdoch began experimenting with high-pressure steam carriages in the 1780s, and built a three-wheeled carriage that (like Cugnot’s cart) survives today in a museum. Unlike Cugnot’s vehicle, however, Murdoch’s surviving machine is a model, no more than a foot tall. Lacking the backing of his employers, who disliked strong steam and found the carriage concept unpromising if not ridiculous, Murdoch’s tinkerings did not even get as far as Cugnot’s. There is no evidence that he ever built a full-sized carriage.[16] Murdoch’s model steam carriage. It’s unclear why Trevithick decided to build a steam-powered vehicle—he may have been trying to develop a portable engine that could be moved between work sites under its own power. It is possible that Trevithick got the idea for a steam carriage from Murdoch, but, as we have seen, the idea was commonplace. In the execution of that idea, Trevithick went far beyond his predecessor. He began work on his steam carriage in late 1800, with the help of his cousin Andrew Vivian and several other local craftsmen. He already had in hand his high-pressure engine design, with a very favorable power-to-weight ratio compared to a Watt engine. A small and light engine was advantageous in a steamboat, but it was crucial in a land vehicle that had to rest on wheels and fit on narrow roads. He used the same return-flue boiler design as Oliver Evans had; given the distance and timing, they almost certainly arrived at this idea independently. Many wise men of the time doubted that a self-driving wheel was even possible, arguing that it would simply spin in place without an animal with traction to pull it. Trevithick therefore felt it necessary to first disprove this theory (in an experiment probably devised by Giddy) by sitting in a chaise with his compatriots, and moving the vehicle by turning the wheels with their hands.[17] In December 1801 they went for their first steam-powered ride.
What exactly the first carriage looked like is unknown, but it was likely a simple wheeled platform with engine and boiler mounted atop it and a crude lever for steering. Years later one “old Stephen Williams” (not so old at the time) would recall: I was a cooper by trade, and when Captain Dick [Trevithick] was making his first-steam carriage I used to go every day into John Tyack’s blacksmiths’ shop at the Weith, close by here, where they were putting it together. …In the year of 1801, upon Christmas-eve, coming on evening, Captain Dick got up steam, out in the high road… we jumped up as many as could; may be seven or eight of us. ‘Twas a stiffish hill going from the Weith up to Cambourne Beacon, but she went off like a little bird.[18] Within days, this first carriage quite literally crashed and burned (though the burning was apparently caused by leaving the carriage unattended with the firebox lit, not by the crash itself).[19] Nonetheless, Trevithick formed a partnership with his cousin Vivian to develop both the high-pressure engine and its use in carriages, and they went to London to seek a patent and additional backers and advisers, including such scientific luminaries as Humphrey Davy and Count Rumford. They had a second carriage built, this one designed as a true passenger vehicle with a compartment to accommodate eight. Giddy nicknamed it “Trevithick’s Dragon.” It worked better than the first attempt, running a good eight miles-per-hour on level ground, but the ride was rough. For some decades, steel spring suspensions had been standard on carriages, but the direct geared linkage between the drive wheels and the engine on Trevithick’s carriage did not allow for them to move independently.[20] The steering mechanism also worked poorly. In one early trial Trevithick tore the rail from a garden wall, and Vivian’s relative Captain Joseph Vivian (actually a sea captain) reported after a drive that he “thought he was more likely to suffer shipwreck on the steam-carriage than on board his vessel…”[21] It offered no obvious advantages over a horse carriage to offset the loss of comfort and control, not to mention the risk of fire and explosion. The Dragon attracted some curious onlookers, but no investors. Steam Railway If steam-powered vehicles on water found success first in the U.S. because alternative modes of inland transportation were lacking, steam-powered vehicles on land found success first in Britain because the transportation medium to support them already existed. The railways offered the perfect solution for the problems of Trevithick’s steam carriage: a road without cobbles or ruts to jounce on, a road that steered the carriage for you, and a road with no passengers to annoy or endanger. But Trevithick was not positioned to see it, because Cornwall did not have railways of any kind (its first, the Portreath Tramroad was not constructed until 1812). It would take a new connection to link the engine born out of the struggle with Watt over the mines of Cornwall to the rails created to solve the problems of northern coalfields. On business in Bristol in 1803, Trevithick made that connection, when he met a Welsh ironmaster named Samuel Homfray, who provided him with fresh capital in exchange for a share in his patent, and solicited his aid in building steam engines for his ironworks, called Penydarren. It also happened that Homfray also had part ownership of a railway, and the opportunity thus arose to marry high-pressure steam to rails. 
For Homfray this was also an opportunity to show up a rival. He and several other ironmasters had invested in a canal to carry their wares down to the port at Cardiff, but the controlling partner, Richard Crawshay, demanded exclusive privileges over the waterway. Homfray and several of the other partners exploited a loophole to bypass Crawshay. At the time, any public thoroughfare (on land or water) required an act of Parliament to approve its construction. The act approving the Cardiff canal also allowed for the construction of railways within four miles of the canal. The intent of this was to allow for feeder lines. Rails, at the time, were a strictly secondary transportation system. They provided “last-mile” service from mining centers to a navigable waterway. A boom in canal building that began in the later eighteenth century extended and interconnected those waterways, which offered far lower transportation costs than any form of land transportation. If a horse could pull several times the weight on a railway that it could on an ordinary road, it could pull several times more again when hitched to a canal barge.[22] (The plummeting transportation costs brought about by the ability to float cargo to the coast from nearly any town in England by horse-drawn barge account for the lack of British interest in riverine steamboats.) So the goal was almost always to get goods to water as quickly as possible. The trick that Homfray and his allies pulled was to build a railway as a primary transportation link in its own right, paralleling the canal for over nine miles, rather than connecting directly to it, and thereby neutering Crawshay’s privileges.[23] It was on this railway that Homfray (or perhaps Trevithick; which partner initiated the idea is unknown) proposed to replace horse power with steam power. Crawshay found the concept laughable. Like many of his contemporaries, he believed that the smooth wheels would find no purchase on smooth rails, and would simply spin in place. The ironmasters placed a not-so-friendly wager of 500 guineas over whether Trevithick could build a locomotive to haul ten tons of iron the length of the railway. On February 21st, 1804, Crawshay lost. As Trevithick reported to Giddy: Yesterday we proceeded on our journey with the engine; we carry’d ten tons of Iron, five waggons, and 70 Men riding on them the whole of the journey. Its above 9 miles which we perform’d in 4 hours & 5 Mints, but we had to cut down som trees and remove some Large rocks out of road. The engine, while working, went nearly 5 miles pr hour; …We shall continue to work on the road, and shall take forty tons the next journey. The publick untill now call’d mee a schemeing fellow but now their tone is much alter’d.[24] We should not picture the Penydarren engine in the mind’s eye as the iconic, fully-developed steam locomotive of the mid-19th century. The railbed itself looked very different than what we might imagine: the cast-iron rails were outward-facing Ls, whose vertical stroke kept the wheels from leaving the track. Nails driven into two parallel rows of stone blocks held the rails in place. This arrangement avoided having perpendicular rail ties (or sleepers, as the British call them) that could trip up the horses, who walked between the rails as they pulled their cargo. Trevithick’s locomotive resembled a stationary engine jury-rigged to a wheeled platform.
A crosshead and large gears carried power from the cylinder down to the left-hand wheels only (the right side received no power), and a flywheel kept the vehicle from lurching each time the piston reached the dead center position. Trevithick’s goal was to show off the versatility of high-pressure steam, not to launch a railroad revolution. A replica showing what the Penydarren locomotive may have looked like. Note the fixed gearing system for delivering power to the two wheels in the foreground, the flywheel in the background, and the L-shaped rails. Notice also how much it resembles Trevithick’s stationary steam engine, with additional mechanisms to transmit power to the wheels. The Penydarren locomotive performed several more trial runs; on at least one, the rails cracked under the engine’s weight: a portent of a major technical obstacle yet to be overcome before steam railways could find lasting success. Trevithick then seems to have removed the engine and put it to work running a hammer in the ironworks; what became of the rest of the vehicle is unknown.[25] Many other endeavors captured Trevithick’s attention in the following years, among them stationary engines at Penydarren and elsewhere, steam dredging experiments, and a scheme to use a steam tug to drag a fireship into the midst of Napoleon’s putative invasion fleet at Boulogne (as we have seen, Robert Fulton was at this time trying to sell the British government on his “torpedoes” to serve the same purpose). In 1808, he made one last stab at steam locomotion, a demonstration vehicle called the Catch-me-who-can that ran over a temporary circular track in London. Again, rail breakage proved a problem. Trevithick hoped to earn some money from paying riders and to attract the interest of investors, but he failed on both counts.[26] The reasons for the lack of interest are clear. Trevithick’s locomotives were neither much faster nor obviously cheaper than a team of horses, and they came with a host of new, unsolved technical problems. Twenty more years would elapse before rails would begin to seriously challenge canals as major transport arteries for Britain, not mere peripheral capillaries. To make that happen would require improvements in locomotives, better rails, and a new way of thinking about the comparative economics of transportation. Trevithick himself had twenty-five more years of restless, peripatetic life ahead of him, much of it spent on fruitless mining ventures in South and Central America. In an irresistible historical coincidence, in 1827, at the end of a financially ruinous trip to Costa Rica, he crossed paths with another English engineer named Robert Stephenson. Stephenson gave the downtrodden older man fifty pounds to help him get home. After a spate of mostly failed or abortive projects, Trevithick died in 1833. The one item of real wealth remaining to him, a gold watch brought back from South America, went to defray his funeral expenses.[27] Young Stephenson, however, returned to much brighter prospects in England. He and his father would soon redeem the promise hinted at by the trials at Penydarren.

Read more
The Hobby Computer Culture

[This post is part of “A Bicycle for the Mind.” The complete series can be found here.] From 1975 through early 1977, the use of personal computers remained almost exclusively the province of hobbyists who loved to play with computers and found them inherently fascinating. When BYTE magazine came out with its premier issue in 1975, the cover called computers “the world’s greatest toy.” When Bill Gates wrote about the value of good software in the spring of 1976, he framed his argument in terms of making the computer interesting, not useful: “…software makes the difference between a computer being a fascinating educational tool for years and being an exciting enigma for a few months and then gathering dust in the closet.”[1] Even as late as 1978, an informed observer could still consider interest in personal computers to be exclusive to a self-limiting community of hobbyists. Jim Warren, editor of Dr. Dobb’s Journal of Computer Calisthenics and Orthodontia, predicted a maximum market of one million home computers, expecting them to be somewhat more popular than ham radio, which attracted about 300,000.[2] A survey conducted by BYTE magazine in late 1976 shows that these hobbyists were well-educated (72% had at least a bachelor’s degree), well-off (with a median annual income of $20,000, or $123,000 in 2025 dollars), and overwhelmingly (99%) male. Based on the letters and articles appearing in BYTE in that same centennial year of 1976, it is clear that what interested these hobbyists above all was the computers themselves: which one to buy, how to build it, how to program it, how to expand it and to accessorize it.[3] Discussion of practical software applications appeared infrequently. One intrepid soul went so far as to hypothesize a microcomputer-based accounting program, but he doesn’t seem to have actually written it. When  mention of software appeared it came most often in the form of games. The few with more serious scientific and statistical work in mind for their home computer complained of the excessive discussion of “super space electronic hangman life-war pong.” Star Trek games were especially popular:  In July, D.E. Hipps of Miami advertised a Star Trek BASIC game for sale for $10; in August, Glen Brickley of Florissant, Missouri wrote about demoing his “favorite version of Star Trek” for friends and neighbors; and in August, BYTE published, with pride, “the first version of Star Trek to be printed in full in BYTE” (though the author consistently misspelled “phasers” as “phasors”). Most computer hobbyists were electronic hobbyists first, and the electronics hobby grew up side-by-side with modern science fiction, and shared its fascination with the possibilities of future technology. We can guess that this is what drew them to this rare piece of popular culture that took the future and the “what-ifs” it poses seriously, rather than treating it as a mere backdrop for adventure stories.[4] The June 1976 issue of Interface is one of many examples of the hobbyists’ ongoing fascination with Star Trek. Other than a shared interest in computers—and, apparently, Star Trek—three kinds of organizations brought these men together: local clubs, where they could share expertise in software and hardware and build a sense of belonging and community; magazines like BYTE where they could learn about new products and get project ideas; and retail stores, where they could try out the latest models and shoot the shit with fellow enthusiasts. 
The computer hobbyists were also bound by a force more diffuse than any of these concrete social forms: a shared mythology of the origins of hobby computing that gave broader social and cultural meaning to their community. The Clubs The most famous computer club of all, of course, is the Homebrew Computer Club, headquartered in Silicon Valley, whose story is well documented in several excellent sources, especially Steven Levy’s book, Hackers. Its fame is well-deserved, for its role as the incubator of Apple Computer, if nothing else. But the focus of the historical literature on Homebrew as the computer club has tended to distort the image of American personal computing as a whole. The Homebrew Computer Club had a distinctive political bent, due to the radical left leanings of many of its leading members, including co-founder Fred Moore. In 1959, Moore had gone on hunger strike against the Reserve Officers’ Training Corps (ROTC) program at Berkeley, which had been compulsory for all students since the nineteenth century. He later became a draft resister and published a tract against institutionalized learning, Skool Resistance. Yet the bulk of Homebrew’s membership stubbornly stuck to technical hobbyist concerns, despite Moore’s efforts to turn their attention to social causes such as aiding the disabled or protesting nuclear weapons. To the extent that personal computing had a politics, it was a politics of independence, not social justice.[5] Cover of the second Homebrew Computer Club newsletter, with sketches of members. Only Fred Moore is labeled, but the man with glasses on the far right is likely Lee Felsenstein. Moreover, excitement about personal computing was not at all a phenomenon confined to the Bay Area. By the summer of 1975, Altair shipments had begun in earnest, and clubs formed across the United States and beyond where enthusiasts could share information and ask for help with their new (or prospective) machines. The movement continued to grow as new companies sprang up and shipped more hobby machines. Over the course of 1976, dozens of clubs advertised their existence or attempted to find a membership through classifieds in BYTE, from the Oregon Computer Club headquartered in Portland (with a membership of forty-nine), to a proposed club in Saint Petersburg, Florida, mooted by one Allen Swan. But, as one might expect, the largest and most successful clubs were concentrated in and around major metropolitan areas with a large pool of existing computer professionals, such as Los Angeles, Chicago, and New York City.[6] The Amateur Computer Group of New Jersey convened for the first time in June 1975, under the presidency of Sol Libes. Libes, a professor at Union County College, was another of those computer lovers who had been working on their own home computers for years before the arrival of the Altair, and who then suddenly found themselves joined by hundreds of like-minded hobbyists once computing became somewhat more accessible. Libes’s club grew to 1,600 members by the early 1980s, had a newsletter and software library, sponsored the annual Trenton Computer Festival, and is likely the only organization from the hobby computer years other than Apple and Microsoft to still survive today.[7] The Chicago Area Computer Hobbyist Exchange attracted several hundred members to its first meeting at Northwestern University in the summer of 1975.
Like many of the larger clubs, they organized information exchange around “special interest groups” for each brand of computer (Digital Group, IMSAI, Altair, etc.). The club also gave birth to one of the most significant novel software applications to emerge from the personal computer hobby, the bulletin board system—we will have more to say on that later in this series.[8] The most ambitious—one might say hubristic—of the clubs was the Southern California Computer Society (SCCS) of Los Angeles, founded in Don Tarbell’s apartment in June of 1975. Within the year the club could boast of a glossy club magazine (in contrast to the cheap newsletters of most clubs) called Interface, plans to develop a public computer center, and—in answer to the challenge of Micro-Soft BASIC—ideas about distributing their own royalty-free program library, including “‘branch’ repositories that would reproduce and distribute on a local basis.”[9] Not content with a regional purview, the leadership also encouraged the incorporation of far-flung club chapters into their organization; in that spirit, they changed their name in early 1977 to the International Computer Society. Several chapters opened in California, and more across the U.S., from Minnesota to Virginia, but interest in SCCS/ICS chapters could be found as far away as Mexico City, Japan, and New Zealand. Across all of these chapters, the group accumulated about 8,000 members.[10] The whole project, however, ran atop a rickety foundation of amateur volunteer work, and fell apart under its own weight. First came the breakdown in the relationship between the club and the publisher of Interface, Bob Jones. Whether frustrated with the club’s failure to deliver articles to fill the magazine (his version), or greedy to make more money as a for-profit enterprise (the club’s version), Jones broke away to create Interface Age, leaving SCCS scrambling to start up its own replacement magazine. Expensive lawsuits flew in both directions. Then came the mismanagement of the club’s group buy program: intended to save members money by pooling their purchases into a large-scale order with volume discounts, it instead lost thousands of members’ dollars to a scammer: “a vendor,” as one wry commenter put it, “who never vended” (the malefactor traded under the moniker of “Colonel Winthrop”).[11] The December 1976 issues of SCCS Interface and Interface Age. Which is authentic, and which the impostor? More lawsuits ensued. Squeezed by money troubles, the club leadership raised dues to $15 annually, and sent out a plea for early renewal and prepayment of multiple years’ dues. The club magazine missed several issues in 1977, then ceased publication in September. The ICS sputtered on into 1978 (Gordon French of Processor Technology announced his candidacy for the club presidency in March), then disappeared from the historical record.[12] Whatever the specific historic accidents that brought down SCCS, the general project—a grand non-profit network that would provide software, group buying programs, and other forms of support to its members—was doomed by larger historical forces. Though many clubs survived into the 1980s or beyond, they waned in significance with the maturing of commercial software and the turn of personal computer sellers away from hobbyists and towards the larger and more lucrative consumer and business markets.
Newer computer products no longer required access to secret lore to figure out what to do with them, and most buyers expected to get any support they did need from a retailer or vendor, not to rely on mutual support networks of other buyers. One-to-one commercial relations between buyer and seller became more common than the many-to-many communal webs of the hobby era. The Retailers The first buyers of the Altair could not find it in any shop. Every transaction occurred via a check sent to MITS, sight unseen, in the hopes of receiving a computer in exchange. This way of doing business suited the hardcore enthusiast just fine, but anyone with uncertainty about the product—whether they wanted a computer at all, which model was best, how much memory or other accessories they needed—was unlikely to bite. It had disadvantages for the manufacturer, too. Every transaction incurred overhead for payment processing and shipping, and demand was uncertain and unpredictable week to week and month to month. Without any certainty about how many buyers would send in checks next month, they had to scale up manufacturing carefully or risk overcommitting and going bust. Retail computer shops would alleviate the problems of both sides of the market. For buyers, they provided the opportunity to see, touch, and try out various computer models, and get advice from knowledgeable salespeople. For sellers, they offered larger, more predictable orders, improving their cash flow and reducing the overhead of managing direct sales. The very first computer shop appeared around the same time that the clubs began spreading, in the summer of 1975. But shops did not open in large numbers until 1976, after the hardcore enthusiasts had primed the pump for further sales to those who had seen or heard about the computers being purchased by their friends or co-workers. The earliest documented computer shop, Dick Heiser’s Computer Store, opened in July 1975 in a 1,000-square-foot storefront on Pico Boulevard in West Los Angeles. Heiser had attended the very first SCCS meeting in Don Tarbell’s apartment, and, seeing the level of excitement about Altair, signed up to become the first licensed Altair dealer. Paul Terrell’s Byte Shop followed later in the year in Mountain View, California. In March of 1976, Stan Veit’s Computer Mart opened on Madison Avenue in New York City and Roy Borrill’s Data Domain in Bloomington, Indiana (home to Indiana University). Within a year, stores had sprouted across the United States like spring weeds: five hundred nationwide by July 1977.[13] Paul Terrell’s Byte Shop at 1063 El Camino Real in Mountain View. Ed Roberts tried to enforce an exclusive license on Altair dealers, based on the car dealership franchise model. But the industry was too fast-moving and MITS too cash- and capital-strapped to make this workable. Hungry new competitors, from IMSAI to Processor Technology, entered the market constantly with new-and-improved models. Many buyers weren’t satisfied with only Altair offerings, MITS couldn’t supply dealers with enough stock to satisfy those who were, and they undercut even their few loyal dealers by continuing to offer direct sales in order to keep as much cash as possible flowing in. Even Dick Heiser, founder of the original Los Angeles Computer Store, broke ties with MITS in late 1977, unable to sustain an Altair-only partnership.[14] Dick Heiser with a customer at The Computer Store in Los Angeles in 1977.
Not only is the teen here playing a Star Trek game, a picture of the ubiquitous starship Enterprise can be seen hanging in the background. [Photo by George Birch, from Benj Edwards, “Inside Computer Stores of the 1970s and 1980s,” July 13, 2022] Given the number of competing computer makers, retailers ultimately had the stronger position in the relationship. Manufacturers who could satisfy the desires of the stores for reliable delivery of stock and robust service and customer support would thrive, while the others withered.[15] But independent dealers faced competition of their own. Chain stores could extract larger volume discounts from manufacturers and build up regional or even national brand recognition. Byte Shop, for example, expanded to fifty locations by March 1978. The most successful chain was ComputerLand, run by the same Bill Millard who had founded IMSAI. Though he later claimed everything was “clean and appropriate,” Millard clearly extracted money and employee time from the declining IMSAI in order to get his new enterprise off the ground. As the company’s chronicler put it, “There was magic in ComputerLand. Started on just Milliard’s $10,000 personal investment, losing $169,000 in its maiden year, the fledgling company required no venture capital or bank loans to get off the ground.” Some small dealers, such as Veit’s Computer Mart, responded by forming a confederacy of independent dealers under a shared front called “XYZ Corporation” that they could use to buy computers with volume discounts.[16] A ComputerLand ad from the February 1978 issue of BYTE. Note that the store offers many of the services that most people could have only found in a club in 1975 or 1976: assistance with assembly, repair, and programming. The Publishers Just like manufacturers, retailers faced their own cash flow risks: outside the holiday season they might suffer from long dry spells without many sales. The early retailers typically solved this by simply not carrying inventory: they took customer orders until they accumulated a batch of ten or so computers from the same manufacturer, then filled all of the orders at once. But a big boon for their cash flow woes came in the form of publications that sold for much less than a computer, but at a much higher and steadier volume, especially the rapidly growing array of computer magazines.[17] BYTE was both the first of the national computer magazines, and the most successful. Launched in New Hampshire in the late summer of 1975, by 1978 it built up a circulation of 140,000 issues per month. It got a head start by cribbing thousands of addresses from the mailing lists of manufacturers such as Nat Wadsworth’s Connecticut-based SCELBI, one of the proto-companies of the pre-Altair era. But, like so much of the hobby computer culture, BYTE also had direct ancestry in the radio electronics hobby.[18] Conflict among the three principal actors has muddled the story of its origins. Wayne Green, publisher of a radio hobby magazine called 73 in Peterborough, New Hampshire, started printing articles about computers in 1974, and found that they were wildly popular. Virginia Londner Green, his ex-wife, worked at the magazine as a business manager. Carl Helmers, a computer enthusiast in Cambridge, Massachusetts, authored and self-published a newsletter about home computers. 
One of the Greens learned of Helmers’ newsletter, and one or more of the three came up with the idea of combining Helmers’ computer expertise with the infrastructure and know-how from 73 to launch a professional-quality computer hobby magazine.[19] The cover of BYTE‘s September 1976 0.01-centennial issue (i.e., one year anniversary). The phrase “cyber-crud” and the image of a fist on the shirt of the man at center both come from Ted Nelson’s Computer Lib/Dream Machines. Also, these people really liked Star Trek. Within months, for reasons that remain murky, Wayne Green found himself ousted by his ex-wife, who took over publishing of BYTE, with Helmers as editor. Embittered, Green launched a competing magazine, which he wanted to call Kilobyte, but was forced to change to Kilobaud. Thus began a brief period in which Peterborough, with a population of about 4,000, served as a global hub of computer magazine publishing.[20] Another magazine, Personal Computing, spun off from MITS in Albuquerque. Dave Bunnell, hired as a technical writer, had become so fond of running the company newsletter Computer Notes, that he decided to go into publishing on his own. On the West Coast, in addition to the aforementioned Interface Age, there was also Dr. Dobb’s Journal of Computer Calisthenics and Orthodontia—conceived by Stanford lecturer Dennis Allison and computer evangelist Bob Albrecht (Dennis and Bob making “Dobb”), and edited by the hippie-ish Jim Warren, who drifted into computers after being fired from a position teaching math at a Catholic school for holding (widely-publicized) nude parties. Bunnell (right) with Bill Gates. This photo probably dates to sometime in the early 1980s. Computer books also went through a publishing boom. Adam Osborne, born to British parents in Thailand and trained as a chemical engineer, began writing texts for computer companies after losing his job at Shell Oil in California. When Altair arrived, it shook him with the same sense of revelation that so many other computer lovers had experienced. He whipped out a new book, Introduction to Microcomputers, and put it out himself when his previous publishers declined to print it. A highly technical text, full of details on Boolean logic and shift registers, it nonetheless sold 20,000 copies within a year to buyers eager for any information to help them understand and use their new machines.[21] The magazines served several roles. They offered up a cornucopia of content to inform and entertain their readers: industry news, software listings, project ideas, product announcements and reviews, and more. One issue of Interface Age even came with a BASIC implementation inscribed onto a vinyl record, ready to be loaded directly into a computer as if from a cassette reader. The magazines also provided manufacturers with a direct advertising and sales channel to thousands of potential buyers—especially important for smaller makers of computers or computer parts and accessories, whose wares were unlikely to be found in your local store. Finally, they became the primary texts through with the culture of the computer hobbyist was established and promulgated.[22] Each of the magazines had its own distinctive character and personality. BYTE was the magazine for the established hobbyist and tried to cover it all: hardware, software, community news, book reviews, and more. But the hardcore libertarian streak of founding editor Carl Helmers (an avid fan of Ayn Rand) also shone through in the slant of some of its articles. 
Wayne Green’s Kilobaud, with its spartan cover (title and table of contents only), appealed especially to those with an interest in starting a business to make money off of their interest in computers. The short-lived ROM spoke to the humanist hobbyist, offering longer reports and think-pieces. Dr. Dobb’s had an amateur, free-wheeling aesthetic and tone not far removed from an underground newsletter. In keeping with its origins as a vehicle to publish Tiny BASIC (a free Microsoft BASIC alternative), it focused on software listings. Creative Computing also had a software bent, but as a pre-Altair magazine designed to target users of BASIC in schools and universities, it took a more lighthearted and less technical tone, while Bunnell’s Personal Computing opened its arms to the beginner, with the message that computing was for everyone.[23] The Mythology of the Microcomputer Running through many of these early publications can be found a common narrative, a mythology of the microcomputer. To dramatize it: Until recently, darkness lay over the world of computing. Computers, a font of intellectual power, had served the interests only of the elite few. They lay solely in the hands of large corporate and government bureaucracies. Worse yet, even within those organizations, an inner circle of priests mediated access to the machine: the ordinary layperson could not be allowed to approach it. Then came the computer hobbyist. A Prometheus, a Martin Luther, and a Thomas Jefferson all wrapped into one, he ripped the computer and the knowledge of how to use it from the hands of the priests, sharing freedom and power with the masses. The “priesthood” metaphor came from Ted Nelson’s 1974 book, Computer Lib/Dream Machines, but became a powerful means for the post-Altair hobbyist to define himself against what came before. The imagery came to BYTE magazine in an October 1976 article by Mike Wilbur and David Fylstra: The movement towards personalized and individualized computing is an important threat to the aura of mystery that has surrounded the computer for its entire history. Until now, computers were understood by only a select few who were revered almost as befitted the status of priesthood.[24] In this cartoon from Wilbur and Fylstra’s article on the “computer priesthood,” the sinister “HAL” (aka IBM) finds himself chagrined by the spread of hobby computerists. BYTE editor Carl Helmers made the historical connection with the Enlightenment explicit: Personal computing as practiced by large numbers of people will help end the concentration of apparent power in the “in” group of programmers and technicians, just as the enlightenment and renaissance in Europe brought about a much wider understanding beginning in the 14th century.[25] The notion that computing had been jealously guarded by the powerful and kept away from the people can be found as early as June 1975, in the pages of the Homebrew Computer Club newsletter. In the words of club co-founder Fred Moore: The evidence is overwhelming the people want computers… Why did the Big Companies miss this market? They were busy selling overpriced machines to each other (and the government and military). They don’t want to sell directly to the public.[26] In the first collected volume of Dr.
Dobb’s Journal, editor Jim Warren sounded the same theme of a transition from exclusivity to democracy in more eloquent language: …I slowly come to believe that the massive information processing power which has traditionally been available only to the rich and powerful in government and large corporations will truly become available to the general public. And, I see that as having a tremendous democratizing potential, for most assuredly, information–ability to organize and process it–is power. …This is a new and different kind of frontier. We are part of the small cadre of frontiersmen who are exploring it. exploring this new frontier.[27] Personal Computing editor Dave Bunnell further emphasized the potential for the computer as a political weapon against entrenched bureaucracy: …personal computers have already proliferated beyond most government regulation. People already have them, just like (pardon the analogy) people already have hand guns. If you have a computer, use it. It is your equalizer. It is a way to organize and fight back against the impersonal institutions and the catch-22 regulations of modern society.[28] The journalists and social scientists who began to write the first studies of the personal computer in the mid-1980s lapped up this narrative, which provided a heroic framing for the protagonists of their stories. They gave it new life and a much broader audience in books like Silicon Valley Fever (“Until the mid-1970s when the microcomputer burst on the American scene, computers were owned and operated by the establishment–government, big corporations, and other large institutions”) and Fire in the Valley (“Programmers, technicians, and engineers who worked with large computers all had the feeling of being ‘locked out’ of the machine room… there also developed a ‘computer priesthood’… The Altair from MITS breached the machine room door…”)[29] This way of telling the history of the hobby computer gave deeper meaning to a pursuit that looked frivolous on the surface: paying thousands of dollars for a machine to play Star Trek. And, like most myths, it contained elements of truth. There was a large installed base of batch-processing systems, surrounded by a contingent of programmers denied direct access to the machine. Between the two there did stand a group of technicians whose relation to the computer was not unlike the relation of the pre-Vatican II priest to the Eucharist. But in promoting this myth, the computer hobbyists denied their own parentage, obscuring the time-sharing and minicomputer cultures that had made the hobby computer possible and from which it had borrowed most of its ideas. The Altair was not an ex nihilo response to an oppressive IBM batch-processing culture that had made access to computers impossible. The announcement of Altair had called it the “world’s first minicomputer kit”: it was the fulfillment of the dream of owning your own minicomputer, a type of computer most of its buyers had already used. It could not have been successful if thousands of people hadn’t already gotten hooked on the experience of interacting directly with a time-sharing system or minicomputer. This self-confident hobby computer culture, however—with its clubs, its local shops, its magazines, and its myths—would soon be subsumed by a larger phenomenon. From this point forward, no longer will nearly every major character in the story of the personal computer have a background in hobby electronics or ham radio. 
No longer will nearly all the computer makers and buyers alike be computer lovers who found their passion on mainframe, minicomputer, or time-sharing systems. In 1977, the personal computer entered a new phase of growth, led by a new class of businessmen who targeted the mass market.

The Transistor, Part 3: Endless Reinvention

For over a hundred years the analog dog wagged the digital tail. The effort to extend the reach of our senses – sight, hearing, even (after a manner of speaking) touch – drove engineers and scientists to search for better components for telegraph, telephone, radio and radar equipment. It was a happy accident that this also opened the door to new kinds of digital machines.1 I set out to tell the story of this repeated exaptation, whereby telecommunications engineers supplied the raw materials of the first digital computers, and sometimes even designed and built such computers themselves. By the 1960s, however, this fruitful relationship came to a close, and so too does my story. The makers of digital equipment no longer had any need to look outward to the world of the telegraph, telephone, and radio for new and improved switches, because the transistor itself provided a seemingly inexhaustible vein of improvements to mine. Year after year, they dug deeper and deeper, always finding ways to exponentially increase speed and reduce costs. None of this, however, would have happened if the invention of the transistor had stopped with Bardeen and Brattain. A Slow Start The popular press did not react to Bell Labs’ announcement of the transistor with great enthusiasm. On July 1, 1948, The New York Times gave it three paragraphs at the bottom of their “The News of Radio” bulletin. The notice appeared after several other announcements evidently deemed more significant: touting, for example, the one-hour “Waltz Time” broadcast coming to NBC. Hindsight tempts us to mock, or even scold, the ignorance of the anonymous authors – how did they fail to recognize the world-shaking event that had just transpired? The New York Times announces the arrival of the transistor But hindsight distorts, amplifying the few signals that we now know to be significant, though at the time they were lost in a sea of noise. The transistor of 1948 was very different from the transistors in the computer on which you are reading this.2 So different that, despite their common name and the unbroken line of ancestry that connects them, they should be considered a different species of thing, if not a different genus. They share neither material composition, nor structural form, nor functional principles; not to mention their tremendous difference in size. Only by being reinvented again and again did the clunky device constructed by Bardeen and Brattain become capable of transforming the world and the way we live in it. In truth, the germanium point-contact transistor deserved no more attention than it got. It suffered from a number of defects relative to its vacuum tube cousin. To be sure, it was rather smaller than even the most compact tube. Its lack of a heated filament meant it generated less heat, consumed less power, would not burn out, and required no warming up before it could be used. But the accumulation of dirt on the contact surface led to failures and undermined its potential for longer life; it produced a noisier signal; worked only at low power and in a narrow frequency band; failed when exposed to heat, cold or humidity; and could not be produced uniformly. A sequence of transistors all constructed in the same way by the same people would differ obnoxiously in their electrical characteristics. All of this baggage came at a retail price some eight times higher than that of a typical tube. 
Not until 1952 did Bell (and other patent licensees) work out the manufacturing kinks sufficiently for point-contact transistors to see real use, and even so they never spread much beyond the hearing aid market, where price sensitivity was relatively low, and the advantages offered in battery life dominated other considerations.3 The first efforts to remake the transistor into something better and more useful, however, had already begun. In fact, they began well before the public even learned that the transistor existed. Shockley’s Ambition As the year 1947 came to a close, an agitated Bill Shockley took a business trip to Chicago. He had some vague ideas about how to trump Bardeen and Brattain’s recently invented transistor, but he had not yet had the chance to develop them. And so, rather than enjoy his time off between business engagements, he spent New Year’s Eve and New Year’s Day in his hotel room, scribbling out some twenty pages of notes on his ideas. Among them was a proposal for a new kind of transistor consisting of a kind of semiconductor sandwich – a slice of p-type germanium between two pieces of n-type. Emboldened by this ace in his back pocket, Shockley then confronted Bardeen and Brattain on his return to Murray Hill, claiming full credit for the invention of the transistor. Had it not been his field-effect idea that had sent Bardeen and Brattain scurrying off to the lab? Should not the authorship of the patent thus fall entirely to him? But Shockley’s stratagem backfired: Bell Labs patent lawyers found that an obscure inventor, Julius Lilienfeld, had patented a field-effect semiconductor amplifier almost twenty years prior, in 1930. Lilienfeld had surely never built the thing he proposed, given the state of materials science at the time, but the risk of an interference proceeding was too great – better to avoid mentioning the field effect in the patent altogether. So, though Bell Labs would allow Shockley a generous share of the public credit, it named only Bardeen and Brattain in the patent. The damage was done, however: Shockley’s ambition destroyed his relationship with his two subordinates. Bardeen abandoned the transistor, and shifted his focus to superconductivity. He left Bell Labs in 1951. Brattain remained but refused to work with Shockley again, and insisted on being transferred to another group. Shockley’s inability to get along with others made it impossible for him to rise any further at Bell Labs, and so he too jumped ship. In 1956, he returned home to Palo Alto to found his own transistor manufacturing company, Shockley Semiconductor. Before leaving for the West Coast, he left his wife Jean, while she was recovering from uterine cancer, for his soon-to-be second wife, Emmy Lanning. But of the two halves of his California dream – new company and new wife – only one would last. In 1957, Shockley’s best engineers, irritated by his management style and the direction in which he was taking the company, defected to form a new firm called Fairchild Semiconductor. Shockley in 1956. So Shockley abandoned the hollowed-out husk of his company to take a position in the electrical engineering department at Stanford. There he proceeded to alienate his colleagues (and his oldest friend, physicist Fred Seitz) with his newfound interest in dysgenics and racial hygiene – unpopular topics in the United States since the late war, especially so in academia. He delighted in stirring up controversy, riling up the media, and drawing protesters. 
He died in 1989, alienated from his children and his peers, attended only by his eternally devoted second wife, Emmy. Though his own attempt at entrepreneurship had failed miserably, Shockley had cast seed onto fertile ground. The San Francisco Bay area teemed with small electronics firms and had been irrigated with funds from the federal government during the war. Fairchild Semiconductor, Shockley’s accidental offspring, itself spawned dozens of new firms, among them two names still very well-known today: Intel and Advanced Micro Devices (AMD). By the early 1970s, the area had acquired the moniker “Silicon Valley.” But wait – Bardeen and Brattain had built a germanium transistor. Where did the silicon come from? The forlorn former site of Shockley Semiconductor in Mountain View, California, as of 2009. It has since been demolished.   To The Silicon Junction The new kind of transistor that Shockley devised in his Chicago hotel had a happier destiny than that of its inventor. This was thanks to one man’s determination to grow single, pure semiconductor crystals. Gordon Teal, a physical chemist from Texas who had studied the then-useless element of germanium for his doctoral thesis, had joined Bell Labs in the thirties. After learning about the transistor, he became convinced that its reliability and power could be vastly improved by crafting it from a pure monocrystal, rather than the polycrystalline aggregates then being used. Shockley discouraged this effort, believing that it was an unnecessary waste of resources. But Teal persevered, and succeeded, with the help of a mechanical engineer named John Little, in constructing an apparatus that pulled a tiny seed crystal from a molten bath of germanium. As the germanium cooled around the seed, it extended its crystalline structure, drawing out a continuous, and almost entirely pure, semiconductor lattice. By the spring of 1949 Teal and Little could produce crystals on demand, and tests showed that they vastly outperformed their poly-crystalline counterparts. In particular, injected minority carriers could survive inside them for one hundred microseconds or more (versus ten microseconds or less in other crystal samples).4 Teal could now avail himself of more resources, and recruited more men to his team, among them another physical chemist who came to Bell Labs by way of Texas, Morgan Sparks.5 They began altering the melt to make p-type or n-type germanium by adding pellets of the appropriate doping agents. Within another year they had refined their technique to the point that they could actually grow an n-p-n germanium sandwich right out of the melt. And it worked just as Shockley had predicted: an electrical signal on the p-type material modulated the flow of electricity between two other leads attached to the n-type slices that surrounded it. Morgan Sparks and Gordon Teal at the workbench at Bell Labs. This grown-junction transistor outclassed its point-contact predecessor in almost every dimension. Most notably they were more reliable and predictable, far less noisy (and thus more sensitive), and extremely power efficient – drawing one million times less power than a typical vacuum tube.6 In July 1951, Bell Labs held another press conference to announce this new creation. Before the original transistor had even gotten off the ground commercially, it had already been rendered largely irrelevant. Yet it was still only the beginning. 
In 1952, General Electric (GE) announced a new process for making junction transistors called the alloy-junction method. This involved melting two pellets of indium (a p-type dopant) into either side of a thin slice of n-type germanium. The process was simpler and less expensive than growing junctions out of the melt, generated less resistance, and supported higher frequencies. Grown- and Alloy-Junction Transistors The following year Gordon Teal decided to return to his home state, and took a job at Texas Instruments (TI), in Dallas. Founded as Geophysical Service, Inc., a maker of oil prospecting equipment, TI branched out into electronics during the war and was now entering the transistor market under a license from Western Electric (Bell’s manufacturing arm). Teal brought with him the newest set of skills he had developed at Bell Labs: the ability to grow and dope monocrystals of silicon. Germanium’s most obvious weakness was its sensitivity to temperature. When exposed to heat, the germanium atoms in the crystal rapidly shed free electrons, and the material behaved more and more like a pure conductor. At around 170 degrees Fahrenheit, germanium devices ceased to work as transistors altogether. The military – a potential customer with little price sensitivity and a powerful need for stable, reliable and small electronic components – was a prime target for transistor sales. But temperature-sensitive germanium would not do for many military applications, especially in aerospace equipment. Silicon was much more stable, but this came at the price of a much higher melting point, as high as that of steel. This created great difficulties given that pure crystals were needed to make high quality transistors. The hot molten silicon would leach impurities from whatever crucible it rested in. Teal and his team at TI managed to overcome these difficulties, with the help of ultra-high purity silicon samples from DuPont. In May 1954, at an Institute of Radio Engineers conference in Dayton, Ohio, Teal demonstrated that the new silicon devices coming out of his lab continued to amplify even when immersed in hot oil.7 The Upstarts At last, some seven years after the initial invention of the transistor, it could be made from the material with which it has become synonymous. As much time again would pass before transistors appeared that roughly resemble the form of those in our microprocessors and memory chips. In 1955, Bell Labs scientists succeeded in making silicon transistors with a new doping technique – rather than adding solid dopant pellets to the liquid melt, they diffused vaporized dopants into a solid semiconductor surface. By carefully controlling the temperature, pressure and duration of exposure, they achieved exactly the desired depth and amount of doping. This more precise control of the manufacturing process resulted in a more precise control over the electrical properties of the end product. Just as importantly, the diffusion technique opened the doors to batch production – one could dope a large slab of silicon all at once and then slice it up into transistors after the fact. The military provided the cash needed to offset Bell’s high up-front costs for setting up manufacturing. 
They wanted the new product for the ultra-high-frequency Distant Early Warning Line, a chain of arctic radar stations designed to detect Soviet bombers coming over the North Pole, and were willing to pay $100 per transistor.8 Doping, combined with photolithography to control the placement of the dopants, made it possible to imagine etching a complete circuit in one semiconductor wafer, an achievement which was realized independently at Fairchild Semiconductor and Texas Instruments in 1959. Fairchild’s “planar process” used the chemical deposition of metal films to connect the transistor’s electrical contacts. This obviated the need for hand wiring, simultaneously reducing costs and increasing reliability. Finally, in 1960, two Bell Lab engineers (John Atalla and Dawon Kahng) realized Shockley’s original concept for a field-effect transistor. A thin layer of oxide on the semiconductor surface proved highly effective at suppressing the surface states, allowing the electric field from an aluminum gate to pass into the body of the silicon. This was the origin of the MOSFET (metal-oxide semiconductor field-effect transistor), which proved so amenable to miniaturization and still features in almost all computers today.9 Here at last, thirteen years after the initial invention of the transistor, is something recognizably like the transistor in your computer. It was simpler to make and used less power than junction transistors, but was a laggard in responding to signals. Not until the advent of large-scale integration circuits with hundreds or thousands of components on a single chip did the advantages of field-effect transistors come to the fore. Patent illustration for the field-effect transistor The field-effect proved to be the last major contribution by Bell Labs to the development of the transistor. The large electronics incumbents such as Bell (via Western Electric), General Electric, Sylvania, and Westinghouse developed an impressive record of semiconductor research. From 1952-1965, Bell Labs alone secured well over two hundred patents in the field. Nonetheless the commercial marketplace rapidly passed into the hands of new players like Texas Instruments, Transitron and Fairchild. The early market for transistors was simply too small for the big incumbents to pay much attention to: roughly $18 million a year in the mid-1950s, versus over $2 billion for the electronics market as a whole. Meanwhile, though, the research labs of those same incumbents served as unwitting training facilities, where young scientists could soak up knowledge about semiconductors before moving on to sell their services to smaller firms. By the time the market for tube electronics began to decline seriously, in the mid-1960s, it was far too late for Bell, Westinghouse and the like to overtake the upstarts.10 The Computer Transistorized There were four notable areas where transistors made significant inroads in the 1950s. The first two were hearing aids and portable radios, where lower power consumption (and thus longer battery life) trumped other considerations. The U.S. military was the third. They had high hopes for transistors as rugged and compact components in everything from field radios to ballistic rockets. But in the early years their spending on transistors was more of a bet on the future of the technology than an indication of its present value. And, finally, there was digital computing. 
In the case of the computer, the severe disadvantages of vacuum tube switches were well known, so much so that many skeptics before the war had believed that an electronic computer could never be made practical. When assembled in units of thousands, tubes devoured electrical power while generating vast amounts of heat, and could be relied on only to burn out regularly. Thus the power-sipping, cool, and filament-less transistor appeared as a kind of savior to computer manufacturers. Its disadvantages as an amplifier (such as a noisier output signal) presented much less of a problem when used as a switch. The only real obstacle was cost, and that would begin to fall precipitously, in due time. All of the early American experiments in transistorized computers occurred at the intersection of the desire of the military to explore the potential of a promising new technology, and the desire of computer engineers to migrate to a new, better kind of switch. Bell Labs built the TRADIC in 1954 for the U.S. Air Force, to see if transistors would make it possible to put a digital computer on board a bomber, to replace analog navigation and bomb-sighting aids. MIT’s Lincoln Laboratory developed the TX-0 computer as part of its vast air-defense system project, in 1956. The machine used yet another transistor variant, the surface-barrier transistor, which was well suited to high-speed computing. Philco built its SOLO computer under Navy contract (but really on behalf of the National Security Agency), completing the work in 1958. (It was another surface-barrier design.) The story in Western Europe, which was not so flush with Cold War military resources, was rather different. Machines like the Manchester Transistor Computer, Harwell CADET (another ENIAC-inspired name, obscured by mirror-writing), and the Austrian Mailüfterl were side projects, using whatever resources the creators could scrape together – including first-generation point-contact transistors. There is much jockeying among these various projects for the title of first transistorized computer. It all, of course, depends on which definitions one chooses for “first”, “transistorized,” and “computer.” We know where the story ends up in any case. The commercialization of transistorized computers followed almost immediately. Year-by-year computers of the same price grew ever more powerful while computers of the same power fell ever lower in price, in a process so seemingly inexorable that it became enshrined as a “law”, to sit alongside gravity and the conservation of matter. Shall we quibble over which was the first pebble in the avalanche? Why Moore’s Law? As we reach the close of our story of the switch, it is worth asking the question: What did cause this avalanche? Why does Moore’s Law exist?11 There is no Moore’s Law of airplanes or vacuum cleaners, nor, for that matter, of vacuum tubes or relays. There are two parts to the answer: the logical properties of the switch as a category of artifact, and the ability to use entirely chemical processes to make transistors. First, the essence of the switch. The properties of most artifacts must satisfy a wide variety of non-negotiable physical constraints. A passenger airplane must be able to hold up the combined weight of many people. A vacuum cleaner must be able to pull up a certain amount of dirt in a given amount of time, over a given physical area. Neither airplanes nor vacuum cleaners would be useful if reduced to nanoscale. 
On the other hand, a switch – an automatic switch, one never touched by human hands – has very few physical constraints. It needs to have two distinguishable states, and it needs to be able to tell other switches like itself to change between those states. That is to say, all it needs to do is to turn on, and turn back off again. Given this, what is special about transistors? Why have other kinds of digital switch not seen such exponential improvements? Here we come to the second fact. Transistors can be made using chemical processes with no mechanical intervention. From the beginning, the core element of transistor manufacturing was the application of chemical dopants. Then came the planar process, which removed the last mechanical step in the manufacturing process – the attachment of wires. It thus cast off the last physical constraint on miniaturization. No longer did transistors need to be large enough for fingers – or a mechanical device of any sort – to handle. Mere chemistry would do the job, at an unimaginably tiny scale: acids to etch, light to control which parts of the surface would resist the etching, and vapors to deposit dopants and metal films into the etched corridors. Why miniaturize in the first place? Decreased size brought with it an array of pleasant side-effects: higher switching speeds, reduced power consumption, and lower unit costs. Therefore powerful incentives drove everyone in the business to look for ways to make their switches ever smaller. And so the semiconductor industry, within a human lifetime, went from making switches the size of a fingernail to packing tens of millions of switches within a single square millimeter. From asking eight dollars a switch to offering twenty million switches per dollar. The Intel 1103 memory chip from 1971. Already the individual transistor, mere tens of micrometers across, had disappeared from sight. It has shrunk a thousand-fold since.
[Previous part]
Further Reading
Ernest Braun and Stuart MacDonald, Revolution in Miniature (1978)
Michael Riordan and Lillian Hoddeson, Crystal Fire (1997)
Joel Shurkin, Broken Genius (2006)

The Electronic Computers, Part 2: Colossus

In 1938 the head of the British Secret Intelligence Service quietly bought up a sixty-acre estate fifty miles from London. Located at the junction of railways running up from London to parts north and from Oxford in the west to Cambridge in the east, it was an ideal site for an organization that needed to be out of view, yet within easy reach of the most important centers of British knowledge and power. The estate, known as Bletchley Park, became the center of Britain’s code-breaking effort during World War II. It is perhaps the only place in the world that is famous for cryptography. Tunny In the summer of 1941, work was already well underway at Bletchley on cracking the famous Enigma cipher machine, used by both the German Army and Navy. If you have seen a movie about British code-breaking, it was about Enigma, but we will have little to say about it here. That’s because, shortly after the invasion of the Soviet Union, Bletchley picked up on a new kind of encrypted traffic. It did not take long for the cryptanalysts to discern the general nature of the machine used to send this traffic, which they dubbed “Tunny.” Unlike Enigma, which required hand transcription of messages, Tunny was attached directly to a teletypewriter. The teletypewriter converted each character typed by the operator into a stream of “dots and crosses” (equivalent to Morse dots and dashes) in the standard Baudot code, with five symbols per character. This was the unencrypted message text. The Tunny machine simultaneously used twelve wheels to generate its own parallel stream of dots and crosses: the key. It then “added” the key to the message, producing the encrypted cipher text which went out over the air. This addition was done with modulo-2 arithmetic (i.e. wrapping 2 back around to 0), reading the dot as 0 and the cross as 1:
0 + 0 = 0
0 + 1 = 1
1 + 1 = 0
Another Tunny on the receiver’s side, with the same settings, generated the same key and added it to the cipher text to extract the original message, which was printed onto paper tape at the receiver’s teletypewriter.1 The work of understanding Tunny was made much easier by the fact that in the early months of its use, senders transmitted the wheel settings to be used before sending the message. Later the Germans issued codebooks with pre-defined wheel settings – the sender would send only the code, which the recipient could use to look up the proper wheel setting in the book. Eventually they began changing the codebooks daily, forcing Bletchley to start all over with cracking the wheel settings for the codes each morning. Intriguingly, the cryptanalysts also discerned the function of Tunny, based on the location of the sending and receiving stations. It connected the nerve centers of the German High Command to army and army group commanders at the various European war fronts, from occupied France to the Russian steppes. It was a seductive problem: to break into Tunny promised direct access to the intentions and capabilities of the enemy at the highest level. Then, through a combination of German operator error, cleverness, and sheer bloody-minded determination, a young mathematician named William Tutte went well beyond these basic inferences about Tunny. Without having ever seen the machine itself, he determined its complete internal structure. He deduced the possible positions of each wheel (a different number for each, all mutually coprime), and exactly how the arrangement of wheels generated the key. 
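Because modulo-2 addition is its own inverse, the same operation that enciphers a message also deciphers it: adding the identical key stream a second time cancels it out. The following minimal Python sketch illustrates only that arithmetic; the 0s and 1s stand in for dots and crosses, the Baudot alphabet and the wheel mechanism that generated the real key are omitted, and the function name is invented for the example.

def add_mod2(stream_a, stream_b):
    # Combine two equal-length bit streams with modulo-2 addition (XOR):
    # 0 + 0 = 0, 0 + 1 = 1, 1 + 1 = 0.
    return [a ^ b for a, b in zip(stream_a, stream_b)]

message = [1, 0, 1, 1, 0]   # one five-bit character of message text (1 = cross, 0 = dot)
key     = [0, 1, 1, 0, 1]   # the key symbol produced by the wheels

cipher    = add_mod2(message, key)   # the cipher text that went out over the air
recovered = add_mod2(cipher, key)    # the receiving Tunny adds the same key again

assert recovered == message          # adding the key twice restores the original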
Armed with this information, Bletchley built replica Tunnys that could be used to decode a message – once they worked out the proper wheel setting. The 12 key wheels of the Lorenz SZ cipher machine, known as “Tunny” to the British Heath Robinson By the end of 1942, Tutte had continued his attack on Tunny, by devising a strategy for doing just that. It was based on the concept of a delta: the modulo-2 sum between one signal (dot or cross, 0 or 1) in a message stream and the next. He realized that due to a stutter in the motion of the wheels of the Tunny, there was a correlation between the delta of the cipher text and the delta of the key text2: they would tend to change together. So if you could compare cipher text against the key text generated at various wheel settings, you could compute the delta of each and count the number of matches. A match rate significantly higher than 50% would indicate a potential candidate for the real key of the message. It was a nice idea in theory, but was impossible to carry out in practice, requiring 2,400 passes through each message to test the various possible settings. Tutte brought his problem to another mathematician, Max Newman, who oversaw a section at Bletchley, known simply as the Newmanry. Newman was at first glance an improbable figure to lead a sensitive British intelligence organization, given that his father was German-born. However he probably seemed a rather unlikely spy for Hitler, given that his family was also Jewish. So alarmed was he at the progress of Hitler’s domination of Europe that he evacuated his family to safety in New York shortly after the collapse of France in 1940, and for a time considered moving to Princeton himself. Max Newman It just so happened that Newman had an idea for tackling the calculations required by Tutte’s method – by building a machine. Using machines for cryptanalysis was nothing new to Bletchley. That was, after all, how Enigma was being cracked. But Newman had a particular, electronic device in mind for the Tunny machine. He had taught at Cambridge before the war (Alan Turing had been a student of his), and knew about the electronic counters Charles Eryl Wynn-Williams had built for counting particles at the Cavendish. Here was Newman’s idea: If one could synchronize two loops of tape, spinning at high speed – one with the key and one with the enciphered message – and read each element into a processing unit that computed the deltas, an electronic counter could accumulate the results. By reading off the final count at the end of each run, one could decide if the key was a promising one or not. It so happened that a group of engineers with suitable expertise was readily available. Among them, Wynn-Williams himself. Turing had recruited Wynn-Williams from the radar lab in Malvern, to help build a new rotor for the Enigma decoding machine that would use electronics to register the rotations. Assisting him with that and another related Enigma project were three engineers from the Post Office Research Station at Dollis Hill: William Chandler, Sidney Broadhurst, and Tommy Flowers. (Recall that the British Post Office was a high-tech operation, responsible for telegraphy and telephony, as well as paper mail). Both projects had gone bust and now the men were at loose ends. Newman scooped them up. He assigned Flowers to lead the team building the “combining unit”, which would compute the deltas and pass the result on to the counter, which was the responsibility of Wynn-Williams. 
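To make the counting scheme concrete, here is a small illustrative Python sketch, not a reconstruction of the actual Bletchley procedure. It forms the delta of each stream (the modulo-2 sum of every symbol with the one that follows), counts how often the delta of the cipher text agrees with the delta of a candidate key stream, and keeps any setting whose agreement rate clears a threshold comfortably above 50%, roughly the decision that the counter readout on the Robinson, and later Colossus, was meant to support. The function names, the dictionary of candidate keys, and the 0.55 threshold are assumptions made for the example.

def delta(stream):
    # Modulo-2 sum (XOR) of each bit with the bit that follows it.
    return [a ^ b for a, b in zip(stream, stream[1:])]

def match_score(cipher_bits, key_bits):
    # Fraction of positions where the delta of the cipher text agrees
    # with the delta of the candidate key stream.
    d_cipher = delta(cipher_bits)
    d_key = delta(key_bits)
    matches = sum(1 for c, k in zip(d_cipher, d_key) if c == k)
    return matches / len(d_cipher)

def promising_settings(cipher_bits, candidate_keys, threshold=0.55):
    # candidate_keys maps a wheel-setting label to the key stream that
    # setting would generate; keep the settings whose score clears the bar.
    return [label for label, key_bits in candidate_keys.items()
            if match_score(cipher_bits, key_bits) > threshold]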
Along with the engineers to build the machines, Newman tasked the Women’s Royal Naval Service, or Wrens, with operating them. While the government trusted only men with high-level, executive positions, women performed much of the operational work at Bletchley, from message transcription to setting up decoding runs. They naturally transitioned from performing clerical work themselves to the tending of the machines that automated that work. The Wrens facetiously dubbed their charge “Heath Robinson”, the British equivalent to Rube Goldberg. The Heath Robinson’s very similar successor, Old Robinson Indeed, the Heath Robinson, though sound in concept, suffered from serious practical problems. Most notably, the need to keep the two tapes – cipher text and key text – in perfect sync. Any stretching or slippage in either tape would ruin an entire run. In order to minimize the risk of error, the machine processed no more than 2,000 characters per second, though the belts could have been run faster. Flowers, though he reluctantly went along with the Heath Robinson project, believed there was a better way: a machine built almost entirely from electronic components. Colossus Thomas Flowers had served as an engineer in the Post Office research division since 1930, where he was initially set the problem of investigating mis-dialed and failed connections in the new automatic exchanges. This led him to ruminate on how to build an altogether better telephone system, and by 1935 he had become a missionary for replacing the electro-mechanical components of that system, such as relays, with electronics. This cause would define his entire career thereafter. Tommy Flowers, ca. 1940s Most engineers dismissed electronic components as too balky and unreliable for use on a large scale, but Flowers showed that if kept running continuously and at well below their rated power, tubes actually had remarkable longevity. He proved out his ideas by replacing all the terminals for establishing connection tones in a 1,000-line switch with tubes; three or four thousand in total. This installation went into live service in 1939. In the same period, he also experimented with replacing the relay-based registers for storing telephone numbers with electronic switches. Flowers believed that the Heath Robinson that he had been recruited to help build was seriously flawed, and that he could do much better by using many more tubes and fewer mechanical parts. In February 1943, he brought his alternative design to Newman. In an imaginative leap, Flowers dispensed with the key text tape altogether, completely eliminating the synchronization problem. Instead his machine would generate the key text on the fly. It would simulate a Tunny electronically, iterating through wheel settings and trying each against the cipher text and recording possible matches. He estimated that this approach would entail the use of about 1,500 tubes. Newman, and the rest of the Bletchley leadership, cast a skeptical eye on this proposal. Like most of Flowers’ contemporaries, they doubted whether electronics could be made to work on such a scale. They further doubted whether, even were it made to work, such a machine could be built in time to be useful to the war effort. 
Flowers’ boss at Dollis Hill nonetheless authorized him to assemble a team to build his electronic monster – though Flowers may have given a misleading impression of just how much backing he had from Bletchley.3 In addition to Flowers, Sidney Broadhurst and William Chandler played a major part in the design work, and the effort as a whole required fully fifty people, half of Dollis Hill’s resources. The team drew on precedents from telephone machinery: counters, branching logic, equipment for routing and translating signals, and “routiners” for putting equipment through a series of pre-programmed tests. Broadhurst was a master of these electro-mechanical circuits, while Flowers and Chandler were the electronics experts, who understood how to translate concepts from the world of relays into the world of valves. By early 1944, the team had delivered a working model to Bletchley.4 The giant machine acquired the code name Colossus, and quickly proved that it could outshine the Heath Robinson, reliably processing 5,000 characters per second. Newman and the rest of the command chain at Bletchley were not slow to realize their earlier mistake in dismissing Flowers’ ambitions. In February 1944, they requested twelve more Colossoi to be in operation by June 1 – the intended date for the invasion of France, though of course Flowers knew nothing of that. Flowers flatly declared this to be impossible, but with heroic efforts his team managed to deliver a second machine by May 31, with a new team member, Allan Coombs, making many of the design improvements. This revised design, known as the Mark II, expanded on the success of the first. In addition to the tape feed, it consisted of 2,400 tubes, 12 rotary switches, 800 relays and an electric typewriter. A Colossus Mark II It was configurable and flexible enough to perform a variety of tasks within its milieu. After installation, each team of Wrens customized their Colossus to suit the particular problems they needed to solve. A plugboard, modeled on a telephone operator’s patch panel, established the settings for the electronic rings which simulated the wheels of the Tunny machine. More broadly, a series of switches allowed operators to set up any number of different function units to compute over the two data streams: the external tape and the internal signals generated by the rings. By combining a variety of different logic gates, Colossus could compute arbitrary Boolean functions on that data, i.e. functions that output a 0 or 1. Every output of 1 incremented Colossus’ counter. A separate control unit made branching decisions based on the state of the counter, e.g. stop and print the output if the counter is greater than 1000. The switch panel for configuring a Colossus Let us not imagine, however, that Colossus was a programmable, general-purpose computer in the modern sense. It could logically combine two data streams – one on tape, one generated from ring counters – and count the number of 1s encountered, and that was all. Much of the “programming” of Colossus was actually carried out on paper, with operators executing decision trees prepared by analysts; e.g. “if the output was less than X, set up configuration B and do Y, otherwise do Z”.5 A high-level block diagram of Colossus Nonetheless, for the task it was asked to do, Colossus was quite capable. Unlike the Atanasoff-Berry Computer, Colossus was extraordinarily fast – able to process 25,000 characters per second, each of which might require several Boolean operations. 
(The Mark II quintupled the speed of the Mark I by simultaneously reading and processing five different sections of tape.) It avoided coupling the entire system to slow electro-mechanical input and output devices by using photoelectric cells (derived from anti-aircraft proximity fuses) to read the input tapes and using a register to buffer output to the teletypewriter. The leader of a team that rebuilt Colossus in the 1990s showed that, in its wheelhouse, it would still easily outperform a 1995-era Pentium processor.6 This powerful text-processing engine became the centerpiece of the Tunny code-breaking effort.7 Another ten Mark IIs were built before the end of the war, the panels churned out one per month by workers at the Post Office factory in Birmingham who had no idea what they were building, then assembled at Bletchley. One exasperated official at the Ministry of Supply, on receiving yet another request for thousands of specialized valves, wondered whether the Post Office folks were “shooting them at the Jerries.”8 Not until well into the 1950s would another electronic computer be produced in this fashion: as an industrial product, rather than a one-off research project. At Flowers’ instruction, in order to preserve the valves, each Colossus remained on night and day, until the end of the war. Glowing quietly in the dark, warming the damp British winter, they waited patiently for instructions, until the day came when they were no longer needed. A Curtain of Silence An understandable enthusiasm for the intriguing drama of Bletchley has sometimes led to a wanton exaggeration of its military achievements. To imply, as does the movie The Imitation Game, that British civilization would have been extinguished if not for Alan Turing, is a monstrous absurdity. Colossus, specifically, seems to have had no real effect on the course of the struggle for Europe. Its most widely touted achievement was to prove that the deception plan around the 1944 Normandy landing had worked. The Tunny traffic revealed that the allies had succeeded in convincing Hitler and his High Command that the true blow would land farther east, at the Pas de Calais. Reassuring information, but reducing the cortisol levels of allied commanders probably did not help to win the war. The technological achievement represented by Colossus itself, on the other hand, is incontrovertible. But the world would not soon learn of it. Churchill ordered all the Colossoi in existence at the end of the war to be dismantled, and they took the secret of their construction with them to the scrapyard. Two machines somehow survived this death sentence, remaining in use within the British intelligence apparatus until about 1960.9 Still the British government did not lift the curtain of silence around the activities at Bletchley. Not until the 1970s would its existence become public knowledge. The decision to suppress all discussion of the activities at Bletchley Park indefinitely was, on the part of the British government, a mild excess of caution. But it was a personal tragedy for Flowers. Denied all the honors and prestige that would be due to the inventor of the Colossus, he suffered frustration and disappointment, stymied repeatedly in his ongoing effort to replace relays with electronics in the British telephone system. Had he been able to pull the historic achievement of Colossus from his back pocket, he might well have had the influence necessary to carry his vision forward. 
By the time his achievements became known in full, Flowers had long since retired, and could no longer influence anything. The few, scattered enthusiasts for electronic computing might have suffered a similar setback from the secrecy around Colossus, lacking any evidence to prove its viability to the skeptics. Electro-mechanical computing might have continued to dominate for some time. But there was, in fact, another project that would pave the way for electronic computing’s rise to dominance. Though also the result of a secret military effort, it was not kept hidden after the war, but instead revealed to the world with great fanfare, as ENIAC.
Further Reading
Jack Copeland, ed., Colossus: The Secrets of Bletchley Park’s Codebreaking Computers (2006)
Thomas H. Flowers, “The Design of Colossus,” Annals of the History of Computing, July 1983
Andrew Hodges, Alan Turing: The Enigma (1983)
