The Electronic Computers, Part 4: The Electronic Revolution

We have now recounted, in succession, each of the first three attempts to build a digital, electronic computer: the Atanasoff-Berry Computer (ABC) conceived by John Atanasoff, the British Colossus project headed by Tommy Flowers, and the ENIAC built at the University of Pennsylvania’s Moore School. All three projects were effectively independent creations. Though John Mauchly, the motive force behind ENIAC, knew of Atanasoff’s work, the design of the ENIAC owed nothing to the ABC. If there was any single seminal electronic computing device, it was the humble Wynn-Williams counter, the first device to use vacuum tubes for digital storage, which helped set Atanasoff, Flowers, and Mauchly alike onto the path to electronic computing.

Only one of these three machines, however, played a role in what was to come next. The ABC never did useful work, and was largely forgotten by the few who ever knew of it. The two war machines both proved themselves able to outperform any other computer in raw speed, but the Colossus remained a secret even after the defeat of Germany and Japan. Only ENIAC became public knowledge, and so became the standard bearer for electronic computing as a whole. Now anyone who wished to build a computing engine from vacuum tubes could point to the Moore School’s triumph to justify themselves. The ingrained skepticism from the engineering establishment that greeted all such projects prior to 1945 had now vanished; the skeptics either changed their tune or held their tongue.

The EDVAC Report

A document issued in 1945, based on lessons learned from the ENIAC project, set the tone for the direction of computing in the post-war world. Called “First Draft of a Report on the EDVAC,”1 it provided the template for the architecture of the first computers that were programmable in the modern sense – that is to say, they executed a list of commands drawn from a high-speed memory. Although the exact provenance of its ideas was, and shall remain, disputed, it appeared under the name of the mathematician John (János) von Neumann. As befit the mind of a mathematician, it also presented the first attempt to abstract the design of a computer from the specifications for a particular machine; it attempted to distill an essential structure from its various possible accidental forms. 

Von Neumann, born in Hungary, came to ENIAC by way of Princeton, New Jersey, and Los Alamos, New Mexico. In 1929, as an accomplished young mathematician with notable contributions to set theory, quantum mechanics, and the theory of games, he left Europe to take a position at Princeton University. Four years later, the nearby Institute for Advanced Study (IAS) offered him a lifetime faculty post. With Nazism on the rise, von Neumann happily accepted the chance to remain indefinitely on the far side of the Atlantic – becoming, ex post facto, among the first Jewish intellectual refugees from Hitler’s Europe. After the war, he lamented that “I feel the opposite of a nostalgia for Europe, because every corner I knew reminds me of… a world which is gone, and the ruins of which are no solace,” remembering his “total disillusionment in human decency between 1933 and September 1938.”2

Alienated from the lost cosmopolitan Europe of his youth, von Neumann threw his intellect behind the military might of his adoptive home. For the next five years he criss-crossed the country incessantly to provide advice and consultation on a wide variety of weapons projects, while somehow also managing to co-author a seminal book on game theory. The most secret and momentous of his consulting positions was for the Manhattan Project – the effort to build an atomic weapon – whose research team resided at Los Alamos, New Mexico. Robert Oppenheimer recruited him in the summer of 1943 to help the project with mathematical modeling, and his calculations convinced the rest of the group to push forward with an implosion bomb, which would achieve a sustained chain reaction by using explosives to drive the fissile material inward, increasing its density. This, in turn, implied massive amounts of calculation to work out how to achieve a perfectly spherical implosion with the correct amount of pressure – any error would cause the chain reaction to falter and the bomb to fizzle.

Von Neumann during his time at Los Alamos

Los Alamos had a group of twenty human computers with desk calculators, but they could not keep up with the computational load. The scientists provided them with IBM punched-card equipment, but still they could not keep up. They demanded still better equipment from IBM, and got it in 1944, yet still they could not keep up.

By this time, von Neumann had added yet another set of stops to his constant circuit of the country: scouring every possible site for computing equipment that might be of use to Los Alamos. He wrote to Warren Weaver, head of Applied Mathematics for the National Defense Research Committee (NDRC), and received several good leads. He went to Harvard to see the Mark I, but found it already fully booked with Navy work. He spoke to George Stibitz and looked into ordering a Bell relay computer for Los Alamos, but gave up after learning how long it would take to deliver it. He visited a group at Columbia University that had linked multiple IBM machines into a larger automated system, under the direction of Wallace Eckert (no relation to Presper), yet this seemed to offer no major improvement on the IBM setup that Los Alamos already had available.

Weaver had, however, omitted one project from the list he gave to von Neumann: ENIAC. He certainly knew of it: in his capacity as director of the Applied Mathematics Panel, it was his business to monitor the progress of all computing projects in the country. Weaver and the NDRC certainly had doubts about the feasibility and timeline for ENIAC, yet it is rather shocking that he did not even mention its existence.

Whatever the reason for the omission, because of it von Neumann only learned about ENIAC due to a chance encounter on a train platform. The story comes from Herman Goldstine, the liaison from the Aberdeen Proving Ground to the Moore School, where ENIAC was under construction. Goldstine bumped into von Neumann at the Aberdeen railway station in June 1944 – von Neumann was leaving another of his consulting gigs, as a member of the Scientific Advisory Committee to Aberdeen’s Ballistic Research Laboratory (BRL). Goldstine knew the great man by reputation, and struck up a conversation. Eager to impress, he couldn’t help mentioning the exciting new project he had underway up in Philadelphia. Von Neumann’s attitude transformed instantly from congenial colleague to steely-eyed examiner, as he grilled Goldstine on the details of his computer. He had found an intriguing new source of potential computer power for Los Alamos.

Von Neumann first visited Presper Eckert, John Mauchly and the rest of the ENIAC team in September 1944. He immediately became enamored of the project, and added yet another consulting gig to his very full plate. Both parties had much to gain. It is easy to see how the promise of electronic computing speeds would have captivated von Neumann. ENIAC, or a machine like it, might burst all the computational limits that fettered the progress of the Manhattan Project, and so many other projects or potential projects.3 For the Moore School team, the blessing of the renowned von Neumann meant an end to all their credibility problems. Moreover, given his keen mind and extensive cross-country research, he could match anyone in the breadth and depth of his insight into automatic computing.

It was thus that von Neumann became involved in Eckert and Mauchly’s plan to build a successor to ENIAC. Along with Herman Goldstine and another ENIAC mathematician, Arthur Burks, they began to sketch the parameters for a second generation electronic computer, and it was the ideas of this group that von Neumann summarized in the “First Draft” report. The new machine would be more powerful, more streamlined in design, and above all would solve the biggest hindrance to the use of ENIAC – the many hours required to configure it for a new problem, during which that supremely powerful, extraordinarily expensive computing machine sat idle and impotent. The designers of recent electro-mechanical machines such as the Harvard Mark I and Bell relay computers had avoided this fate for their machines by providing the computer with instructions via punched holes in a loop of paper tape, which an operator could prepare while the computer solved some other problem. But taking input in this way would waste the speed advantage of electronics: no paper tape feed could provide instructions as fast as ENIAC’s tubes could consume them.4

The solution outlined in the “First Draft” was to move the storage of instructions from the “outside recording medium of the device” into its “memory” – the first time this word had appeared in relation to computer storage.5 This idea was later dubbed the “stored-program” concept. But it immediately led to another difficulty, the same that stymied Atanasoff in designing the ABC – vacuum tubes are expensive. The “First Draft” estimated that a computer capable of supporting a wide variety of computational tasks would need roughly 250,000 binary digits of memory for instructions and short-term data storage. A vacuum-tube memory of that size would cost millions of dollars, and would be terribly unreliable to boot.

The resolution to the dilemma came from Eckert, who had worked on radar research in the early 1940s, as part of a contract between the Moore School and the “Rad Lab” at MIT, the primary center of radar research in the U.S. Specifically, Eckert worked on a radar system known as the Moving Target Indicator (MTI), which addressed the problem of “ground clutter”: all the noise on the radar display from buildings, hills, and other stationary objects that made it hard for the operator to discern the important information – the size, location, and velocity of moving formations of aircraft.

The MTI solved the clutter problem using an instrument called an acoustic delay line. It transformed the electrical radar pulse into a sound wave, and then sent that wave through a tube of mercury,6 such that the sound arrived at the other end and was transformed back into an electrical pulse just as the radar was sweeping the same point in the sky. Any signal arriving from the radar at the same time as from the mercury line was presumed to be a stationary object, and was cancelled.

Eckert realized that the pulses of sound in the delay line could be treated as binary digits – with a sound representing 1, and its absence a 0. A single tube of mercury could hold hundreds of such digits, each passing through the line several times per millisecond, meaning that the computer need only wait a couple hundred microseconds to access a particular digit. It could access a sequential series of digits in the same tube much faster still, with each digit spaced out only a handful of microseconds apart.
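The arithmetic behind these access times can be sketched in a few lines of Python. All of the specific figures below (tube length, pulse rate, even the rounded speed of sound in mercury) are illustrative assumptions, not numbers from the EDVAC design; the point is only how capacity and latency fall out of the circulation time.

```python
# Back-of-the-envelope model of a mercury delay line memory.
# All figures are illustrative assumptions, not EDVAC specifications.

SOUND_SPEED_MERCURY = 1450.0  # m/s, approximate speed of sound in mercury
TUBE_LENGTH = 0.5             # meters (assumed)
PULSE_RATE = 500_000          # pulses per second (assumed clock rate)

# Time for one pulse to travel the length of the tube before recirculating:
circulation_s = TUBE_LENGTH / SOUND_SPEED_MERCURY

# Digits "in flight" inside the tube at any moment:
capacity_bits = int(circulation_s * PULSE_RATE)

# Worst-case random access: the digit just left the output, so the machine
# must wait one full circulation; on average it waits half of that.
worst_wait_us = circulation_s * 1e6
mean_wait_us = worst_wait_us / 2

# Consecutive digits in the same tube arrive one clock period apart:
spacing_us = 1e6 / PULSE_RATE

print(f"{capacity_bits} bits per tube")
print(f"random access: up to {worst_wait_us:.0f} us, {mean_wait_us:.0f} us on average")
print(f"sequential digits: {spacing_us:.0f} us apart")
```

With these assumed figures the tube recirculates about three times per millisecond, holds on the order of 170 digits, and serves a random access in a few hundred microseconds – the same rough orders of magnitude described above.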

Mercury delay lines for the British EDSAC computer

With the basic problems of how the machine would be structured resolved, von Neumann collected the group’s ideas in the 101-page “First Draft” report in the spring of 1945, and circulated it among the key stakeholders in the second-generation EDVAC project. Before long, though, it found its way into other hands. The mathematician Leslie Comrie, for instance, took a copy back to Britain after his visit to the Moore School in 1946, and shared it with colleagues. The spread of the report fostered resentment on the part of Eckert and Mauchly for two reasons: first, the bulk of the credit for the design flowed to the sole author on the draft: von Neumann7. Second, all the core ideas contained in the design were now effectively published, from the point of view of the patent office, undermining their plans to commercialize the electronic computer.

The very grounds for Eckert and Mauchly’s umbrage, in turn, raised the hackles of the mathematicians: von Neumann, Goldstine, and Burks. To them, the report was important new knowledge that ought to have been disseminated as widely as possible in the spirit of academic discourse. Moreover, the government, and thus the American taxpayer, had funded the whole endeavor in the first place. The sheer venality of Eckert and Mauchly’s schemes to profit from the war effort irked them. Von Neumann wrote, “I would never have undertaken my consulting work at the University had I realized that I was essentially giving consulting services to a commercial group.”8

Each faction went its separate ways in 1946: Eckert and Mauchly set up their own computer company, on the basis of a seemingly more secure patent on the ENIAC technology. They at first called their enterprise the Electronic Control Company, but renamed it the following year to Eckert-Mauchly Computer Corporation. Von Neumann returned to the Institute for Advanced Study (IAS) to build an EDVAC-style computer there, and Goldstine and Burks joined him. To prevent a recurrence of the debacle with Eckert and Mauchly, they ensured that all the intellectual products of this new project would become public property.

Von Neumann in front of the IAS computer, completed in 1951.

An Aside on Alan Turing

Among those who got their hands on the EDVAC report through side channels was the British mathematician Alan Turing. Turing does not figure among the first to build or design an automatic computer, electronic or otherwise, and some authors have rather exaggerated his place in the history of computing machines.9 But we must credit him as among the first to imagine that a computer could do more than merely “compute” in the sense of processing large batches of numbers. His key insight was that all the kinds of information manipulated by human minds could be rendered as numbers, and so any intellectual process could be transformed into a computation.

Alan Turing in 1951

In late 1945, Turing published his own report, citing von Neumann’s, on a “Proposed Electronic Calculator” for Britain’s National Physical Laboratory (NPL). It delved far deeper than the “First Draft” into the details of how his proposed electronic computer would actually be built. The design reflected the mind of a logician. It would have no special hardware for higher-level functions which could be composed from lower-level primitives; that would be an ugly wart on the machine’s symmetry. Likewise Turing did not set aside any linear area of memory for the computer’s program: data and instructions could live intermingled in memory, for they were all simply numbers. An instruction only became an instruction when interpreted as such.10 Because Turing knew that numbers could represent any form of well-specified information, the list of problems he proposed for his calculator included not just the construction of artillery tables and the solution of simultaneous linear equations, but also the solving of a jig-saw puzzle or a chess endgame.
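The idea that an instruction only becomes an instruction when interpreted as such can be made concrete with a toy stored-program machine. The encoding below (opcode × 100 + address, packed into an ordinary integer) is entirely invented for this illustration and has nothing to do with Turing’s actual ACE instruction format; it merely shows program and data sharing one memory of plain numbers, distinguished only by how the machine happens to treat them.

```python
# A toy stored-program machine. The encoding (opcode*100 + address) is
# invented for this illustration; it does not reflect any real design.

LOAD, ADD, STORE, HALT = 1, 2, 3, 0

memory = [0] * 16
# The program occupies cells 0-3; it is just numbers like everything else.
memory[0] = LOAD * 100 + 8     # acc = memory[8]
memory[1] = ADD * 100 + 9      # acc += memory[9]
memory[2] = STORE * 100 + 10   # memory[10] = acc
memory[3] = HALT * 100         # stop
memory[8], memory[9] = 34, 8   # data, intermingled in the same memory

pc, acc = 0, 0
while True:
    word = memory[pc]                 # fetch a number...
    opcode, addr = divmod(word, 100)  # ...and interpret it as an instruction
    pc += 1
    if opcode == LOAD:
        acc = memory[addr]
    elif opcode == ADD:
        acc += memory[addr]
    elif opcode == STORE:
        memory[addr] = acc
    else:                             # HALT
        break

print(memory[10])  # prints 42
```

Nothing but the program counter distinguishes cells 0–3 from cells 8–10; if pointed at cell 8, this machine would cheerfully decode the data value 34 as an instruction too.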

Turing’s Automatic Computing Engine (ACE) was never built as originally proposed. Slow to get moving, it had to compete with other, more vigorous, British computing projects for the best talent. The project struggled on for several years before Turing lost interest. NPL completed a smaller machine with a somewhat different design, known as the Pilot ACE, in 1950, and several other early-1950s computers drew inspiration from the ACE architecture. But the design had no wider influence and faded quickly into obscurity.

None of this is to belittle Turing or his accomplishments, only to place them in the proper context. His importance to the history of computing derives not from his influence on the design of 1950s computers, but rather from the theoretical ground he prepared for the field of academic computer science, which emerged in the 1960s. His early papers in mathematical logic, which surveyed the boundaries between that which is computable and that which is not, became the fundamental texts of this new discipline.

The Slow Revolution

As news about ENIAC and the EDVAC report spread, the Moore School became a site of pilgrimage. Numerous visitors came to learn at the foot of the evident masters, especially from within the U.S. and Britain. In order to bring order to this stream of petitioners, the dean of the school organized an invitation-only summer school on automatic computing in 1946. The lecturers included such luminaries as Eckert, Mauchly, von Neumann, Burks, Goldstine, and Howard Aiken (designer of the Harvard Mark I electromechanical computer).

Nearly everyone now wanted to build a machine on the template of the EDVAC report.11 The wide influence of ENIAC and EDVAC in the 1940s and 50s evinced itself in the very names that teams from around the world bestowed on their new computers. Even if we set aside UNIVAC and BINAC (built by Eckert and Mauchly’s new company) and EDVAC itself (finished by the Moore School after being orphaned by its parents), we still find AVIDAC, CSIRAC, EDSAC, FLAC, ILLIAC, JOHNNIAC, ORDVAC, SEAC, SILLIAC, SWAC, and WEIZAC. Many of these machines directly copied the freely published IAS design (with minor modifications), benefiting from von Neumann’s open policy on intellectual property.

Yet the electronic revolution unfolded gradually, overturning the existing order piece by piece. Not until 1948 did a single EDVAC-style machine come to life, and that only a tiny proof-of-concept, the Manchester “baby,” designed to prove out its new Williams tube memory system.12 In 1949, four more substantial machines followed: the full-scale Manchester Mark I, the EDSAC at Cambridge University, the CSIRAC in Sydney, Australia, and the American BINAC – though the last evidently never worked properly. A steady trickle of computers continued to appear over the next five years.13

Some writers have portrayed the ENIAC as drawing a curtain over the past and instantly ushering in an era of electronic computing. This has required painful-looking contortions in the face of the evidence. “The appearance of the all-electronic ENIAC made the Mark I obsolete almost immediately (although capable of performing successfully for fifteen years afterward),” wrote Katherine Fishman.14 Such a statement is so obviously self-contradictory one must imagine that Ms. Fishman’s left hand did not know what her right was doing. One might excuse this as the jottings of a mere journalist. Yet we can also find a pair of proper historians, again choosing the Mark I as their whipping boy, writing that “[n]ot only was the Harvard Mark I a technological dead end, it did not even do anything very useful in the fifteen years that it ran. It was used in a number of applications for the navy, and here the machine was sufficiently useful that the navy commissioned additional computing machines from Aiken’s laboratory.”15 Again the contradiction stares, nearly slaps, one in the face.

In truth, relay computers had their merits, and continued to operate alongside their electronic cousins. Indeed, several new electro-mechanical computers were built after World War II, even into the early 1950s, in the case of Japan. Relay machines were easier to design, build, and maintain, and did not require huge amounts of electricity and climate control (to dissipate the vast amount of heat put out by thousands of vacuum tubes). ENIAC used 150 kilowatts of electricity, 20 for its cooling system alone.16

The American military continued to be a major customer for computing power, and did not disdain “obsolete” electromechanical models. In the late 1940s, the Army had four relay computers and the Navy five. Aberdeen’s Ballistics Research Laboratory held the largest concentration of computing power in the world, operating ENIAC alongside Bell and IBM relay calculators and the old differential analyzer. A September 1949 report found that each had its place: ENIAC worked best on long but simple calculations; the Bell Model V calculators served best for complex calculations, due to their effectively unlimited tape of instructions and their ability to handle floating point; while the IBM machines could process very large amounts of data stored in punched cards. Meanwhile certain operations such as cube roots were still easiest to solve by hand (with a combination of table look-ups and desk calculators), saving machine time.17

Rather than the birth of ENIAC in 1945, 1954 is a better year to mark the completion of the electronic revolution in computing: the year that the IBM 650 and 704 computers appeared. Though not the first commercial electronic computers, they were the first to be produced in the hundreds,18 and they established IBM’s dominance over the computer industry, a dominance that lasted for thirty years. In Kuhnian19 terms, electronic computing was no longer the strange anomaly of 1940, existing only in the dreams of outsiders like Atanasoff and Mauchly; it had become normal science.

One of many IBM 650 machines, this one at Texas A&M University. Its magnetic drum memory (visible at bottom) made it relatively slow but also relatively inexpensive.

Leaving the Nest

By the mid-1950s, the design and construction of digital computing equipment had come unmoored from its origins in switches or amplifiers for analog systems. The computer designs of the 1930s and early 1940s drew heavily on ideas borrowed from physics and radar labs, and especially from telecommunications engineers and research departments. Now computing was becoming its own domain, and specialists in that domain developed their own ideas, vocabulary, and tools to solve their own problems.

The computer in the modern sense had emerged, and our story of the switch thus draws near its close. But the world of telecommunications had one last, supreme surprise up its sleeve. The tube had bested the relay in speed by having no moving parts. The final switch of our story did one better by having no internal parts at all. An innocuous-looking lump of matter sprouting a few wires, it came from a new branch of electronics known as “solid-state.”

For all their speed, vacuum tubes remained expensive, bulky, hot, and not terribly reliable. They could not have ever powered, say, a laptop. Von Neumann wrote in 1948 that “it is not likely that 10,000 (or perhaps a few times 10,000) switching organs will be exceeded as long as the present techniques and philosophy are employed.”20 The solid-state switch made it possible for computers to surpass this limit again and again, many times over; made it possible for computers to reach small businesses, schools, homes, appliances, and pockets; made possible the creation of the digital land of Faerie that now permeates our existence. To find its origins we must rewind the clock some fifty years, and go back to the exciting early days of the wireless.

Further Reading

David Anderson, “Was the Manchester Baby conceived at Bletchley Park?”, British Computer Society (June 4th, 2004)

William Aspray, John von Neumann and the Origins of Modern Computing (1990)

Martin Campbell-Kelly and William Aspray, Computer: A History of the Information Machine (1996)

Thomas Haigh et al., ENIAC in Action (2016)

John von Neumann, “First Draft of a Report on EDVAC” (1945)

Alan Turing, “Proposed Electronic Calculator” (1945)

 

The Hobby Computer Culture

[This post is part of “A Bicycle for the Mind.” The complete series can be found here.]

From 1975 through early 1977, the use of personal computers remained almost exclusively the province of hobbyists who loved to play with computers and found them inherently fascinating. When BYTE magazine came out with its premier issue in 1975, the cover called computers “the world’s greatest toy.” When Bill Gates wrote about the value of good software in the spring of 1976, he framed his argument in terms of making the computer interesting, not useful: “…software makes the difference between a computer being a fascinating educational tool for years and being an exciting enigma for a few months and then gathering dust in the closet.”[1]

Even as late as 1978, an informed observer could still consider interest in personal computers to be exclusive to a self-limiting community of hobbyists. Jim Warren, editor of Dr. Dobb’s Journal of Computer Calisthenics and Orthodontia, predicted a maximum market of one million home computers, expecting them to be somewhat more popular than ham radio, which attracted about 300,000.[2]

A survey conducted by BYTE magazine in late 1976 shows that these hobbyists were well-educated (72% had at least a bachelor’s degree), well-off (with a median annual income of $20,000, or $123,000 in 2025 dollars), and overwhelmingly (99%) male. Based on the letters and articles appearing in BYTE in that same bicentennial year of 1976, it is clear that what interested these hobbyists above all was the computers themselves: which one to buy, how to build it, how to program it, how to expand and accessorize it.[3] Discussion of practical software applications appeared infrequently. One intrepid soul went so far as to hypothesize a microcomputer-based accounting program, but he doesn’t seem to have actually written it. When mention of software appeared, it came most often in the form of games.
The few with more serious scientific and statistical work in mind for their home computer complained of the excessive discussion of “super space electronic hangman life-war pong.” Star Trek games were especially popular: in July, D.E. Hipps of Miami advertised a Star Trek BASIC game for sale for $10; in August, Glen Brickley of Florissant, Missouri wrote about demoing his “favorite version of Star Trek” for friends and neighbors; and in August, BYTE published, with pride, “the first version of Star Trek to be printed in full in BYTE” (though the author consistently misspelled “phasers” as “phasors”).

Most computer hobbyists were electronic hobbyists first, and the electronics hobby grew up side-by-side with modern science fiction, and shared its fascination with the possibilities of future technology. We can guess that this is what drew them to this rare piece of popular culture that took the future and the “what-ifs” it poses seriously, rather than treating it as a mere backdrop for adventure stories.[4]

The June 1976 issue of Interface is one of many examples of the hobbyists’ ongoing fascination with Star Trek.

Other than a shared interest in computers—and, apparently, Star Trek—three kinds of organizations brought these men together: local clubs, where they could share expertise in software and hardware and build a sense of belonging and community; magazines like BYTE, where they could learn about new products and get project ideas; and retail stores, where they could try out the latest models and shoot the shit with fellow enthusiasts. The computer hobbyists were also bound by a force more diffuse than any of these concrete social forms: a shared mythology of the origins of hobby computing that gave broader social and cultural meaning to their community.
The Clubs

The most famous computer club of all, of course, is the Homebrew Computer Club, headquartered in Silicon Valley, whose story is well documented in several excellent sources, especially Steven Levy’s book, Hackers. Its fame is well-deserved, for its role as the incubator of Apple Computer, if nothing else. But the focus of the historical literature on Homebrew as the computer club has tended to distort the image of American personal computing as a whole.

The Homebrew Computer Club had a distinctive political bent, due to the radical left leanings of many of its leading members, including co-founder Fred Moore. In 1959, Moore had gone on hunger strike against the Reserve Officers’ Training Corps (ROTC) program at Berkeley, which had been compulsory for all students since the nineteenth century. He later became a draft resister and published a tract against institutionalized learning, Skool Resistance. Yet the bulk of Homebrew’s membership stubbornly stuck to technical hobbyist concerns, despite Moore’s efforts to turn their attention to social causes such as aiding the disabled or protesting nuclear weapons. To the extent that personal computing had a politics, it was a politics of independence, not social justice.[5]

Cover of the second Homebrew Computer Club newsletter, with sketches of members. Only Fred Moore is labeled, but the man with glasses on the far right is likely Lee Felsenstein.

Moreover, excitement about personal computing was not at all a phenomenon confined to the Bay Area. By the summer of 1975, Altair shipments had begun in earnest, and clubs formed across the United States and beyond where enthusiasts could share information and ask for help with their new (or prospective) machines. The movement continued to grow as new companies sprang up and shipped more hobby machines.
Over the course of 1976, dozens of clubs advertised their existence or attempted to find a membership through classifieds in BYTE, from the Oregon Computer Club headquartered in Portland (with a membership of forty-nine), to a proposed club in Saint Petersburg, Florida, mooted by one Allen Swan. But, as one might expect, the largest and most successful clubs were concentrated in and around major metropolitan areas with a large pool of existing computer professionals, such as Los Angeles, Chicago, and New York City.[6]

The Amateur Computer Group of New Jersey convened for the first time in June 1975, under the presidency of Sol Libes. Libes, a professor at Union County College, was another of those computer lovers working on their own home computers for years before the arrival of the Altair, who then suddenly found themselves joined by hundreds of like-minded hobbyists once computing became somewhat more accessible. Libes’s club grew to 1,600 members by the early 1980s, had a newsletter and software library, sponsored the annual Trenton Computer Festival, and is likely the only organization from the hobby computer years other than Apple and Microsoft to still survive today.[7]

The Chicago Area Computer Hobbyist Exchange attracted several hundred members to its first meeting at Northwestern University in the summer of 1975. Like many of the larger clubs, they organized information exchange around “special interest groups” for each brand of computer (Digital Group, IMSAI, Altair, etc.). The club also gave birth to one of the most significant novel software applications to emerge from the personal computer hobby, the bulletin board system—we will have more to say on that later in this series.[8]

The most ambitious—one might say hubristic—of the clubs was the Southern California Computer Society (SCCS) of Los Angeles, founded in Don Tarbell’s apartment in June of 1975.
Within the year the club could boast of a glossy club magazine (in contrast to the cheap newsletters of most clubs) called Interface, plans to develop a public computer center, and—in answer to the challenge of Micro-Soft BASIC—ideas about distributing their own royalty-free program library, including “’branch’ repositories that would reproduce and distribute on a local basis.”[9] Not content with a regional purview, the leadership also encouraged the incorporation of far-flung club chapters into their organization; in that spirit, they changed their name in early 1977 to the International Computer Society. Several chapters opened in California, and more across the U.S., from Minnesota to Virginia, but interest in SCCS/ICS chapters could be found as far away as Mexico City, Japan, and New Zealand. Across all of these chapters, the group accumulated about 8,000 members.[10]

The whole project, however, ran atop a rickety foundation of amateur volunteer work, and fell apart under its own weight. First came the breakdown in the relationship between the club and the publisher of Interface, Bob Jones. Whether frustrated with the club’s failure to deliver articles to fill the magazine (his version), or greedy to make more money as a for-profit enterprise (the club’s version), Jones broke away to create Interface Age, leaving SCCS scrambling to start up its own replacement magazine. Expensive lawsuits flew in both directions. Then came the mismanagement of the club’s group buy program: intended to save members money by pooling their purchases into a large-scale order with volume discounts, it instead lost thousands of members’ dollars to a scammer: “a vendor,” as one wry commenter put it, “who never vended” (the malefactor traded under the moniker of “Colonel Winthrop”).[11] More lawsuits ensued.

The December 1976 issues of SCCS Interface and Interface Age. Which is authentic, and which the impostor?
Squeezed by money troubles, the club leadership raised dues to $15 annually, and sent out a plea for early renewal and prepayment of multiple years’ dues. The club magazine missed several issues in 1977, then ceased publication in September. The ICS sputtered on into 1978 (Gordon French of Processor Technology announced his candidacy for the club presidency in March), then disappeared from the historical record.[12]

Whatever the specific historical accidents that brought down SCCS, the general project—a grand non-profit network that would provide software, group buying programs, and other forms of support to its members—was doomed by larger historical forces. Though many clubs survived into the 1980s or beyond, they waned in significance with the maturing of commercial software and the turn of personal computer sellers away from hobbyists and towards the larger and more lucrative consumer and business markets. Newer computer products no longer required access to secret lore to figure out what to do with them, and most buyers expected to get any support they did need from a retailer or vendor, not to rely on mutual support networks of other buyers. One-to-one commercial relations between buyer and seller became more common than the many-to-many communal webs of the hobby era.

The Retailers

The first buyers of the Altair could not find it in any shop. Every transaction occurred via a check sent to MITS, sight unseen, in the hope of receiving a computer in exchange. This way of doing business suited the hardcore enthusiast just fine, but anyone with uncertainty about the product—whether they wanted a computer at all, which model was best, how much memory or other accessories they needed—was unlikely to bite. It had disadvantages for the manufacturer, too. Every transaction incurred overhead for payment processing and shipping, and demand was uncertain and unpredictable week to week and month to month.
Without any certainty about how many buyers would send in checks next month, the manufacturer had to scale up production carefully or risk overcommitting and going bust. Retail computer shops would alleviate the problems of both sides of the market. For buyers, they provided the opportunity to see, touch, and try out various computer models, and to get advice from knowledgeable salespeople. For sellers, they offered larger, more predictable orders, improving their cash flow and reducing the overhead of managing direct sales.

The very first computer shop appeared around the same time as the clubs began spreading, in the summer of 1975. But shops did not open in large numbers until 1976, after the hardcore enthusiasts had primed the pump for further sales to those who had seen or heard about the computers being purchased by their friends or co-workers. The earliest documented computer shop, Dick Heiser’s Computer Store, opened in July 1975 in a 1,000-square-foot storefront on Pico Boulevard in West Los Angeles. Heiser had attended the very first SCCS meeting in Don Tarbell’s apartment, and, seeing the level of excitement about the Altair, signed up to become the first licensed Altair dealer. Paul Terrell’s Byte Shop followed later in the year in Mountain View, California. In March of 1976, Stan Veit’s Computer Mart opened on Madison Avenue in New York City and Roy Borrill’s Data Domain in Bloomington, Indiana (home to Indiana University). Within a year, stores had sprouted across the United States like spring weeds: five hundred nationwide by July 1977.[13]

Paul Terrell’s Byte Shop at 1063 El Camino Real in Mountain View.

Ed Roberts tried to enforce an exclusive license on Altair dealers, based on the car dealership franchise model. But the industry was too fast-moving and MITS too cash- and capital-strapped to make this workable. Hungry new competitors, from IMSAI to Processor Technology, entered the market constantly with new-and-improved models.
Many buyers weren’t satisfied with only Altair offerings, MITS couldn’t supply dealers with enough stock to satisfy those who were, and it undercut even its few loyal dealers by continuing to offer direct sales in order to keep as much cash as possible flowing in. Even Dick Heiser, founder of the original Los Angeles Computer Store, broke ties with MITS in late 1977, unable to sustain an Altair-only partnership.[14]

Dick Heiser with a customer at The Computer Store in Los Angeles in 1977. Not only is the teen here playing a Star Trek game; a picture of the ubiquitous starship Enterprise can also be seen hanging in the background. [Photo by George Birch, from Benj Edwards, “Inside Computer Stores of the 1970s and 1980s,” July 13, 2022]

Given the number of competing computer makers, retailers ultimately had the stronger position in the relationship. Manufacturers who could satisfy the stores’ desire for reliable delivery of stock and robust service and customer support would thrive, while the others withered.[15] But independent dealers faced competition of their own. Chain stores could extract larger volume discounts from manufacturers and build up regional or even national brand recognition. Byte Shop, for example, expanded to fifty locations by March 1978. The most successful chain was ComputerLand, run by the same Bill Millard who had founded IMSAI. Though he later claimed everything was “clean and appropriate,” Millard clearly extracted money and employee time from the declining IMSAI in order to get his new enterprise off the ground. As the company’s chronicler put it, “There was magic in ComputerLand.
Started on just Millard’s $10,000 personal investment, losing $169,000 in its maiden year, the fledgling company required no venture capital or bank loans to get off the ground.” Some small dealers, such as Veit’s Computer Mart, responded by forming a confederacy of independent dealers under a shared front called “XYZ Corporation” that they could use to buy computers with volume discounts.[16]

A ComputerLand ad from the February 1978 issue of BYTE. Note that the store offers many of the services that most people could have found only in a club in 1975 or 1976: assistance with assembly, repair, and programming.

The Publishers

Just like manufacturers, retailers faced their own cash flow risks: outside the holiday season they might suffer long dry spells without many sales. The early retailers typically solved this by simply not carrying inventory: they took customer orders until they accumulated a batch of ten or so computers from the same manufacturer, then filled all of the orders at once. But a big boon for their cash flow woes came in the form of publications that sold for much less than a computer, but at a much higher and steadier volume, especially the rapidly growing array of computer magazines.[17]

BYTE was both the first of the national computer magazines and the most successful. Launched in New Hampshire in the late summer of 1975, by 1978 it had built up a circulation of 140,000 issues per month. It got a head start by cribbing thousands of addresses from the mailing lists of manufacturers such as Nat Wadsworth’s Connecticut-based SCELBI, one of the proto-companies of the pre-Altair era. But, like so much of the hobby computer culture, BYTE also had direct ancestry in the radio electronics hobby.[18] Conflict among the three principal actors has muddled the story of its origins.
Wayne Green, publisher of a radio hobby magazine called 73 in Peterborough, New Hampshire, started printing articles about computers in 1974, and found that they were wildly popular. Virginia Londner Green, his ex-wife, worked at the magazine as a business manager. Carl Helmers, a computer enthusiast in Cambridge, Massachusetts, authored and self-published a newsletter about home computers. One of the Greens learned of Helmers’ newsletter, and one or more of the three came up with the idea of combining Helmers’ computer expertise with the infrastructure and know-how of 73 to launch a professional-quality computer hobby magazine.[19]

The cover of BYTE‘s September 1976 0.01-centennial issue (i.e., one-year anniversary). The phrase “cyber-crud” and the image of a fist on the shirt of the man at center both come from Ted Nelson’s Computer Lib/Dream Machines. Also, these people really liked Star Trek.

Within months, for reasons that remain murky, Wayne Green found himself ousted by his ex-wife, who took over publishing of BYTE, with Helmers as editor. Embittered, Green launched a competing magazine, which he wanted to call Kilobyte, but was forced to change to Kilobaud. Thus began a brief period in which Peterborough, with a population of about 4,000, served as a global hub of computer magazine publishing.[20]

Another magazine, Personal Computing, spun off from MITS in Albuquerque. Dave Bunnell, hired as a technical writer, had become so fond of running the company newsletter Computer Notes that he decided to go into publishing on his own. On the West Coast, in addition to the aforementioned Interface Age, there was also Dr.
Dobb’s Journal of Computer Calisthenics and Orthodontia—conceived by Stanford lecturer Dennis Allison and computer evangelist Bob Albrecht (Dennis and Bob making “Dobb”), and edited by the hippie-ish Jim Warren, who drifted into computers after being fired from a position teaching math at a Catholic school for holding (widely-publicized) nude parties.

Bunnell (right) with Bill Gates. This photo probably dates to sometime in the early 1980s.

Computer books also went through a publishing boom. Adam Osborne, born to British parents in Thailand and trained as a chemical engineer, began writing texts for computer companies after losing his job at Shell Oil in California. When the Altair arrived, it shook him with the same sense of revelation that so many other computer lovers had experienced. He whipped out a new book, Introduction to Microcomputers, and put it out himself when his previous publishers declined to print it. A highly technical text, full of details on Boolean logic and shift registers, it nonetheless sold 20,000 copies within a year to buyers eager for any information to help them understand and use their new machines.[21]

The magazines served several roles. They offered up a cornucopia of content to inform and entertain their readers: industry news, software listings, project ideas, product announcements and reviews, and more. One issue of Interface Age even came with a BASIC implementation inscribed onto a vinyl record, ready to be loaded directly into a computer as if from a cassette reader. The magazines also provided manufacturers with a direct advertising and sales channel to thousands of potential buyers—especially important for smaller makers of computers or computer parts and accessories, whose wares were unlikely to be found in your local store. Finally, they became the primary texts through which the culture of the computer hobbyist was established and promulgated.[22] Each of the magazines had its own distinctive character and personality.
BYTE was the magazine for the established hobbyist and tried to cover it all: hardware, software, community news, book reviews, and more. But the hardcore libertarian streak of founding editor Carl Helmers (an avid fan of Ayn Rand) also shone through in the slant of some of its articles. Wayne Green’s Kilobaud, with its spartan cover (title and table of contents only), appealed especially to those with an interest in starting a business to make money off of their interest in computers. The short-lived ROM spoke to the humanist hobbyist, offering longer reports and think-pieces. Dr. Dobb’s had an amateur, free-wheeling aesthetic and tone not far removed from an underground newsletter. In keeping with its origins as a vehicle to publish Tiny BASIC (a free Microsoft BASIC alternative), it focused on software listings. Creative Computing also had a software bent, but as a pre-Altair magazine designed to target users of BASIC in schools and universities, it took a more lighthearted and less technical tone, while Bunnell’s Personal Computing opened its arms to the beginner, with the message that computing was for everyone.[23]

The Mythology of the Microcomputer

Running through many of these early publications can be found a common narrative, a mythology of the microcomputer. To dramatize it: Until recently, darkness lay over the world of computing. Computers, a font of intellectual power, had served the interests only of the elite few. They lay solely in the hands of large corporate and government bureaucracies. Worse yet, even within those organizations, an inner circle of priests mediated access to the machine: the ordinary layperson could not be allowed to approach it. Then came the computer hobbyist. A Prometheus, a Martin Luther, and a Thomas Jefferson all wrapped into one, he ripped the computer and the knowledge of how to use it from the hands of the priests, sharing freedom and power with the masses.
The “priesthood” metaphor came from Ted Nelson’s 1974 book, Computer Lib/Dream Machines, but became a powerful means for the post-Altair hobbyist to define himself against what came before. The imagery came to BYTE magazine in an October 1976 article by Mike Wilbur and David Fylstra:

The movement towards personalized and individualized computing is an important threat to the aura of mystery that has surrounded the computer for its entire history. Until now, computers were understood by only a select few who were revered almost as befitted the status of priesthood.[24]

In this cartoon from Wilbur and Fylstra’s article on the “computer priesthood,” the sinister “HAL” (aka IBM) finds himself chagrined by the spread of hobby computerists.

BYTE editor Carl Helmers made the historical connection with the Enlightenment explicit:

Personal computing as practiced by large numbers of people will help end the concentration of apparent power in the “in” group of programmers and technicians, just as the enlightenment and renaissance in Europe brought about a much wider understanding beginning in the 14th century.[25]

The notion that computing had been jealously guarded by the powerful and kept away from the people can be found as early as June 1975, in the pages of the Homebrew Computer Club newsletter. In the words of club co-founder Fred Moore:

The evidence is overwhelming the people want computers… Why did the Big Companies miss this market? They were busy selling overpriced machines to each other (and the government and military). They don’t want to sell directly to the public.[26]

In the first collected volume of Dr. Dobb’s Journal, editor Jim Warren sounded the same theme of a transition from exclusivity to democracy in more eloquent language:

…I slowly come to believe that the massive information processing power which has traditionally been available only to the rich and powerful in government and large corporations will truly become available to the general public.
And, I see that as having a tremendous democratizing potential, for most assuredly, information–ability to organize and process it–is power. …This is a new and different kind of frontier. We are part of the small cadre of frontiersmen who are exploring it.[27]

Personal Computing editor Dave Bunnell further emphasized the potential of the computer as a political weapon against entrenched bureaucracy:

…personal computers have already proliferated beyond most government regulation. People already have them, just like (pardon the analogy) people already have hand guns. If you have a computer, use it. It is your equalizer. It is a way to organize and fight back against the impersonal institutions and the catch-22 regulations of modern society.[28]

The journalists and social scientists who began to write the first studies of the personal computer in the mid-1980s lapped up this narrative, which provided a heroic framing for the protagonists of their stories. They gave it new life and a much broader audience in books like Silicon Valley Fever (“Until the mid-1970s when the microcomputer burst on the American scene, computers were owned and operated by the establishment–government, big corporations, and other large institutions”) and Fire in the Valley (“Programmers, technicians, and engineers who worked with large computers all had the feeling of being ‘locked out’ of the machine room… there also developed a ‘computer priesthood’… The Altair from MITS breached the machine room door…”)[29]

This way of telling the history of the hobby computer gave deeper meaning to a pursuit that looked frivolous on the surface: paying thousands of dollars for a machine to play Star Trek. And, like most myths, it contained elements of truth. There was a large installed base of batch-processing systems, surrounded by a contingent of programmers denied direct access to the machine.
Between the two there did stand a group of technicians whose relation to the computer was not unlike the relation of the pre-Vatican II priest to the Eucharist. But in promoting this myth, the computer hobbyists denied their own parentage, obscuring the time-sharing and minicomputer cultures that had made the hobby computer possible and from which it had borrowed most of its ideas. The Altair was not an ex nihilo response to an oppressive IBM batch-processing culture that had made access to computers impossible. The announcement of Altair had called it the “world’s first minicomputer kit”: it was the fulfillment of the dream of owning your own minicomputer, a type of computer most of its buyers had already used. It could not have been successful if thousands of people hadn’t already gotten hooked on the experience of interacting directly with a time-sharing system or minicomputer. This self-confident hobby computer culture, however—with its clubs, its local shops, its magazines, and its myths—would soon be subsumed by a larger phenomenon. From this point forward, no longer will nearly every major character in the story of the personal computer have a background in hobby electronics or ham radio. No longer will nearly all the computer makers and buyers alike be computer lovers who found their passion on mainframe, minicomputer, or time-sharing systems. In 1977, the personal computer entered a new phase of growth, led by a new class of businessmen who targeted the mass market.

Steamships, Part 2: The Further Adventures of Isambard Kingdom Brunel

Iron Empire

As far back as 1832, Macgregor Laird had taken the iron ship Alburkah to Africa and up the Niger, making it among the first ships of such construction to take to the open sea. But the use of iron hulls in British inland navigation can be traced decades earlier, beginning with river barges in the 1780s. An iron plate had far more tensile strength than even an oaken board of the same thickness. This made an iron-hulled ship stronger, lighter, and more spacious inside than an equivalent wooden vessel: a two-inch thickness of iron might replace a two-foot thickness of timber.[1] The downsides included susceptibility to corrosion and barnacles, interference with compasses, and, at least at first, the expense of the material.

As we have already seen, the larger the ship, the smaller the proportion of its cargo space it would need for fuel; but the Great Western and British Queen pushed the limits of the practical size of a wooden ship (in fact, Brunel had bound Great Western’s hull with iron straps to bolster its longitudinal strength and prevent it from breaking in heavy seas).[2] The price of wood in Britain grew ever more dear as her ancient forests disappeared, but to build more massive ships economically also required iron prices to fall: and they did just that, starting in the 1830s, because of a surprisingly simple change in technique.

Ironmasters had noticed long ago that their furnaces produced more metal from the same amount of fuel in the winter months. They assumed that the cooler air produced this result, and so by the nineteenth century it had become a basic tenet of the iron-making business that one should blast cool air into the furnace with the bellows to maximize its efficiency.[3] This common wisdom was mistaken; entirely backwards, in fact.
In 1825, a Glasgow colliery engineer named James Neilson was asked to consult at an ironworks in the village of Muirkirk which was having difficulty with its furnace. He found that a hotter blast made the furnace more efficient (it was the dryness, not the coolness, of the winter air that had made the difference): heating the blast air would expand it, and thus increase the pressure of the air flowing into the furnace, strengthening the blast. In 1828 he patented the method of using a stove to heat the blast air. He convinced the Clyde Ironworks to adopt it, and together they perfected the method over the following few years. The results were astounding. A 600° F blast reduced the coal consumption of the furnace by two-thirds and increased output from about five-and-a-half tons of pig iron per day to over eight.[4]

On top of all that, this simple innovation allowed the use of plain coal as fuel in lieu of (more expensive) refined coke. Ironmakers had adopted coke in the 1750s because when iron was smelted with raw coal the impurities (especially sulfur) in the fuel made the resulting metal too brittle. But the hot blast sent the temperature inside the furnace so high that it drove the sulfur out in the slag waste rather than baking it into the iron.

During the 1830s and 40s, Neilson’s hot blast technique spread from Scotland across all of Great Britain, and drove a rapid increase in iron production, from 0.7 million tons in 1830 to over two million in 1850. This cut the market price per ton of pig iron in half.[5] With its vast reserves of coal and iron, made accessible with the power of steam pumps (themselves made in Britain of British iron and fueled by British coal), Britain was perfectly placed to supply the demand induced by this decline in price. Much of the growth in iron output went to exports, strengthening the commercial sinews of the British empire while providing the raw material of industrialization to the rest of the world.
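Measured per ton of iron, the gain was even larger than the two-thirds figure alone suggests. Reading the quoted numbers as daily rates for a single furnace (an assumption on my part), if a cold-blast furnace burned C tons of coal per day to make five-and-a-half tons of pig iron, the hot-blast furnace burned C/3 to make eight:

```latex
\frac{\text{coal per ton, hot blast}}{\text{coal per ton, cold blast}}
  \;=\; \frac{(C/3)/8}{C/5.5}
  \;=\; \frac{5.5}{24}
  \;\approx\; 0.23
```

That is, coal consumed per ton of pig iron fell by more than three-quarters, which goes some way toward explaining the subsequent halving of the market price of pig iron.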
The frenzies of railroad building in the United States and continental Europe in the middle of the nineteenth century relied heavily on British rails made from British iron: in 1849, for example, the Baltimore and Ohio railroad secured 22,000 tons of rails from a Welsh trading concern.[6] The hunger of the rapidly growing United States for iron proved insatiable; circa 1850 the young nation imported about 450,000 tons of British iron per year.[7]

Good Engineering Makes Bad Business

The virtues of iron were also soon on the brain of Isambard Kingdom Brunel. The Great Western Steam Ship Company’s plan for a successor to Great Western began sensibly enough; they would build a slightly improved sister ship of similar design. But Brunel and his partners were seduced, in the fall of 1838, by the appearance in Bristol harbor of an all-iron channel steamer called Rainbow, the largest such ship yet built. Brunel’s associates Claxton and Patterson took a reconnaissance voyage on her to Antwerp, and upon their return all three men became convinced that they should build in iron.[8]

As if that were not enough novelty to take on in one design, in May 1840 another innovative ship steamed into Bristol harbor, leaving Brunel and his associates swooning once more. The aptly named Archimedes, designed by Francis Petit Smith, swam through the water with unprecedented smoothness and efficiency, powered by a screw propeller rather than paddle wheels.[9] Any well-educated nineteenth-century engineer knew that paddles wasted a huge amount of energy pushing water down at the front of the wheel and lifting it up at the back. Nor was screw propulsion a surprising new idea in 1840. As we have seen, early steamboat inventors tried out just about every imaginable means of pushing or pulling a ship.
In his very thorough Treatise on the Screw Propeller, the engineer John Bourne cites some fifty-odd proposals, patents, or practical attempts at screw propulsion prior to Smith’s.[10] After so many failures, most practical engineers assumed (reasonably enough) that the screw could never replace the proven (albeit wasteful) paddlewheel. The difficulties were numerous, including reducing vibration, transmitting power effectively to the screw, and choosing its shape, size, and angle from among many potential alternatives. Most fundamental, though, was producing sufficient thrust: early steam engines operated at modest speed, cycling every three seconds or so. At twenty revolutions per minute, a screw would have to be of an impractical diameter to actually push a ship forward rapidly. Smith overcame this last problem with a gearing system that allowed the propeller shaft to turn 140 times per minute. His propeller design at first consisted of a true helical screw of two turns (which created excessive friction), then later a single turn. Then, in 1840 he refitted Archimedes with a more recognizably modern propeller with two blades (each of half a turn).[11]

Even with these design improvements, Brunel found that noise and vibration made the Archimedes of 1840 “uninhabitable” for passengers.[12] But he had unshakeable faith in its potential. No doubt, advocates of the screw could tout many potential advantages over the paddlewheel: a lower center of gravity, a more spacious interior, more maneuverability in narrow channels, and more efficient use of fuel (especially in headwinds, which caught the paddles full on, and rolling sidelong waves, which would lift one paddlewheel or the other out of the water).[13] So, the weary investors of the Great Western Steam Ship Company saw the timetable of the Great Britain’s construction set back once more, in order to incorporate a screw.
As steamship historian Stephen Fox put it, “[i]n commercial terms, what the Great Western company needed in that fall of 1840 was a second ship, as soon as possible, to compete with the newly established Cunard line,” but that is not what they would get.[14] The completed ship finally launched in 1843, but did not take to sea for a transatlantic voyage until July 1845, having already cost the company some £200,000 in total. With 322 feet of black iron hull driven by a 1,000-horsepower Maudslay engine and a massive 36-ton propeller shaft, she dwarfed Great Western. Her all-iron construction gave an impression of gossamer lightness that fascinated a public used to burly wood.[15]

The Launching of the Great Britain.

But if her appearance impressed, her performance at sea did not. Her propeller fell apart, her engine failed to achieve the expected speed, and she rolled badly in a swell. After major, expensive renovations in the winter of 1845, she ran aground at the end of the 1846 sailing season at Dundrum Bay off Ireland. Her iron hull proved sturdier than the organization that had constructed it: by the time she was at last floated free in August 1847, the Great Western Steam Ship Company had already sunk. Another concern bought Great Britain for £25,000, and she ended up plying the route to Australia, operating mostly by sail.[16]

In the long run, Brunel and his partners were right that iron hulls and screw propulsion would surpass wood and paddles, but Great Britain failed to prove it. The upstart Inman steamer line launched the iron-hulled, screw-powered City of Glasgow in 1850, which did prove that the ideas behind Great Britain could be turned to commercial success. But the more conservative Cunard line did not dispatch its first iron-hulled ship on its maiden voyage until 1856. Though even larger than Great Britain, at 376 feet and 3,600 tons, the Persia still sported paddlewheels.
This did not prevent her from booking more passengers than any other steamship to date, nor from setting a transatlantic speed record.[17] Not until the end of the 1860s did oceanic paddle steamers become obsolete.

The Archimedes. Without any visible wheels, she looked deceptively like a typical sailing schooner, but for the telltale smokestack.

A Glorious Folly

For a time, Brunel walked away from shipbuilding. Then, late in 1851, he began crafting plans for a new liner to far surpass even Great Britain, one large enough to ply the routes to India and Australia without coaling stops on the African coast. Stopping to refuel wasted time but also quite a lot of money: coal in Africa cost far more than in Europe, because another ship had to bring it there in the first place.[18] Because it would sail around Africa, not towards America, the new ship was christened Great Eastern. Monstrous in all its dimensions, the Great Eastern can only be regarded as a monster in truth, in the archaic sense of “a prodigy birthed outside the natural order of things”; it was without precedent and without issue.[19]

Given the total failure of Brunel’s last steam liner company, not to mention other examples of excessive exuberance in his past, such as an atmospheric railway project that shut down within a year, it is hard to conceive of how he was able to convince new backers to finance this wild new idea. He did have the help of one new ally, an ambitious Scottish shipbuilder named John Scott Russell, who was also wracked by career disappointment and eager for a comeback. Together they built an astonishing vessel: at 690 feet long and over 22,000 tons, it exceeded in size every other ship built to its time, and also every other ship built in the balance of the nineteenth century. It would carry (in theory) 4,000 passengers and 18,000 tons of coal or cargo, and mount both paddlewheels and a propeller, the latter powered by the largest steam engine ever built, of 1600 horsepower.
Brunel died of a stroke in 1859, and never saw the ship take to sea. That is just as well, for it failed even more brutally than the Great Britain. It was slow, rolled badly, maneuvered poorly, and demanded prodigious quantities of labor and fuel.[20] Like Great Britain, after a brief service its owners auctioned it off to new buyers at a crushing loss. Great Eastern did, however, still have in its future a key role to play in the extension of British imperial and commercial power, as we shall see.

The Great Eastern in harbor in Wales in 1860. Note the ‘normal-size’ three-masted ship in the foreground for scale.

I have lingered on Brunel’s career for so long not because he was of unparalleled import to the history of the age of steam (he was not), but because his character and his ambition fascinate me. He innovated boldly, but rarely as effectively as his more circumspect peers, such as Samuel Cunard. Much—though certainly not all—of his career consists of glorious failure. Whether you, dear reader, emphasize the glory or the failure may depend on the width of the romantic streak that runs through your soul.

Steam Revolution: The Turbine

Incandescent electric light did not immediately snuff out all of its rivals: the gas industry fought back with its own incandescent mantle (which used the heat of the gas to induce a glow in another material) and the arc lighting manufacturers with a glass-enclosed arc bulb.[1] Nonetheless, incandescent lighting grew at an astonishing pace: the U.S. alone had an estimated 250,000 such lights in use by 1885, three million by 1890 and 18 million by the turn of the century.[2] Edison’s electric light company expanded rapidly across the U.S. and into Europe, and its success encouraged the creation of many competitors. An organizational division gradually emerged between manufacturing companies that built equipment and supply companies that used it to generate and deliver power to customers. A few large competitors came to dominate the former industry: Westinghouse Electric and General Electric (formed from the merger of Edison’s company with Thomson-Houston) in the U.S., and the Allgemeine Elektricitäts-Gesellschaft (AEG) and Siemens in Germany. In a sign of its gradual relative decline, Britain produced only a few smaller firms, such as Charles Parsons’ C. A. Parsons and Company—of whom more later.  In accordance with Edison’s early imaginings, manufacturers and suppliers expanded beyond lighting to general-purpose electrical power, especially electric motors and electric traction (trains, subways, and street cars). These new fields opened up new markets for users: electric motors, for example, enabled small-scale manufacturers who lacked the capital for a steam engine or water wheel to consider mechanization, while releasing large-scale factories from the design constraints of mechanical power transmission. They also provided electrical supply companies with a daytime user base to balance the nighttime lighting load. The demands of this growing electric power industry pushed steam engine design to its limits. 
Dynamos typically rotated hundreds of times a minute, several times the speed of a typical steam engine drive shaft. Engineers overcame this with belt systems, but these gave up energy to friction. Faster engines that could drive a dynamo directly required new high-speed valve control machinery, new cooling and lubrication systems to withstand the additional friction, and higher steam pressures more typical of marine engines than factories. That, in turn, required new boiler designs like the Babcock and Wilcox, which could operate safely at pressures well over 100 psi.[3]

A high-speed steam engine (made by the British firm Willans) directly driving a dynamo (the silver cylinder at left). From W. Norris and Ben. H. Morgan, High Speed Steam Engines, 2nd edition (London: P.S. King & Son, 1902), 13.

But the requirement that ultimately did in the steam engine was not for speed, but for size. As the electric supply companies evolved into large-scale utilities, providing power and light to whole urban centers and then beyond, they demanded more and more output from their power houses. Even Edison’s Pearl Street station, a tiny installation when looking back from the perspective of the turn of the century, required multiple engines to supply it. By 1903, the Westminster Electric Supply Corporation, which supplied only a part of London’s power, required forty-nine Willans engines in three stations to provide about 9 megawatts of power (an average of about 250 horsepower per engine). But demand continued to grow, and engines grew in response. Perhaps the largest steam engines ever built were the 12,000 horsepower giants designed by Edwin Reynolds and installed in 1901 for the Manhattan Elevated Railway Company and in 1904 for the Interborough Rapid Transit (IRT) subway company.
Each of these engines actually consisted of two compound engines grafted together, each with its own high- and low-pressure cylinder, set at right angles to give eight separate impulses per rotation to the spinning alternator (an alternating current dynamo). The combined unit, engine and alternator, weighed 720 tons. But the elevated railway required eight of these monsters, and the IRT expected to need eleven to meet its power needs. The IRT’s power house, with a Renaissance Revival façade designed by famed architect Stanford White, filled a city block near the Hudson River (where it still stands today).[4]

The inside of the IRT power house, with five engines installed. Each engine consists of two towers, with a disc-shaped dynamo between them. From Scientific American, October 29th, 1904.

How much farther the reciprocating steam engine might have been coaxed to grow is hard to say with certainty, because even as the IRT powerhouse was going up in Manhattan, it was being overtaken by a new power technology based on whirling rotors instead of cycling pistons: the steam turbine. This great advancement in steam power borrowed from developments that had been brewing for decades in its most long-standing rival, water power.

Niagara

The signature electrical project of the turn of the twentieth century was the Niagara Falls Power Company. The immense scale of its works, its ambitions to distribute power over dozens of miles, its variety of prospective customers, and its adoption of alternating current: all signaled that the era of local, Pearl Street-style direct-current electric light plants was drawing to a close. The tremendous power latent in Niagara’s roaring cataract as it dropped from the level of Lake Erie to that of Lake Ontario was obvious to any observer; engineers estimated its potential horsepower in the millions. The problem was how to capture it, and where to direct it.
By the late nineteenth century, several mills had moved to draw off some of its power locally. But Niagara held power enough for thousands of factories, and under this piecemeal approach each would have had to dig its own canals, tunnels, and wheel pits to draw off the small fraction of the waterfall that it required. New York State law, moreover, forbade development in the immediate vicinity of the falls to protect its scenic beauty. The solution ultimately decided on was to supply power to users from a small number of large-scale power plants, and the largest nearby pool of potential users lay in Buffalo, about twenty miles away.[5]

The Niagara project originated in the 1886 designs of New York State engineer Thomas Evershed for a canal and tunnel lined with hundreds of wheel pits to supply power to an equal number of local factories. But the plan took a different direction in 1889 after securing the backing of a group of New York financiers, headed once again by J.P. Morgan. The Morgan group consulted a wide variety of experts in North America and Europe before settling on an electric power system as the best alternative, despite the unproven nature of long-distance electric power transmission. This proved a good bet: by 1893, Westinghouse had shown in California that it could deliver high-voltage alternating current over dozens of miles, convincing the Niagara company to adopt the same model.[6]

Cover of the July 22, 1899 issue of Scientific American with multiple views of the first Niagara Falls Power Company power house and its five-thousand-horsepower turbine-driven generators.

By 1904, the company had completed canals, vertical shafts for the fall of water, two powerhouses with a total capacity of 110,000 horsepower, and a mile-long discharge tunnel.
They supplied power to local industrial plants, the city of Buffalo, and a wide swath of New York State and Ontario.[7] The most important feature of the power plant for our story, however, was the set of Westinghouse generators driven by water turbines, each with a capacity of 5,000 horsepower. As Terry Reynolds, a historian of the waterwheel, put it, this was “more than ten times [the capacity] of the most powerful vertical wheel ever built.”[8] Water turbines had made possible the exploitation of water power on a previously inconceivable scale; appropriately so, for they originated from a hunger on the European continent for a power that could match British steam.

Water Turbines

The exact point at which a water wheel becomes a turbine is somewhat arbitrary; a turbine is simply a kind of water wheel that has reached a degree of efficiency and power that earlier designs could not approach. But the distinction most often drawn is in terms of relative motion: the water in a traditional wheel pushes the vane along with the same speed and direction as its own flow (like a person pushing a box along the floor). A turbine, on the other hand, creates “motion of the water relative to the buckets or floats of the wheel” in order to extract additional energy: that is to say, it uses the kinetic energy of the water as well as its weight or pressure. That can occur through either impulse (pressing water against the turning vanes) or reaction (shooting water out from them to cause them to turn), but very often includes a combination of both.[9]

The exact origins of the horizontal water wheel are unknown, but it had been used in Europe since at least the late Middle Ages. It offered by far the simplest way to drive a millstone, since the stone could be attached directly to the wheel without any gearing, and it remained in wide use in poorer regions of the continent well into the modern period.
For centuries, the manufacturers and engineers of Western Europe focused their attention on the more powerful and efficient vertical water wheel, and this type constitutes most of our written record of water technology. Going back to the Renaissance, however, descriptions and drawings can be found of horizontal wheels with curved vanes intended to capture more of the flow of water, and it was the application of rigorous engineering to this general idea that led to the modern turbine. The turbine was in this sense the revenge of the horizontal water wheel, transforming the most low-tech type of water wheel into the most sophisticated.

All of the early development of the water turbine occurred in France, which could draw on a deep well of hydraulic theory but could not so easily access coal and iron to make steam as could its British neighbors. Bernard Forest de Belidor, an eighteenth-century French engineer, recorded in his 1737 treatise on hydraulic engineering the existence of some especially ingenious horizontal wheels, used to grind flour at Bascale on the Garonne. They had curved blades fitted inside a surrounding barrel and angled like the blades of a windmill, such that “the water that pushes it works it with the force of its weight composed with the circular motion given to it by the barrel…”[10] Nothing much came of this observation for another century, but Belidor had identified what we could call a proto-turbine, where water not only pushed on the vanes but also glided down through them like the breeze on the arms of a windmill, capturing more of its energy.

The horizontal mill wheels observed on the Garonne by Belidor. From Belidor, Architecture hydraulique vol. 1, part 2, Plan 5.

In the meantime, theorists came to an important insight. Jean-Charles de Borda, another French engineer (there will be a lot of them in this part of the story), was only a small child in a spa town just north of the Pyrenees when Belidor was writing about water wheels.
He studied mathematics and wrote mathematical treatises, became an engineer for the Army and then the Navy, undertook several scientific voyages, fought in the American Revolutionary War, and headed the commission that established the standard length of the meter. In the midst of all this he found some time in 1767 to write up a study on hydraulics for the French Academy of Sciences, in which he articulated the principle that, to extract the most power from a water wheel, the water should enter the machine without shock and leave it without velocity. Lazare Carnot, father of Sadi, restated this principle some fifteen years later, in a treatise that reached a wider audience than de Borda’s paper.[11] Though it is obviously impossible for the water to literally leave the wheel without velocity (for after all without velocity it would never leave), it was through striving for this imaginary ideal that engineers developed the modern, highly efficient water turbine. First came Jean-Victor Poncelet (from now on, if I mention someone, just assume they are French), another military engineer who had accompanied Napoleon’s Grande Armée into Russia in 1812, where he ended up a prisoner of war for two years. After returning home to Metz he became the professor of mechanics at the local military engineering academy. While there he turned his mind to vertical water wheels, and a long-standing tradeoff in their design: undershot wheels, in which the water passed under the wheel, were cheaper to construct but not very efficient, while overshot wheels, where the water came to the top of the wheel and fell on its vanes or buckets, had the opposite attributes. Poncelet combined the virtues of both by applying the principle of de Borda and Carnot. 
The traditional undershot waterwheel had a maximum theoretical efficiency of 50%, because the ideal wheel turned at half the speed of the water current, allowing the water to leave the vanes of the wheel behind with half of its initial velocity. The appearance of cheap sheet iron had made it possible to substitute metal vanes for wooden, and iron vanes could easily be bent in a curve. By curving the vanes of the wheel just so towards the incoming water, Poncelet found that the water would run up the cupped vane, expending all of its velocity, and then fall out of the bottom of the wheel.[12] He published his idea in 1825 to immediate acclaim: “no other paper on water-wheels… had proved so interesting and commanded such attention.”[13]

The Poncelet water wheel.

Poncelet’s advance hinted at the possibility of a new water-powered industrial future for France. His wheel design soon became a common sight in a France eager to develop its industrial might, and richer in falling water than in reserves of coal. It inspired the Société d’Encouragement pour l’Industrie Nationale, an organization founded in 1801 to push France to be more industrially competitive with Britain, to offer a prize of 6,000 francs to anyone who “would apply on a large scale, in a satisfactory manner, in factories and manufacturing works, the water turbines or wheels with curved blades of Belidor.” The revenge of the horizontal wheel was at hand.[14]

Benoît Fourneyron, an engineer at a water-powered ironworks in the hilly country near the Swiss border, claimed the prize in 1833. Even before the announcement of the prize, he had, in fact, already undertaken a deep study of hydraulic theory, reading up on Borda and his successors. He had devised and tested an improved “Belidor-style” wheel, applying the curved metal vanes of Poncelet to a horizontal wheel situated in a barrel-shaped pit, which we can fairly call the first modern water turbine.
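The 50% ceiling for the flat-vaned undershot wheel, and the optimum at half the current’s speed, follow from a simple momentum model (a textbook sketch in modern notation, not Poncelet’s own analysis):

```latex
% Water of mass flow rate \dot{m} arrives at speed v; the flat vane
% moves at speed u and carries the water along, so the water leaves
% at speed u. The force on the wheel is \dot{m}(v - u), giving power
P(u) = \dot{m}\,(v - u)\,u .
% Maximizing over the wheel speed:
\frac{dP}{du} = \dot{m}\,(v - 2u) = 0
\quad\Longrightarrow\quad
u = \frac{v}{2}, \qquad
P_{\max} = \frac{\dot{m}\,v^{2}}{4}
         = \frac{1}{2}\cdot\frac{1}{2}\,\dot{m}\,v^{2} .
```

Half the incoming kinetic power is thus the best a flat vane can do: the water exits at v/2, carrying off a quarter of the energy, and roughly another quarter is dissipated in the splash of impact. A curved vane like Poncelet’s lets the water climb, stop, and fall away, recovering (in the ideal case) nearly all of its kinetic energy.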
He went on to install over a hundred of these turbines around Europe, but his signal achievement was the turbine he installed in 1837 at a spinning mill amid the hills of the Black Forest in Baden, which took in a head of water falling over 350 feet and generated sixty horsepower at 80% efficiency. The spinning rotor of the turbine responsible for this power was a mere foot across and weighed only forty pounds. A traditional wheel could neither take on such a head of water nor derive so much power, so efficiently, from such a compact machine.[15]

The Fourneyron turbine. The inflowing water from the reservoir A drives the rotor before emptying from its radial exterior into the basin D. From Eugène Armengaud, Traité théorique et pratique des moteurs hydrauliques et à vapeur, nouvelle édition (Paris: Armengaud, 1858), 279.

Steam Turbines

The water turbine was thus a far smaller and more efficient machine than its ancestor, the traditional water wheel. Its basic form had existed since at least the time of Belidor, but to achieve an efficient, high-speed design like Fourneyron’s required a body of engineers deeply educated in mathematical physics and a surrounding material culture capable of realizing those mathematical ideas in precisely machined metal. It also required a social context in which there existed demand for more power than traditional sources could ever provide: in this case, a France racing to catch up with rapidly industrializing Britain. The same relation held between the steam turbine and the reciprocating steam engine: the former could be much more compact and efficient, but put much higher demands on the precision of its design and construction. It was no great leap to imagine that steam could drive a turbine in the same way that water did: through the reaction against or impulse from moving steam.
One could even look to some centuries-old antecedents for inspiration: the steam-jet reaction propulsion of Heron of Alexandria’s whirling “engine” (mentioned much earlier in this history), or a woodcut in Giovanni Branca’s seventeenth-century Le Machine, which showed the impulse of a steam jet driving a horizontal paddlewheel. But it is one thing to make a demonstration or draw a picture, and another to make a useful power source. A steam turbine presented a far harder problem than a water turbine, because steam was so much less dense than liquid water. Simply transplanting steam into a water turbine design would be like blowing on a pinwheel: it would spin, but generate little power.[16]

The difficulty was clear even in the eighteenth century: when confronted in 1784 with reports of a potential rival steam engine driven by the reaction created by a jet of steam, James Watt calculated that, given the low relative density of steam, the jet would have to shoot from the ends of the rotor at 1,300 feet per second, and thus “without god makes it possible for things to move 1000 feet [per second] it can not do much harm.” As historian of steam Henry Dickinson epitomized Watt’s argument, “[t]he analysis of the problem is masterly and the conclusion irrefutable.”[17] Even when later advances in metalworking made the required speeds appear more feasible, one could get nowhere with traditional “cut and try” techniques with ordinary physical tools; the problem demanded careful analysis with the precision tools offered by mathematics and physics.[18] Dozens of inventors nonetheless took a crack at the problem, including another famed steam engine designer, Richard Trevithick. None found success.
Though Fourneyron had built an effective water turbine in the 1830s, the first practical steam turbines did not appear until the 1880s: a time when metallurgy and machine tools had achieved new heights (with mass-produced steels of various grades and qualities available) and a time when even the steam engine was beginning to struggle to sate modern society’s demand for power. It first appeared in two places more or less at once: Sweden and Britain. Gustaf de Laval burst from his middle-class background in the Swedish provinces into the engineering school at Uppsala with few friends but many grandiose dreams: he was the protagonist in his own heroic tale of Swedish national greatness, the engineering genius who would propel Sweden into the first rank of great nations. He lived simultaneously in grand style and constant penury, borrowing from his visions for an ever more prosperous tomorrow to live beyond his means of today. In the 1870s, while working a day job at a glassworks, he developed two inventions based on centrifugal force generated by a rapidly spinning wheel. The first, a bottle-making machine, flopped, but the second, a cream separator, became the basis for a successful business that let him leave his day job behind.[19] Then, in 1882 he patented a turbine powered by a jet of steam directed at a spinning wheel. De Laval claimed that his inspiration came from seeing a nozzle used for sandblasting at the glassworks come loose and whip around, unleashing its powerful jet into the air; it is also not hard to see some continuity in his interest in high-speed rotation. De Laval used his whirling turbines to power his whirling cream separators, and then acquired an electric light company, giving himself another internal customer for turbine power.[20] Though superficially similar to de Branca’s old illustration, de Laval’s machine was far more sophisticated. 
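The scale of the speed problem Watt had identified can be illustrated with a crude scaling argument (my own sketch with round illustrative densities, not Watt’s or de Laval’s figures): a jet’s kinetic power per unit nozzle area is ½ρv³, so a fluid a thousand-odd times less dense than water must move an order of magnitude faster to deliver the same power through the same opening.

```python
# A jet of density rho moving at speed v carries kinetic power
# (1/2) * rho * v**3 per unit of nozzle area. Matching a water jet's
# power flux with steam therefore requires a velocity larger by the
# factor (rho_water / rho_steam) ** (1/3). The densities below are
# illustrative round numbers, not figures from the text.
RHO_WATER = 1000.0  # kg/m^3, liquid water
RHO_STEAM = 0.6     # kg/m^3, steam near atmospheric pressure

speed_factor = (RHO_WATER / RHO_STEAM) ** (1 / 3)
print(f"steam jet must be ~{speed_factor:.0f}x faster")  # ~12x
```

The cube-root scaling is why steam jets, and the rotors meant to catch them, had to run at speeds that seemed absurd to an eighteenth-century engineer.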
As Watt had calculated a century earlier, the low density of steam demanded high rotational speeds (otherwise the steam would escape from the machine having given up very little energy to the wheel) and thus a very high-velocity jet: de Laval’s steel rotor spun at tens of thousands of rotations per minute in an enclosed housing. A few years later he invented an hourglass-shaped nozzle to propel the steam jet to supersonic speeds, a shape that is still used in rocket engines for the same purpose today. Despite the more advanced metallurgy of the late-nineteenth century, however, de Laval still ran up against its limits: he could not run his turbine at the most efficient possible speed without burning out his bearings and reduction gear, and so his turbines never fully captured their potential efficiency advantage over a reciprocating engine.[21]

Cutaway view of a de Laval turbine, from William Ripper, Heat Engines (London: Longmans, Green, 1909), 234.

Meanwhile, the British engineer Charles Parsons came up with a rather different approach to extracting energy from the steam, one that did not require such rapid rotation. Whereas de Laval strove up from the middle class, Parsons came from the highest gentry. Son of the third Earl of Rosse, he grew up in a castle in Ireland, with grounds that included a lake and a sixty-foot-long telescope constructed to his father’s specifications. He studied at home under Robert Ball, who later became the Astronomer Royal of Ireland, then went on to graduate from Cambridge University in 1877 as eleventh wrangler—the eleventh best in his class on the mathematics exams.[22]

Despite his noble birth, Parsons appeared determined to find his own way in the world. He apprenticed himself at Elswick Works, a manufacturer of heavy construction and mining equipment and military ordnance in Newcastle on Tyne.
He spent a couple of years with a partner in Leeds trying to develop rocket-powered torpedoes before taking up as a junior partner at another heavy engineering concern, Clarke Chapman in Gateshead (back on the River Tyne).[23] His new bosses directed Parsons away from torpedoes toward the rapidly growing field of electric lighting. He turned to the turbine concept in search of a prime mover that could match the high rotational speeds of a dynamo.

Parsons came up with a different solution to the density problem than de Laval’s. Rather than try to extract as much power as possible from the steam jet with one extremely fast rotor, he would send the steam through a series of rotors arranged along a single shaft. They would then not have to spin so quickly (though Parsons’ first prototype still ran at 18,000 rotations per minute), and each could extract a bit of energy from the steam as it flowed through the turbine, dropping in pressure. This design extended the two or three stages of pressure reduction in a multi-cylinder steam engine into a continuous flow across a dozen or more rotors. Parsons’ approach created some new challenges (keeping the long, rapidly spinning shaft from bowing too far in one direction or the other, for example), but ultimately most future steam turbines would copy this elongated form.[24]

Parsons’ original prototype turbine and dynamo, with the top removed. Steam entered at the center and exited from both ends, which eliminated the need to deal with “end thrust,” a force pushing on one end of the turbine. From Dickinson, A Short History of the Steam Engine, plate vii.

The Rise of Turbines

Parsons soon founded his own firm to exploit the turbine. Because it has far less inherent friction than the piston of a traditional engine, and because none of its parts has to touch both hot and cold steam, a turbine had the potential to be much more efficient, but it did not start out that way.
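The logic of Parsons’ multi-stage arrangement can be sketched with a little arithmetic (illustrative numbers of my own, not Parsons’ design figures): dividing a fixed overall enthalpy drop among n stages cuts the steam velocity generated in each stage, and hence the blade speed needed to run near the optimum, by a factor of the square root of n.

```python
import math

# Split a fixed total enthalpy drop h_total (J/kg) across n equal
# stages. The steam velocity generated in each stage is
# sqrt(2 * h_total / n), and the blade speed for peak efficiency
# scales with that velocity, so more stages mean slower rotation.
def stage_steam_velocity(h_total, n_stages):
    return math.sqrt(2 * h_total / n_stages)

H_TOTAL = 500_000.0  # J/kg, an assumed illustrative overall drop
for n in (1, 4, 16):
    print(f"{n:2d} stage(s): ~{stage_steam_velocity(H_TOTAL, n):.0f} m/s")
# One stage demands ~1000 m/s (de Laval territory); sixteen stages
# bring the per-stage velocity down to ~250 m/s.
```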
So his early customers were those who cared mainly about the smaller size of turbines: shipbuilders looking to put in electric lighting without adding too much weight or using too much space in the hull. In other applications reciprocating engines still won out.[25] Further refinements, however, allowed turbines to start to supplant reciprocating engines in electrical systems more generally: more efficient blade designs, the addition of a regulator to ensure that steam entered the turbine only at full pressure, the superheating of steam at one end and the condensing of it at the other to maximize the fall in temperature across the entire engine. Turbo-generators—electrical dynamos driven by turbines—began to find buyers in the 1890s. By 1896, Parsons could boast that a two-hundred-horsepower turbine his firm constructed for a Scottish electric power station ran at 98% of its ideal efficiency, and Westinghouse had begun to develop turbines under license in the United States.[26]

Cutaway view of a fully developed Parsons-style turbine. Steam enters at left (A) and passes through the rotors to the right. From Ripper, Heat Engines, 241.

At the same time, Parsons was pushing for the construction of ships with turbine powerplants, starting with the prototype Turbinia, which drove nine propellers with three turbines and achieved a top speed of nearly forty miles per hour. Suitably impressed, the British Admiralty ordered turbine-powered destroyers (starting with Viper in 1897), but the real turning point came in 1906 with the completion of the first turbine-driven battleship (Dreadnought) and transatlantic steamers (Lusitania and Mauretania), all supplied with Parsons powerplants.[27]

HMS Dreadnought was remarkable not only for her armament and armor, but also for her speed of 21 knots (24 miles per hour), made possible by Parsons turbines.
The very first steam turbines had demonstrated their advantage over traditional engines in size; a further decade-and-a-half of development allowed them to realize their potential advantages in efficiency; and now these massive vessels made clear their third advantage: the ability to scale to enormous power outputs. As we saw, the monster steam engines at the subway power house in New York could generate 12,000 horsepower, but the turbines aboard Lusitania churned out half again as much, and that was far from the limit of what was possible. In 1915, the Interborough Rapid Transit Company, facing ever-growing demand for power with the addition of a third (express) track to its elevated lines, installed three 40,000 horsepower turbines for electrical generation, rendering Reynolds’ monster engines of a decade earlier obsolete. By the 1920s, 40,000 horsepower turbines were being built in the U.S., burning half as much coal per watt of power generated as the most efficient reciprocating engines.[28]

Parsons lived to see the triumph of his creation. He spent his last years cruising the world, and preferred to spend the time between stops talking shop with the crew and engineers rather than lounging with other wealthy passengers. He died in 1931, at age 76, in the Caribbean, aboard the (turbine-powered, of course) Duchess of Richmond.[29]

Meanwhile, power usage shifted towards electricity, made widely available by the growth of steam and water turbines and the development of long-distance power transmission, not by traditional steam engines. Niagara was just a foretaste of the large-scale water power projects made feasible by the newly found capacity to transmit that power wherever it was needed: the Hoover Dam and Tennessee Valley Authority in the U.S., the Rhine power dams in Europe, and later projects intended to spur the modernization of poorer countries, from the Aswan Dam on the Nile to the Gezhouba Dam on the Yangtze.
In regions with easy access to coal, however, steam turbines provided the majority of all electric power until far into the twentieth century. Cheap electricity transformed industry after industry. By 1920, manufacturing consumed half of the electricity produced in the U.S., mainly through dedicated electric motors at each tool, eliminating the need for the construction and maintenance of a large, heavy steam engine and for bulky and friction-heavy shafts and belts to transmit power through the factory. The capital barriers to starting a new manufacturing plant thus dropped substantially along with the recurring cost of paying for power, and the way was opened to completely rethink how manufacturing plants were built and operated. Factories became cleaner, safer, and more pleasant to work in, and the ability to organize machines according to the most efficient work process rather than the mechanical constraints of power delivery produced huge dividends in productivity.[30]

A typical pre-electricity factory power distribution system, based on line shafts and belts (in this case driving power looms). All the machines in the factory have to be organized around the driveshafts. [Z22, CC BY-SA 3.0]

The 1910 Ford Highland Park plant represents a hybrid stage on the way to full electrification of every machine; the plant still had overhead line shafts (here for milling engine blocks), but each area was driven by a local electric motor, allowing for a much more flexible arrangement of machinery.

By that time, the heyday of the piston-driven steam engine was over. For large-scale installations, it could no longer compete with turbines (whether driven by falling water or by steam). At the same time, feisty new competitors, diesel and gasoline engines, were gnawing away at its share of the lower-horsepower market. The warning shot fired by the air engine had finally caught up to steam.
It could not outrun thermodynamics, and the incredibly energy-dense new fuel source that had come bubbling up out of the ground: rock oil, or petroleum.

High Pressure, Part 2: The First Steam Railway

Railways long predate the steam locomotive. Trackways with grooves to keep a wheeled cart on a fixed path date back to antiquity (such as the Diolkos, which could carry a naval vessel across the Isthmus of Corinth on a wheeled truck). The earliest evidence for carts running atop wooden rails, though, comes from the mining districts of sixteenth-century Europe. Agricola describes a kind of primitive railway used by German miners in his 1556 treatise De Re Metallica. Agricola reports that the miners ran trucks called Hunds (“dogs,” supposedly because of the barking noise they made while in motion) over two parallel wooden planks. A metal pin protruding down from the truck into the gap between the planks kept it from rolling off the track.[1] This system allowed a laborer to carry far more material out of the mine in a single trip than they could by carrying it themselves.

British Railways

Wooden railways called “waggon ways” are first attested in the coal-mining areas of Britain around 1600. These differed in two important ways from earlier mining carts: first, they ran outside the mine, carrying coal a short distance (perhaps a mile or two) to the nearest high-quality road or navigable waterway from which it could be brought to market. Second, they were drawn by horses, at least on the uphill courses—on some eighteenth-century waggon ways, the horse actually caught a ride downhill, standing on a flat carriage behind the cart. Flanged wheels to keep the wagon on the track were also probably introduced around this time. Both wheels and rails were still constructed of wood, however, which limited the load the wagons could carry.[2]

By the middle of the eighteenth century, waggon ways crisscrossed the mining districts of northern England, especially around the coalfields, creating a substantial trade in birch wheels and rails of beech or ash from the South.
They were called by many different names, such as “gangways,” “plateways,” “tramways,” or “tramroads.” Colliers invested sophisticated engineering into their design, using bridges, causeways, and tunnels to create a smooth grade from the pithead to the point of embarkation (such as the Tyne or the Severn rivers).[3] Most were no more than a mile or two long, but some ran as far as ten miles. They were smooth enough that a single horse could haul several times on rails what it could on an ordinary eighteenth-century road: the figures given by various sources for the load of a horse-drawn rail carriage range from two to ten tons, likely depending on the grade of the railway and the material composition of the rails and wheels.[4]

The Little Eaton Gangway, a railway built in the 1790s that, incredibly, continued to operate until 1908, when this photo was taken. It carried coal five miles down to the Derby Canal.

This close-up of the Little Eaton Gangway shows clearly the design of the railbed, with L-shaped rails to hold the wagon on the track, and stone blocks underneath to which they were nailed. The Penydarren railway, discussed below, had the same design.

This may seem prologue enough, but two further milestones in the development of railways still intervened before the steam locomotive came into the picture. Around the late 1760s, the Darbys of Coalbrookdale step into our history once more. They are reputed to have been the first to introduce durable cast iron plates to strengthen the rails that they used to carry materials among their various Shropshire properties.[5] Later the Darbys and others introduced fully cast-iron rails, doing away with wood altogether. With this change in material the railways of England (already intimately linked with coal mining) now became fully enmeshed in the cycle of the triumvirate—coal, iron, and steam—well before they became steam-powered. Then, in 1799, came the first public horse-drawn railway.
Up to this time, all railways served the needs of a single owner (though some required an easement across neighboring properties), typically a mining concern. But the Surrey Iron Railway, which ran from Croydon (south of London) up to the Thames at Wandsworth, was open to any paying cargo, much like a turnpike road or a canal. Among the backers of the Surrey Iron Railway was a Midlands colliery owner, William James, who will have an important part to play later in our story.[6] So, although we think of them now as two components of a single technological system, the locomotive and the railway did not start out that way. Instead, the locomotive appeared on the scene as an alternative way of hauling freight over an already familiar and well-established transportation medium.

Trevithick

Richard Trevithick was the first Englishman to attempt this substitution. He was born in 1771, in the heart of the copper-mining region of Cornwall. His birthplace, the village of Illogan, sat beneath the weathered hill of Carn Brea, said to be the ancient dwelling place of a giant.[7] But the only giants still found upon the landscape of eighteenth-century Cornwall breathed steam. They sheltered in the stone engine houses that still dot the countryside today, and raised water from the bottom of the mine, allowing the proprietors to delve ever deeper into the earth. Trevithick’s father was a mine “captain,” a high-status position with the responsibilities of a general manager and some of the same cachet among the mining community as a sea captain would have in a nautical community. This included the privilege of an honorific title: he was “Captain Trevithick” to his neighbors. The elder Trevithick’s work included serving as mine engineer and assayer, and he would have been familiar with all the technical workings of the mine, from the digging equipment to the pumping engine. The younger Trevithick must have learned well from his father.
At fifteen, he was employed by his father at Dolcoath, the most lucrative copper mine of the region. By age 21, having grown into something of a giant himself—standing a burly six feet two, his pastimes said to include hurling sledgehammers over buildings—he was already being consulted by the miners of Cornwall for his expertise on steam engines.[8]

A portrait of Trevithick painted in 1816, when he was 45, by John Linnell (Science Museum, London). He gestures to the Andes of Peru in the background, where Trevithick intended, at the time, to make his fortune in silver mining.

By the 1790s, Boulton and Watt were about as popular in Cornwall as Fulton and Livingston were in the American West, and for the same reason: they were seen as grasping monopolists who kept the miners of Cornwall, who depended on effective pumps for their livelihood, in thrall to the Watt patent.
Fifteen years earlier, Watt’s efficient engines had appeared as a lifeline to copper mines suffering under competition from the prodigious Parys Mountain in Anglesey, whose ample ores could be cheaply mined directly from the surface.[9] But as the mines continued to struggle, Boulton and Watt began to take shares in mines in lieu of payment, and set up a headquarters at Cusgarne, right in the copper district, to oversee their investments. One of their most skilled mechanics, William Murdoch, moved to Cornwall and acted as their local agent. To the copper miners, Boulton and Watt began to look like meddlers as well as leeches. By the 1790s, Anglesey had run out of easy-to-reach ore, and the fortunes of the Cornwall copper mines began to look up. With their mutual enemy gone, the grudging partnership between the Cornish miners and Boulton and Watt soured rapidly.

An 1831 engraving of the Dolcoath copper mine, Camborne, Cornwall. (Hulton Archive/Getty Images)

Trevithick, a hot-headed young man, took up the banner of revolution against the Boulton and Watt regime in 1792, fighting a series of legal battles on behalf of the competing engine design of Edward Bull.
By 1796 every battle had been lost—Bull and Trevithick’s attempt to defy the Watt patent had failed, and there seemed to be nothing for the Cornwall interests to do but wait for the expiration of its term, in 1800.[10] But Trevithick found another way forward: strong steam. More than any other element, the separate condenser distinguished Watt’s patent engine from its predecessors. By shedding the condenser and operating well above atmospheric pressure instead, Trevithick could avoid claims of infringement. Concerned that releasing uncondensed steam would waste all the power of the engine, he consulted Cornwall’s resident mathematician, Davies Giddy. Giddy reassured him that he would waste only a fixed amount of power, equal to the weight of the atmosphere, and would gain some compensation in return by saving the power required to work an air pump and lift water into the condenser.[11] As in the U.S., then, the socioeconomic environment pushed steam engine users on the periphery toward high pressure, though in this case it was the presence of a rival patent rather than an absence of capital resources. Trevithick saw an immediate application for high-pressure steam as a replacement for the horse whim, an animal-powered lift which worked alongside the pumping engine in many Cornish mines, usually in the same vertical shaft, to raise ore and dross from below. A few whims had been installed with Watt engines, but Trevithick’s “puffers” (so called for the visible puff of exhaust steam they released) cost less to build and transport. The compact high-pressure engine also fit much more comfortably in the engine house alongside the pumping engine than a second Watt behemoth would.

An 1806 Trevithick stationary steam engine, minus the flywheel it would have had at the time to maintain a steady motion. Note how the exhaust flue comes out of the middle of the cylindrical boiler, the same return-flue design used by Evans to extract additional heat from the hot gases of the furnace.
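Giddy’s reassurance can be put in rough quantitative terms (a sketch in modern notation; the worked figures are illustrative, not from the source). For a piston of area $A$ and stroke $L$ working at steam pressure $P_s$, exhausting against the atmosphere at pressure $P_{\text{atm}}$ forfeits a fixed amount of work per stroke, while the work delivered grows with the steam pressure:

```latex
\[
W_{\text{gross}} = P_s \, A \, L, \qquad
W_{\text{lost}} = P_{\text{atm}} \, A \, L, \qquad
\frac{W_{\text{lost}}}{W_{\text{gross}}} = \frac{P_{\text{atm}}}{P_s}
\]
```

At Watt-style pressures barely above one atmosphere, nearly everything would be forfeited; at roughly three atmospheres only about a third is, and at ten atmospheres a tenth—losses partly offset, as Giddy noted, by not having to drive the condenser’s air pump and water supply.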
Trevithick’s engines thus began replacing horse whims in engine houses across Cornwall in the early 1800s.[12] The Watt interests were not happy: much later in life Trevithick claimed that Watt (probably referring in this case to the belligerent James Watt, Jr., the inventor’s son) “said to an eminent scientific character still living that I deserved hanging for bringing into use the high pressure,” presumably because of the danger of explosion.[13] One of Trevithick’s boilers, installed to drain the foundation for a corn mill in Greenwich, did in fact explode in 1803 when left unattended, and the Watts did not miss the opportunity to get their “I told you sos” into the press.[14] In future engines Trevithick would include two safety valves, plus a plug soldered with lead as a final safety measure: if the water level fell too low, the heat would melt the solder and blow out the plug, relieving excess pressure. But Trevithick’s interest had by this time already wandered from staid industrial applications to the more romantic dream of a steam carriage.

Steam Carriage

As we have seen already several times in this story, many inventors and philosophers had dreamed the same dream, dating back well over a century. To realize how readily available the idea of a steam carriage was, we must remember that steam power’s job, in a sense, had always been to replace either horse- or water-power, and that carriages were the most ubiquitous piece of horse-powered machinery in early modern Europe. The first person we know of to successfully build a steam carriage (if we construe success loosely) was a French army officer named Nicolas-Joseph Cugnot. More specifically, he built a steam fardier, a cart for pulling cannon. It was a curious-looking tricycle with the boiler hanging off the front like an elephantine proboscis.
Cugnot carried out some trial runs of his vehicle in 1769, but with no way to refill the boiler while in use, it had to stop every fifteen minutes to let the boiler cool, refill it, and work up steam once more. This was a curiosity without real practical value.[15]

Cugnot’s Fardier à Vapeur, preserved at the Musée des Arts et Métiers in Paris.

Trevithick probably never heard of Cugnot, but he certainly knew William Murdoch, Watt’s representative in Cornwall. Murdoch began experimenting with high-pressure steam carriages in the 1780s, and built a three-wheeled carriage that (like Cugnot’s cart) survives today in a museum. Unlike Cugnot’s vehicle, however, Murdoch’s surviving machine is a model, no more than a foot tall. Lacking the backing of his employers, who disliked strong steam and found the carriage concept unpromising if not ridiculous, Murdoch never got even as far as Cugnot: there is no evidence that he ever built a full-sized carriage.[16]

Murdoch’s model steam carriage.

It’s unclear why Trevithick decided to build a steam-powered vehicle—he may have been trying to develop a portable engine that could be moved between work sites under its own power.
It is possible that Trevithick got the idea for a steam carriage from Murdoch, but, as we have seen, the idea was commonplace. In the execution of that idea, Trevithick went far beyond his predecessor. He began work on his steam carriage in late 1800, with the help of his cousin Andrew Vivian and several other local craftsmen. He already had in hand his high-pressure engine design, with a very favorable power-to-weight ratio compared to a Watt engine. A small and light engine was advantageous in a steamboat, but it was crucial in a land vehicle that had to rest on wheels and fit on narrow roads. He used the same return-flue boiler design as Oliver Evans had; given the distance and timing, they almost certainly arrived at this idea independently. Many wise men of the time doubted that a self-driving wheel was even possible, arguing that it would simply spin in place without an animal with traction to pull it. Trevithick therefore felt it necessary to first disprove this theory (in an experiment probably devised by Giddy) by sitting in a chaise with his compatriots and moving the vehicle by turning the wheels with their hands.[17] In December 1801 they went for their first steam-powered ride. What exactly the first carriage looked like is unknown, but it was likely a simple wheeled platform with engine and boiler mounted atop it and a crude lever for steering. Years later one “old Stephen Williams” (not so old at the time) would recall:

I was a cooper by trade, and when Captain Dick [Trevithick] was making his first steam-carriage I used to go every day into John Tyack’s blacksmiths’ shop at the Weith, close by here, where they were putting it together. …In the year of 1801, upon Christmas-eve, coming on evening, Captain Dick got up steam, out in the high road… we jumped up as many as could; may be seven or eight of us.
‘Twas a stiffish hill going from the Weith up to Cambourne Beacon, but she went off like a little bird.[18]

Within days, this first carriage quite literally crashed and burned (though the burning was apparently caused by leaving the carriage unattended with the firebox lit, not by the crash itself).[19] Nonetheless, Trevithick formed a partnership with his cousin Vivian to develop both the high-pressure engine and its use in carriages, and they went to London to seek a patent and additional backers and advisers, including such scientific luminaries as Humphry Davy and Count Rumford. They had a second carriage built, this one designed as a true passenger vehicle with a compartment to accommodate eight. Giddy nicknamed it “Trevithick’s Dragon.” It worked better than the first attempt, running a good eight miles per hour on level ground, but the ride was rough. For some decades, steel spring suspensions had been standard on carriages, but the direct geared linkage between the drive wheels and the engine on Trevithick’s carriage did not allow them to move independently.[20] The steering mechanism also worked poorly. In one early trial Trevithick tore the railing from a garden wall, and Vivian’s relative Captain Joseph Vivian (actually a sea captain) reported after a drive that he “thought he was more likely to suffer shipwreck on the steam-carriage than on board his vessel…”[21] It offered no obvious advantages over a horse carriage to offset the loss of comfort and control, not to mention the risk of fire and explosion. The Dragon attracted some curious onlookers, but no investors.

Steam Railway

If steam-powered vehicles on water found success first in the U.S. because alternative modes of inland transportation were lacking, steam-powered vehicles on land found success first in Britain because the transportation medium to support them already existed.
The railways offered the perfect solution for the problems of Trevithick’s steam carriage: a road without cobbles or ruts to jounce on, a road that steered the carriage for you, and a road with no passengers to annoy or endanger. But Trevithick was not positioned to see it, because Cornwall did not have railways of any kind (its first, the Portreath Tramroad, was not constructed until 1812). It would take a new connection to link the engine born out of the struggle with Watt over the mines of Cornwall to the rails created to solve the problems of northern coalfields. On business in Bristol in 1803, Trevithick made that connection when he met a Welsh ironmaster named Samuel Homfray, who provided him with fresh capital in exchange for a share in his patent, and solicited his aid in building steam engines for his ironworks, called Penydarren. It happened that Homfray also had part ownership of a railway, and the opportunity thus arose to marry high-pressure steam to rails. For Homfray this was also an opportunity to show up a rival. He and several other ironmasters had invested in a canal to carry their wares down to the port at Cardiff, but the controlling partner, Richard Crawshay, demanded exclusive privileges over the waterway. Homfray and several of the other partners exploited a loophole to bypass Crawshay. At the time, any public thoroughfare (on land or water) required an act of Parliament to approve its construction. The act approving the Cardiff canal also allowed for the construction of railways within four miles of the canal. The intent of this was to allow for feeder lines. Rails, at the time, were a strictly secondary transportation system. They provided “last-mile” service from mining centers to a navigable waterway. A boom in canal building that began in the later eighteenth century extended and interconnected those waterways, which offered far lower transportation costs than any form of land transportation.
If a horse could pull several times the weight on a railway that it could on an ordinary road, it could pull several times more again when hitched to a canal barge.[22] (The plummeting transportation costs brought about by the ability to float cargo to the coast from nearly any town in England by horse-drawn barge account for the lack of British interest in riverine steamboats.) So the goal was almost always to get goods to water as quickly as possible. The trick that Homfray and his allies pulled was to build a railway as a primary transportation link in its own right, paralleling the canal for over nine miles, rather than connecting directly to it, and thereby neutering Crawshay’s privileges.[23] It was on this railway that Homfray (or perhaps Trevithick; which partner initiated the idea is unknown) proposed to replace horse power with steam power. Crawshay found the concept laughable. Like many of his contemporaries, he believed that the smooth wheels would find no purchase on smooth rails, and would simply spin in place. The ironmasters placed a not-so-friendly wager of 500 guineas over whether Trevithick could build a locomotive to haul ten tons of iron the length of the railway. On February 21st, 1804, Crawshay lost. As Trevithick reported to Giddy:

Yesterday we proceeded on our journey with the engine; we carry’d ten tons of Iron, five waggons, and 70 Men riding on them the whole of the journey. Its above 9 miles which we perform’d in 4 hours & 5 Mints, but we had to cut down som trees and remove some Large rocks out of road. The engine, while working, went nearly 5 miles pr hour; …We shall continue to work on the road, and shall take forty tons the next journey. The publick untill now call’d mee a schemeing fellow but now their tone is much alter’d.[24]

We should not picture the Penydarren engine in the mind’s eye as the iconic, fully-developed steam locomotive of the mid-19th century.
The railbed itself looked very different from what we might imagine: the cast-iron rails were outward-facing Ls, whose vertical stroke kept the wheels from leaving the track. Nails driven into two parallel rows of stone blocks held the rails in place. This arrangement avoided having perpendicular rail ties (or sleepers, as the British call them) that could trip up the horses, who walked between the rails as they pulled their cargo. Trevithick’s locomotive resembled a stationary engine jury-rigged to a wheeled platform. A crosshead and large gears carried power from the cylinder down to the left-hand wheels only (the right side received no power), and a flywheel kept the vehicle from lurching each time the piston reached the dead-center position. Trevithick’s goal was to show off the versatility of high-pressure steam, not to launch a railroad revolution.

A replica showing what the Penydarren locomotive may have looked like. Note the fixed gearing system for delivering power to the two wheels in the foreground, the flywheel in the background, and the L-shaped rails. Notice also how much it resembles Trevithick’s stationary steam engine, with additional mechanisms to transmit power to the wheels.

The Penydarren locomotive performed several more trial runs; on at least one, the rails cracked under the engine’s weight: a portent of a major technical obstacle yet to be overcome before steam railways could find lasting success.
Trevithick then seems to have removed the engine and put it to work running a hammer in the ironworks; what became of the rest of the vehicle is unknown.[25] Many other endeavors captured Trevithick’s attention in the following years, among them stationary engines at Penydarren and elsewhere, steam dredging experiments, and a scheme to use a steam tug to drag a fireship into the midst of Napoleon’s putative invasion fleet at Boulogne (as we have seen, Robert Fulton was at this time trying to sell the British government on his “torpedoes” to serve the same purpose). In 1808, he made one last stab at steam locomotion, a demonstration vehicle called the Catch-me-who-can that ran over a temporary circular track in London. Again, rail breakage proved a problem. Trevithick hoped to earn some money from paying riders and to attract the interest of investors, but he failed on both counts.[26] The reasons for the lack of interest are clear. Trevithick’s locomotives were neither much faster nor obviously cheaper than a team of horses, and they came with a host of new, unsolved technical problems. Twenty more years would elapse before rails would begin to seriously challenge canals as major transport arteries for Britain, not mere peripheral capillaries. To make that happen would require improvements in locomotives, better rails, and a new way of thinking about the comparative economics of transportation. Trevithick himself had twenty-five more years of restless, peripatetic life ahead of him, much of it spent on fruitless mining ventures in South and Central America. In an irresistible historical coincidence, in 1827, at the end of a financially ruinous trip to Costa Rica, he crossed paths with another English engineer named Robert Stephenson. Stephenson gave the downtrodden older man fifty pounds to help him get home. After a spate of mostly failed or abortive projects, Trevithick died in 1833.
The one item of real wealth remaining to him, a gold watch brought back from South America, went to defray his funeral expenses.[27] Young Stephenson, however, returned to much brighter prospects in England. He and his father would soon redeem the promise hinted at by the trials at Penydarren.

Internet Ascendant, Part 2: Going Private and Going Public

In the summer of 1986, Senator Al Gore, Jr., of Tennessee introduced an amendment to the Congressional Act that authorized the budget of the National Science Foundation (NSF). He called for the federal government to study the possibilities for “communications networks for supercomputers at universities and Federal research facilities.” To explain the purpose of this legislation, Gore called on a striking analogy:

One promising technology is the development of fiber optic systems for voice and data transmission. Eventually we will see a system of fiber optic systems being installed nationwide. America’s highways transport people and materials across the country. Federal freeways connect with state highways which connect in turn with county roads and city streets. To transport data and ideas, we will need a telecommunications highway connecting users coast to coast, state to state, city to city. The study required in this amendment will identify the problems and opportunities the nation will face in establishing that highway.1

In the following years, Gore and his allies would call for the creation of an “information superhighway”, or, more formally, a national information infrastructure (NII). As he intended, Gore’s analogy to the federal highway system summons to mind a central exchange that would bind together various local and regional networks, letting all American citizens communicate with one another. However, the analogy also misleads – Gore did not propose the creation of a federally-funded and maintained data network.
He envisioned that the information superhighway, unlike its concrete and asphalt namesake, would come into being through the action of market forces, within a regulatory framework that would ensure competition, guarantee open, equal access to any service provider (what would later be known as “net neutrality”), and provide subsidies or other mechanisms to ensure universal service to the least fortunate members of society, preventing the emergence of a gap between the information rich and information poor.2

Over the following decade, Congress slowly developed a policy response to the growing importance of computer networks to the American research community, to education, and eventually to society as a whole. Congress’ slow march towards an NII policy, however, could not keep up with the rapidly growing NSFNET, overseen by the neighboring bureaucracy of the executive branch. Despite its reputation for sclerosis, a bureaucracy, unlike a legislature, can respond to events immediately, without deliberation. And so it happened that, between 1988 and 1993, the NSF crafted the policies that would determine how the Internet became private, and thus went public. It had to deal every year with novel demands and expectations from NSFNET’s users and peer networks. In response, it made decisions on the fly, decisions which rapidly outpaced Congressional plans for guiding the development of an information superhighway. These decisions rested largely in the hands of a single man – Stephen Wolff.

Acceptable Use

Wolff earned a Ph.D. in electrical engineering at Princeton in 1961 (where he would have been a rough contemporary of Bob Kahn), and began what might have been a comfortable academic career, with a post-doctoral stint at Imperial College, followed by several years teaching at Johns Hopkins. But then he shifted gears, and took a position at the Ballistic Research Laboratory in Aberdeen, Maryland.
He stayed there for most of the 1970s and early 1980s, researching communications and computing systems for the U.S. Army. He introduced Unix into the lab’s offices, and managed Aberdeen’s connection to the ARPANET.3

In 1986, the NSF recruited him to manage its supercomputing backbone – he was a natural fit, given his experience connecting Army supercomputers to ARPANET. He became the principal architect of NSFNET’s evolution from that point until his departure in 1994, when he entered the private sector as a manager for Cisco Systems. The original intended function of the net that Wolff was hired to manage had been to connect researchers across the U.S. to NSF-funded supercomputing centers. As we saw last time, however, once Wolff and the other network managers saw how much demand the initial backbone had engendered, they quickly developed a new vision of NSFNET, as a communications grid for the entire American research and post-secondary education community.

However, Wolff did not want the government to be in the business of supplying network services on a permanent basis. In his view, the NSF’s role was to prime the pump, creating the initial demand needed to get a commercial networking services sector off the ground. Once that happened, Wolff felt it would be improper for a government entity to be in competition with viable for-profit businesses. So he intended to get NSF out of the way by privatizing the network, handing over control of the backbone to unsubsidized private entities and letting the market take over.

This was very much in the spirit of the times. Across the Western world, and across most of the political spectrum, government leaders of the 1980s touted privatization and deregulation as the best means to unleash economic growth and innovation after the relative stagnation of the 1970s.
As one example among many, around the same time that NSFNET was getting off the ground, the FCC knocked down several decades-old constraints on corporations involved in broadcasting. In 1985, it removed the restriction on owning print and broadcast media in the same locality, and two years later it nullified the fairness doctrine, which had required broadcasters to present multiple views on public-policy debates. From his post at NSF, Wolff had several levers at hand for accomplishing his goals. The first lay in the interpretation and enforcement of the network’s acceptable use policy (AUP). In accordance with NSF’s mission, the initial policy for the NSFNET backbone, in effect until June 1990, required all uses of the network to be in support of “scientific research and other scholarly activities.” This is quite restrictive indeed, and would seem to eliminate any possibility of commercial use of the network. But Wolff chose to interpret the policy liberally. Regular mailing-list postings about new product releases from a corporation that sold data processing software – was that not in support of scientific research? What about the decision to allow MCI’s email system to connect to the backbone, at the urging of Vint Cerf, who had left government employ to oversee the development of MCI Mail? Wolff rationalized this – and other later interconnections to commercial email systems such as CompuServe’s – as in support of research, by making it possible for researchers to communicate digitally with the wider range of people they might need to contact in the pursuit of their work. A stretch, perhaps. But Wolff saw that allowing some commercial traffic on the same infrastructure that was used for public NSF traffic would encourage the private investment needed to support academic and educational use on a permanent basis.
Wolff’s strategy of opening the door of NSFNET as far as possible to commercial entities got an assist from Congress in 1992, when Congressman Rick Boucher, who helped oversee NSF as chair of the Science Subcommittee, sponsored an amendment to the NSF charter which authorized any additional uses of NSFNET that would “tend to increase the overall capabilities of the networks to support such research and education activities.” This was an ex post facto validation of Wolff’s approach to commercial traffic, allowing virtually any activity as long as it produced profits that encouraged more private investment into NSFNET and its peer networks.

Dual-Use Networks

Wolff also fostered the commercial development of networking by supporting the regional networks’ reuse of their networking hardware for commercial traffic. As you may recall, the NSF backbone linked together a variety of not-for-profit regional nets, from NYSERNet in New York to Sesquinet in Texas to BARRNet in northern California. NSF did not directly fund the regional networks, but it did subsidize them indirectly, via the money it provided to labs and universities to offset the costs of their connection to their neighborhood regional net. Several of the regional nets then used this same subsidized infrastructure to spin off a for-profit commercial enterprise, selling network access to the public over the very same wires used for the research and education purposes sponsored by NSF. Wolff encouraged them to do so, seeing this as yet another way to accelerate the transition of the nation’s research and education infrastructure to private control. This, too, accorded neatly with the political spirit of the 1980s, which encouraged private enterprise to profit from public largesse, in the expectation that the public would benefit indirectly through economic growth.
One can see parallels with the dual-use regional networks in the 1980 Bayh-Dole Act, which defaulted ownership of patents derived from government-funded research to the organization performing the work, not to the government that paid for it. The most prominent example of dual-use in action was PSINet, a for-profit company initially founded as Performance Systems International in 1988. It was created by William Schrader, the co-founder of NYSERNet, and Martin Schoffstall, one of its vice presidents. Schoffstall, a former BBN engineer and co-author of the Simple Network Management Protocol (SNMP) for managing the devices on an IP network, was the key technical leader. Schrader, an ambitious Cornell biology major and MBA who had helped his alma mater set up its supercomputing center and get it connected to NSFNET, provided the business drive. He firmly believed that NYSERNet should be selling service to businesses, not just educational institutions. When the rest of the board disagreed, he quit to found his own company, first contracting with NYSERNet for service, and later raising enough money to acquire its assets. PSINet thus became one of the earliest commercial internet service providers, while continuing to provide non-profit service to colleges and universities seeking access to the NSFNET backbone.4

Wolff’s final source of leverage for encouraging a commercial Internet lay in his role as manager of the contracts with the Merit-IBM-MCI consortium that operated the backbone. The initial impetus for change in this dimension came not from Wolff, however, but from the backbone operators themselves.

A For-Profit Backbone

MCI and its peers in the telecommunications industry had a strong incentive to find or create more demand for computer data communications. They had spent the 1980s upgrading their long-line networks from coaxial cable and microwave – already much higher capacity than the old copper lines – to fiber optic cables.
These cables, which transmitted laser light through glass, had tremendous capacity, limited mainly by the technology in the transmitters and receivers on either end, rather than the cable itself. And that capacity was far from saturated. By the early 1990s, many companies had deployed OC-48 transmission equipment with 2.5 Gbps of capacity, an almost unimaginable figure a decade earlier. An explosion in data traffic would therefore bring in new revenue at very little marginal cost – almost pure profit.5

The desire to gain expertise in the coming market in data communications helps explain why MCI was willing to sign on to the NSFNET bid proposed by Merit, which massively undercut the competing bids (at $14 million for five years, versus the $40 million and $25 million proposed by their competitors6), and surely implied a short-term financial loss for MCI and IBM. But by 1989, they hoped to start turning a profit from their investment. The existing backbone was approaching the saturation point, with 500 million packets a month, a 500% year-over-year increase.7 So, when NSF asked Merit to upgrade the backbone from 1.5 Mbps T1 lines to 45 Mbps T3, they took the opportunity to propose to Wolff a new contractual arrangement.

T3 was a new frontier in networking – no prior experience or equipment existed for digital networks of this bandwidth – and so the companies argued that more private investment would be needed, requiring a restructuring that would allow IBM and Merit to share the new infrastructure with for-profit commercial traffic – a dual-use backbone. To achieve this, the consortium would form a new non-profit corporation, Advanced Network & Services, Inc. (ANS), which would supply T3 networking services to NSF. A subsidiary called ANS CO+RE Systems would sell the same services at a profit to any clients willing to pay. Wolff agreed to this, seeing it as just another step in the transition of the network towards commercial control.
Moreover, he feared that continuing to block commercial exploitation of the backbone would lead to a bifurcation of the network, with suppliers like ANS doing an end-run around NSFNET to create their own, separate, commercial Internet. Up to that point, Wolff’s plan for gradually getting NSF out of the way had no specific target date or planned milestones. A workshop on the topic held at Harvard in March 1990, in which Wolff and many other early Internet leaders participated, considered a variety of options without laying out any concrete plans.8 It was ANS’ stratagem that triggered the cascade of events that led directly to the full privatization and commercialization of NSFNET.

It began with a backlash. Despite Wolff’s good intentions, IBM and MCI’s ANS maneuver created a great deal of disgruntlement in the networking community. It became a problem precisely because of the for-profit networks attached to the backbone that Wolff had promoted. So far they had gotten along reasonably well with one another, because they all operated as peers on the same terms. But with ANS, a for-profit company held a de facto monopoly on the backbone at the center of the Internet.9 Moreover, despite Wolff’s efforts to interpret the AUP loosely, ANS chose to interpret it strictly, and refused to interconnect the non-profit portion of the backbone (for NSF traffic) with any of the for-profit networks like PSINet, since that would require a direct mixing of commercial and non-commercial traffic. When this created an uproar, they backpedaled and came up with a new policy, allowing interconnection for a fee based on traffic volume.

PSINet would have none of this.
In the summer of 1991, they banded together with two other for-profit Internet service providers – UUNET, which had begun by selling commercial access to Usenet before adding Internet service, and the California Education and Research Federation Network, or CERFNet, operated by General Atomics – to form their own exchange, bypassing the ANS backbone. The Commercial Internet Exchange (CIX) consisted at first of just a single routing center in Washington, D.C., which could transfer traffic among the three networks. They agreed to peer at no charge, regardless of the relative traffic volume, with each network paying the same fee to CIX to operate the router. New routers in Chicago and Silicon Valley soon followed, and other networks looking to avoid ANS’ fees also joined.

Divestiture

Rick Boucher, the Congressman whom we met above as a supporter of NSF commercialization, nonetheless asked the Office of the Inspector General to investigate the propriety of Wolff’s actions in the ANS affair. It found NSF’s actions precipitous, but not malicious or corrupt. Nevertheless, Wolff saw that the time had come to divest control of the backbone. With ANS CO+RE and CIX, privatization and commercialization had begun in earnest, but in a way that risked splitting the unitary Internet into multiple disconnected fragments, as CIX and ANS refused to connect with one another. NSF therefore drafted a plan for a new, privatized network architecture in the summer of 1992, released it for public comment, and finalized it in May of 1993. NSFNET would shut down in the spring of 1995, and its assets would revert to IBM and MCI. The regional networks could continue to operate, with financial support from the NSF gradually phasing out over a four-year period, but would have to contract with a private ISP for internet access.

But in a world of many competitive internet access providers, what would replace the backbone?
What mechanism would link these opposed private interests into a cohesive whole? Wolff’s answer was inspired by the exchanges already built by cooperatives like CIX – NSF would contract out the creation of four Network Access Points (NAPs), routing sites where various vendors could exchange traffic. Having four separate contracts would avoid repeating the ANS controversy, by preventing a monopoly on the points of exchange. One NAP would reside at the pre-existing, and cheekily named, Metropolitan Area Ethernet East (MAE-East) in Vienna, Virginia, operated by Metropolitan Fiber Systems (MFS). MAE-West, operated by Pacific Bell, was established in San Jose, California; Sprint operated another NAP in Pennsauken, New Jersey, and Ameritech one in Chicago. The transition went smoothly10, and NSF decommissioned the backbone right on schedule, on April 30, 1995.11

The Break-up

Though Gore and others often invoked the “information superhighway” as a metaphor for digital networks, there was never serious consideration in Congress of using the federal highway system as a direct policy model. The federal government paid for the building and maintenance of interstate highways in order to provide a robust transportation network for the entire country. But in an era when both major parties took deregulation and privatization for granted as good policy, a state-backed system of networks and information services on the French model of Transpac and Minitel was not up for consideration.12

Instead, the most attractive policy model for Congress as it planned for the future of telecommunication was the long-distance market created by the break-up of the Bell System between 1982 and 1984. In 1974, the Justice Department filed suit against AT&T, its first major suit against the organization since the 1950s, alleging that it had engaged in anti-competitive behavior in violation of the Sherman Antitrust Act.
Specifically, they accused the company of using its market power to exclude various innovative new businesses from the market – mobile radio operators, data networks, satellite carriers, makers of specialized terminal equipment, and more. The suit thus clearly drew much of its impetus from the ongoing disputes since the early 1960s (described in an earlier installment) between AT&T and the likes of MCI and Carterfone.

When it became clear that the Justice Department meant business, and intended to break the power of AT&T, the company at first sought redress from Congress. John de Butts, chairman and CEO since 1972, attempted to push a “Bell bill” – formally the Consumer Communications Reform Act – through Congress. It would have enshrined into law AT&T’s argument that the benefits of a single, universal telephone network far outweighed any risk of abusive monopoly, risks which in any case the FCC could already effectively check. But the proposal received stiff opposition in the House Subcommittee on Communications, and never reached a vote on the floor of either Congressional chamber. In a change of tactics, in 1979 the board replaced the combative de Butts – who had once declared openly to an audience of state telecommunications regulators the heresy that he opposed competition and espoused monopoly – with the more conciliatory Charles Brown. But it was too late by then to stop the momentum of the antitrust case, and it became increasingly clear to the company’s leadership that they would not prevail. In January 1982, therefore, Brown agreed to a consent decree that would have the presiding judge in the case, Harold Greene, oversee the break-up of the Bell System into its constituent parts.

The various Bell companies that brought copper to the customer’s premises, which generally operated state by state (New Jersey Bell, Indiana Bell, and so forth), were carved up into seven blocks called Regional Bell Operating Companies (RBOCs).
Working clockwise around the country, they were NYNEX in the northeast, Bell Atlantic, BellSouth, Southwestern Bell, Pacific Telesis, US West, and Ameritech. All of them remained regulated entities with an effective monopoly over local traffic in their region, but were forbidden from entering other telecom markets. AT&T itself retained the “long lines” division for long-distance traffic. Unlike local phone service, however, the settlement opened this market to free competition from any entrant willing and able to pay the interconnection fees to transfer calls in and out of the RBOCs. A residential customer in Indiana would always have Ameritech as their local telephone company, but could sign up for long-distance service with anyone.

However, splitting apart the local and long-distance markets meant forgoing the subsidies that AT&T had long routed to rural telephone subscribers, under-charging them by over-charging wealthy long-distance users. A sudden spike in rural telephone prices across the nation was not politically tenable, so the deal preserved these transfers via a new organization, the non-profit National Exchange Carrier Association, which collected fees from the long-distance companies and distributed them to the RBOCs.

The new structure worked. Two major competitors entered the market in the 1980s, MCI and Sprint, and cut deeply into AT&T’s market share. Long-distance prices fell rapidly. Though it is arguable how much of this was due to competition per se, as opposed to the advent of ultra-high-bandwidth fiber optic networks, the arrangement was generally seen as a great success for de-regulation and a clear argument for the power of market forces to modernize formerly hidebound industries. This market structure, created ad hoc by court fiat but evidently highly successful, provided the template from which Congress drew in the mid-1990s to finally resolve the question of what telecom policy for the Internet era would look like.
Second Time Isn’t The Charm

Prior to the main event, there was one brief preliminary. The High Performance Computing Act of 1991 was important tactically, but not strategically. It advanced no new major policy initiatives. Its primary significance lay in providing additional funding and Congressional backing for what Wolff and the NSF were already doing and intended to keep doing – providing networking services for the research community, subsidizing academic institutions’ connections to NSFNET, and continuing to upgrade the backbone infrastructure.

Then came the accession of the 104th Congress in January 1995. Republicans took control of both the Senate and the House for the first time in forty years, and they came with an agenda to fight crime, cut taxes, shrink and reform government, and uphold moral righteousness. Gore and his allies had long touted universal access as a key component of the National Information Infrastructure, but with this shift in power the prospects for a strong universal service component to telecommunications reform diminished from minimal to none. Instead, the main legislative course would consist of regulatory changes to foster competition in telecommunications and Internet access, with a serving of bowdlerization on the side.

The market conditions looked promising. Circa 1992, the major players in the telecommunications industry were numerous. In the traditional telephone industry there were the seven RBOCs, GTE, and three large long-distance companies – AT&T, MCI, and Sprint – along with many smaller ones. The new up-and-comers included Internet service providers, such as UUNET and PSINet, as well as the IBM/MCI backbone spin-off, ANS; and other companies trying to build out their local fiber networks, such as Metropolitan Fiber Systems (MFS).
BBN, the contractor behind ARPANET, had begun to build its own small Internet empire, snapping up some of the regional networks that orbited around NSFNET – Nearnet in New England, BARRNet in the Bay Area, and SURAnet in the southeast of the U.S.

To preserve and expand this competitive landscape would be the primary goal of the 1996 Telecommunications Act, the only major rewrite of communications policy since the Communications Act of 1934. It intended to reshape telecommunications law for the digital age. The regulatory regime established by the original act siloed industries by their physical transmission medium – telephony, broadcast radio and television, cable TV – each in its own box, with its own rules, and generally forbidden to meddle in the others’ business. As we have seen, sometimes regulators even created silos within silos, segregating the long-distance and local telephone markets. This made less and less sense as media of all types were reduced to fungible digital bits, which could be commingled on the same optical fiber, satellite transmission, or ethernet cable. The intent of the 1996 Act, shared by Democrats and Republicans alike, was to tear down these barriers, these “Berlin Walls of regulation,” as Gore’s own summary of the act put it.13

A complete itemization of the regulatory changes in this doorstopper of a bill is not possible here, but a few examples provide a taste of its character. Among other things, it allowed the RBOCs to compete in long-distance telephone markets, lifted restrictions forbidding the same entity from owning both broadcasting and cable services, and axed the rules that prevented concentration of radio station ownership.

The risk, though, of simply removing all regulation, opening the floodgates and letting any entity participate in any market, was to recreate AT&T on an even larger scale, a monopolistic megacorp that would dominate all forms of communication and stifle all competitors.
Most worrisome of all was control over the so-called last mile – from the local switching office to the customer’s home or office. Building an inter-urban network connecting the major cities of the U.S. was expensive but not prohibitive; several companies had done so in recent decades, from Sprint to UUNET. To replicate all the copper or cable to every home in even one urban area was another matter. Local competition in landline communications had scarcely existed since the early wildcat days of the telephone, when tangled skeins of iron wire criss-crossed urban streets. In the case of the Internet, the concern centered especially on high-speed, direct-to-the-premises data services, later known as broadband. For years, competition had flourished among dial-up Internet access providers, because all the end user required to reach the provider’s computer was access to a dial tone. But this would not be the case by default for newer services that did not use the dial telephone network.

The legislative solution to this conundrum was to create the concept of the “CLEC” – competitive local exchange carrier. The RBOCs, now referred to as “ILECs” (incumbent local exchange carriers), would be allowed full, unrestricted access to the long-distance market only once they had unbundled their networks by allowing the CLECs, which would provide their own telecommunications services to homes and businesses, to interconnect with and lease the incumbents’ infrastructure. This would enable competitive ISPs and other new service providers to continue to get access to the local loop even when dial-up service became obsolete – creating, in effect, a dial tone for broadband. The CLECs, in this model, filled the same role as the long-distance providers in the post-break-up telephone market. Able to freely interconnect at reasonable fees to the existing local phone networks, they would inject competition into a market previously dominated by the problem of natural monopoly.
Besides the creation of the CLECs, the other major part of the bill that affected the Internet addressed the Republicans’ moral agenda rather than their economic one. Title V, known as the Communications Decency Act, forbade the transmission of indecent or offensive material – depicting or describing “sexual or excretory activities or organs” – on any part of the Internet accessible to minors. This, in effect, was an extension of the obscenity and indecency rules that governed broadcasting into the world of interactive computing services.

How, then, did this sweeping act fare in achieving its goals? In most dimensions it proved a failure. Easiest to dispose of is the Communications Decency Act, which the Supreme Court struck down quickly (in 1997) as a violation of the First Amendment. Several parts of Title V did survive review, however, including Section 230, the most important piece of the entire bill for the Internet’s future. It allows websites that host user-created content to exist without the fear of constant lawsuits, and protects the continued existence of everything from giants like Facebook and Twitter to tiny hobby bulletin boards.

The fate of the efforts to promote competition within the local loop took longer to play out, but proved no more successful than the controls on obscenity. What about the CLECs, given access to the incumbent cable and telephone infrastructure so that they could compete on price and service offerings? The law required FCC rulemaking to hash out the details of exactly what kind of unbundling had to be offered. The incumbents fought hard in the courts against any such ruling that would open up their lines to competition, repeatedly winning injunctions against the FCC, while threatening that introducing competitors would halt their imminent plans for bringing fiber to the home.
Then, with the arrival of the Bush Administration and new chairman Michael Powell in 2001, the FCC became actively hostile to the original goals of the Telecommunications Act. Powell believed that the need for alternative broadband access would be satisfied by intermodal competition among cable, telephone, power-line communications, cellular, and wireless networks. No more FCC rules in favor of CLECs would be forthcoming. For a brief time around the year 2000, it was possible to subscribe to third-party high-speed internet access using the infrastructure of your local telephone or cable provider. After that, the most central of the Telecom Act’s pro-competitive measures became, in effect, a dead letter. The much-ballyhooed fiber-to-the-home only began to reach a significant number of homes after 2010, and then only with reluctance on the part of the incumbents.14 As author Fred Goldstein put it, the incumbents had “gained a fig leaf of competition without accepting serious market share losses.”15

During most of the twentieth century, networked industries in the U.S. had sprouted in a burst of entrepreneurial energy and then been fitted into the matrix of a regulatory framework as they grew large and important enough to affect the public interest. Broadcasting and cable television had followed this pattern. So had trucking and the airlines. But with the CLECs all but dead by the early 2000s, the Communications Decency Act struck down, and other attempts to control the Internet such as the Clipper chip16 stymied, the Internet would follow an opposite course. Having come to life under the guiding hand of the state, it would now be allowed to develop in an almost entirely laissez-faire fashion. The NAP framework established by the NSF at the hand-off of the backbone would be the last major government intervention in the structure of the Internet.
This was true at both the transport layer – the networks, such as Verizon and AT&T, that transported raw data – and the applications layer – software services from portals like Yahoo! to search engines like Google to online stores like Amazon. In our last chapter, we will look at the consequences of this fact, briefly sketching the evolution of the Internet in the U.S. from the mid-1990s onward.

1. Quoted in Richard Wiggins, “Al Gore and the Creation of the Internet,” 2000.
2. “Remarks by Vice President Al Gore at National Press Club,” December 21, 1993.
3. Biographical details on Wolff’s life prior to NSF are scarce – I have recorded all of them that I could find here. Notably, I have not been able to find even his date and place of birth.
4. Schrader and PSINet rode high on the Internet bubble in the late 1990s, acquiring other businesses aggressively, and, most extravagantly, purchasing the naming rights to the football stadium of the NFL’s newest expansion team, the Baltimore Ravens. Schrader tempted fate with a 1997 article entitled “Why the Internet Crash Will Never Happen.” Unfortunately for him, it did happen, bringing about his ouster from the company in 2001 and PSINet’s bankruptcy the following year.
5. To get a sense of how fast the cost of bandwidth was declining – in the mid-1980s, leasing a T1 line from New York to L.A. cost $60,000 per month. Twenty years later, an OC-3 circuit with 100 times the capacity cost only $5,000, more than a thousand-fold reduction in price per unit of capacity. See Fred R. Goldstein, The Great Telecom Meltdown, 95-96. Goldstein states that the 1.544 Mbps T1/DS1 line has 1/84th the capacity of OC-3, rather than 1/100th, a discrepancy I can’t account for. But this has little effect on the overall math.
6. Office of Inspector General, “Review of NSFNET,” March 23, 1993.
7. Frazer, “NSFNET: A Partnership for High-Speed Networking, Final Report,” 27.
8. Brian Kahin, “RFC 1192: Commercialization of the Internet Summary Report,” November 1990.
9. John Markoff, “Data Network Raises Monopoly Fear,” New York Times, December 19, 1991.
10. Though many other technical details had to be sorted out; see Susan R. Harris and Elise Gerich, “Retiring the NSFNET Backbone Service: Chronicling the End of an Era,” ConneXions, April 1996.
11. The most problematic part of privatization proved to have nothing to do with the hardware infrastructure of the network, but instead with handing over control of the domain name system (DNS). For most of its history, its management had depended on the judgment of a single man – Jon Postel. But businesses investing millions in a commercial internet would not stand for such an ad hoc system. So the government handed control of the domain name system to a contractor, Network Solutions. The NSF had no real mechanism for regulatory oversight of DNS (though they might have done better by splitting the control of different top-level domains (TLDs) among different contractors), and Congress failed to step in to create any kind of regulatory regime. Control changed once again in 1998 to the non-profit ICANN (Internet Corporation for Assigned Names and Numbers), but the management of DNS still remains a thorny problem.
12. The only quasi-exception to this focus on fostering competition was a proposal by Senator Daniel Inouye to reserve 20% of Internet traffic for public use: Steve Behrens, “Inouye Bill Would Reserve Capacity on Infohighway,” Current, June 20, 1994. Unsurprisingly, it went nowhere.
13. Al Gore, “A Short Summary of the Telecommunications Reform Act of 1996.”
14. Jon Brodkin, “AT&T kills DSL, leaves tens of millions of homes without fiber Internet,” Ars Technica, October 5, 2020.
15. Goldstein, The Great Telecom Meltdown, 145.
16. The Clipper chip was a proposed hardware backdoor that would give the government the ability to bypass any U.S.-created encryption software.

Further Reading

Janet Abbate, Inventing the Internet (1999)
Karen D. Frazer, “NSFNET: A Partnership for High-Speed Networking, Final Report” (1996)
Shane Greenstein, How the Internet Became Commercial (2015)
Yasha Levine, Surveillance Valley: The Secret Military History of the Internet (2018)
Rajiv Shah and Jay P. Kesan, “The Privatization of the Internet’s Backbone Network,” Journal of Broadcasting & Electronic Media (2007)

Interactive Computing: A Counterculture

In 1974, Ted Nelson self-published a very unusual book. Nelson lectured on sociology at the University of Illinois at Chicago to pay the bills, but his true calling was as a technological revolutionary. In the 1960s, he had dreamed up a computer-based writing system which would preserve links among different documents. He called the concept “hypertext” and the system to realize it (always half-completed and just over the horizon) “Project Xanadu.” He had become convinced in the process that his fellow radicals had computers all wrong, and he wrote his book to explain why. Among the activist youth of the 1960s counterculture, the computer had a wholly negative image as a bureaucratic monster, the most advanced technology yet for allowing the strong to dominate the weak. Nelson agreed that computers were mostly used in a brutal way, but offered an alternative vision for what the computer could be: an instrument of liberation. His book was really two books bound together, each with its own front cover—Computer Lib and Dream Machines—allowing the book to be read from either side until the two texts met in the middle. Computer Lib explained what computers are and why it is important for everyone to understand them, and Dream Machines explained what they could be, when fully liberated from the tyranny of the “priesthood” that currently controlled not only the machines themselves, but all knowledge about them. “I have an axe to grind,” Nelson wrote,

I want to see computers useful to individuals, and the sooner the better, without necessary complication or human servility being required. … THIS BOOK IS FOR PERSONAL FREEDOM AND AGAINST RESTRICTION AND COERCION. … A chant you can take to the streets: COMPUTER POWER TO THE PEOPLE!
DOWN WITH CYBERCRUD![1]

If the debt Nelson’s cri de coeur owed to the 1960s counterculture wasn’t clear enough, Nelson made it explicit by listing his “Counterculture Credentials” as a writer, showman, “Onetime seventh-grade dropout,” “Attendee of the Great Woodstock Festival,” and more, including his astrological sign.[2]

The front covers of Ted Nelson’s “intertwingled” book, Computer Lib / Dream Machines.

Nelson’s manifesto is the most powerful piece of evidence for one popular way to tell the story of the rise of the personal computer: as an outgrowth of the 1960s counterculture. Surely more than geographical coincidence accounts for the fact that Apple Computer was born on the shores of the same bay where, not long before, Berkeley radicals had protested and Haight-Ashbury deadheads had partied? The common through line of personal liberation is clear, and Nelson was not the only countercultural figure who wanted to bring computer power to the people. Lee Felsenstein, a Berkeley engineering drop-out (and then eventual graduate) with much stronger credentials in radical politics than Nelson, invested much of his time in the 1970s in projects to make computers more accessible, such as Community Memory, which offered a digital bulletin board via public computer terminals set up at several locations in the Bay Area. In Menlo Park, likewise, anyone off the street could come in and use a computer at Bob Albrecht’s People’s Computer Company. Both Felsenstein and Albrecht had clear and direct ties to the early personal computer industry, Felsenstein as a hardware designer and Albrecht as a publisher.
The two most seminal early accounts of the personal computer’s history, Steven Levy’s Hackers: Heroes of the Computer Revolution and Paul Freiberger and Michael Swaine’s Fire in the Valley: The Making of The Personal Computer, both argued that the personal computer came into existence because of people like Felsenstein and Albrecht (whom Levy called long-haired, West Coast “hardware hackers”), and their emphasis on personal liberation through technology. John Markoff extended this argument to book length with What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer. Stewart Brand put it succinctly in a 1995 article in Time magazine: “We Owe It All to the Hippies.”[3]

This story is appealing, but not quite right. The influence of countercultural figures in promoting personal computing was neither necessary, nor sufficient, to explain the sudden explosion of interest in the personal computer caused by the Altair. Not necessary, because the Altair existed primarily because of two people who had nothing to do with the radical left or hippie idealism: the Albuquerque Air Force veteran and electronics lover Ed Roberts, and the New York hobby magazine editor Les Solomon. Not sufficient, because it addresses only supply, not demand: why, when personal computers did become available, were there many thousands of takers out there looking to buy the personal liberation that men like Nelson and Albrecht were selling? These people were not, for the most part, hippies or radicals either. The countercultural narrative seems plausible when one zooms in on the activities happening around the San Francisco Bay, but the personal computer was a national phenomenon; orders for Altairs poured in to Albuquerque from across the country. Where did all of these computer lovers come from?
Getting Hooked

In the 1950s, researchers working at a laboratory affiliated with MIT synthesized an electronic concoction that, in the decades to come, transformed the world. The surprising byproduct of work on an air defense system, it proved to be highly addictive, at least to those of a certain personality type: inquisitive and creative, but also fascinated by logic and mathematics.

The electronic computer, as originally conceived in the 1940s, emulated a room full of human computers. You provided it with a set of instructions for performing a complex series of calculations—a simulation of an atomic explosion, say, or the proper angle and explosive charge required to get an artillery piece to hit a target at a given distance—and then came back later to pick up the result. A “batch-processing” culture of computing developed around this model, where computer users brought a computer program and data to the computer’s operators in the form of punched cards. The operators collected these cards into batches and fed them to the computer for processing, and then later extracted the results on a new set of punched cards. The user then picked up the results and either walked away happy or (more often) noticed an error, scrutinized their program for bugs, made adjustments, and tried again. By the early 1960s, this batch-processing culture had become strongly associated with IBM, which had parlayed its position as the leader in mechanical data-processing equipment into dominance of electronic computing as well. However, the military faced many problems that could not be pre-calculated, and required an instantaneous decision, calling for a “real-time” computer that could provide an answer to one question after another, with seconds or less between each response.
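The practical gulf between the batch turnaround just described and a real-time response measured in seconds is easy to quantify with a toy model. All the numbers below are invented for illustration, not drawn from the historical record:

```python
# A toy model of the debugging loop under the two computing cultures
# described above. Every figure here is a made-up illustration.

def batch_debug_time(attempts, queue_hours=4.0, run_minutes=1.0):
    """Batch processing: each debugging attempt means resubmitting the
    card deck and waiting in the operators' queue before seeing output."""
    per_attempt_hours = queue_hours + run_minutes / 60
    return attempts * per_attempt_hours  # total hours to a working program

def interactive_debug_time(attempts, think_and_type_minutes=5.0, response_seconds=2.0):
    """Interactive computing: the wait per attempt is just the machine's
    response time, so the loop runs as fast as the user can think and type."""
    per_attempt_hours = (think_and_type_minutes + response_seconds / 60) / 60
    return attempts * per_attempt_hours

# Ten tries at fixing a buggy program:
print(batch_debug_time(10))        # roughly forty hours of waiting on the queue
print(interactive_debug_time(10))  # under an hour at the console
```

Even with generous assumptions for the batch queue, the interactive loop is faster by a factor of dozens, which is the quantitative kernel of why hands-on access to the TX-0 felt so different from handing a card deck to an operator.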
The first fusion of real-time problem solving with the electronic computer came in the form of a flight simulator project at MIT under the leadership of electrical engineer Jay Forrester, which, through a series of twists and turns and the stimulus of the Cold War, evolved into an air defense project with the backronym of Semi-Automatic Ground Environment (SAGE). Housed at Lincoln Laboratory, a government facility about thirty miles to the northwest of MIT, SAGE became a mammoth project that spawned an entirely new form of computing as an accidental side effect.

An operator interacting with a SAGE terminal with a light gun.

The SAGE system demanded a series of powerful computers (to be constructed by IBM), two for each of the air defense centers to be built across North America (one acted as a back-up in case the other failed). Each would serve multiple cathode-ray screen terminals showing an image of incoming radar blips, which the operator could select to learn more information and possibly marshal air defense assets against them. At first, the project leads assumed these computer centers would use vacuum tubes, the standard logic component for almost all computers throughout the 1950s. But the invention of the transistor offered the opportunity to make a smaller and more reliable solid-state computer. So, in 1955-56, Wesley Clark and Ken Olsen oversaw the design and construction of a small, experimental transistor-based computer, TX-0, as a proof-of-concept for a future SAGE computer. Another, larger test machine called TX-2 followed in 1957-58.[4] The most historically significant feature of these computers, however, was the fact that, after being completed, they had no purpose. Having proved that they could be built, their continued existence was superfluous to the SAGE project, so these very expensive prototypes became Clark’s private domain, to be used more or less as he saw fit.
Most computers operated in batch-processing mode because it was the most efficient way to use a very expensive piece of capital equipment, keeping it constantly fed with work to do. But Clark didn’t particularly care about that. Lincoln Lab computers had a tradition of hands-on use, going all the way back to the original flight simulator design, which was intended for real-time interaction with a pilot, and Clark believed that real-time access to a computer assistant could be a powerful means for advancing scientific research.[5]

The TX-0 at MIT, likely taken in the late 1950s.

And so, a number of people at MIT and Lincoln Lab got to have the experience of simply sitting down and conversing directly with the TX-0 or TX-2 computer. Many of them got hooked on this interactive mode of computing. The cycle of trying out a program, getting instant feedback from the computer, adjusting, and trying again felt very much like playing a game or solving a puzzle. Unlike in the batch-processing mode of computing that was standard by the late 1950s, in interactive computing the speed at which you got a response from the computer was limited primarily by the speed at which you could think and type. When a user got into the flow, hours could disappear like minutes. J.C.R. Licklider was a psychologist employed to help with SAGE’s interface with its human operators. The experience of interacting with the TX-0 at Lincoln Lab struck him with the force of revelation. He thereafter became an evangelist for the power of interactive computers to multiply human intellectual power via what he called “man-computer symbiosis”:

Men will set the goals and supply the motivations, of course, at least in the early years. They will formulate hypotheses. They will ask questions. They will think of mechanisms, procedures, and models. … The equipment will answer questions.
It will simulate the mechanisms and models, carry out the procedures, and display the results to the operator. It will transform data, plot graphs… In general, it will carry out the routinizable, clerical operations that fill the intervals between decisions.[6]

Ivan Sutherland was another convert: he developed a drafting program called Sketchpad on the TX-2 at Lincoln Lab for his MIT doctoral thesis and later moved to the University of Utah, where he became the founding father of the field of computer graphics. Lincoln also shipped the TX-0, entirely surplus to its needs after the arrival of TX-2, to the MIT Research Laboratory of Electronics (RLE), where it became the foundation—the temple, the dispensary—for a new “hacker” subculture of computer addicts, who would finagle every spare minute they could on the machine, roaming the halls of the RLE well past midnight. The hackers compared the experience of being in total control of a computer to “getting in behind the throttle of a plane,” “playing a musical instrument,” or even “having sex for the first time”: hyperbole, perhaps, similar to Arnold Schwarzenegger’s famous claim about the pleasures of pumping iron.[7] It is worth pausing to note here the extreme maleness of this group: not a single woman is mentioned among the MIT hackers in Steven Levy’s eponymous book on the topic. This is unsurprising, since very few women attended MIT; until 1960 they were technically allowed but not encouraged to enroll. But this severe imbalance of the sexes did not change much with time. Almost all the people who got hooked on computers as interactive computing spread beyond MIT were also men. It was certainly not the case that the computing profession as a whole was overwhelmingly male circa 1960: at that time women probably occupied a third or more of all programming jobs.
But at the time, almost all of those jobs involved neatly coiffed business people running data processing workloads in large corporate or government offices, not disheveled hackers clacking away at a console into the wee hours. For whatever reason, men showed a much greater predilection than women to get lost in the rational yet malleable corridors of the digital world, to enjoy using computers for the sake of using computers. This fact likely produced the eventual transformation of computer science into an overwhelmingly male field, a development we may revisit later in this story. But for now, back to the topic at hand.[8]

Minicomputers: The DIY Computer

While Clark was exploring the potential of computers as a scientific instrument, his engineering partner, Ken Olsen, saw the market potential for selling small computers like the TX-0. Having worked closely with IBM on the SAGE contract, he came away unimpressed by its bureaucratic inefficiency. He thought he could do better, and, with help from one of the first venture capital firms and Harlan Anderson, another Lincoln alum, he went into business. Warned by the head of the firm to avoid the term “computer,” which would frighten investors with the prospect of an expensive uphill struggle against established players like IBM, Olsen called his company Digital Equipment Corporation, or DEC.[9] In 1957, Olsen set up shop in an old textile mill on the Assabet River about a half-hour west of Lincoln Lab. There the company remained until the early 1990s, at the end of Olsen’s tenure and the beginning of the company’s terminal decline. Olsen, an abstemious, church-going Scandinavian, stayed in suburban Massachusetts for nearly all of his adult life; he and his wife lived out their last years with a daughter in Indiana. It is hard to imagine someone who less embodies the free-wheeling sixties counterculture than Ken Olsen.
But his business became the vanguard for and symbol of a computer counterculture, one that would raise a black flag of rebellion against the oppressive regime of IBM-ism and spread the joy of interactive computing far beyond MIT, sprinkling computer addicts across the country. DEC began selling its first computer, the PDP-1 (for Programmed Data Processor), in 1959. Its design bore a fair resemblance to that of the TX-0, and it proved similarly addictive to young hackers when one was donated to MIT in 1961. A whole series of other models followed, but the most ground-breaking was the PDP-8, released in 1965: a computer about the size of a blue USPS collection box, for just $18,000. Not long after, someone (certainly not the straitlaced Olsen) began calling this kind of small computer a minicomputer, by analogy to the newly-popular miniskirt. A DEC ad campaign described PDP-8 computers as “approachable, variable, easy to talk to, personal machines.”

A 1966 advertisement depicting various PDP-8 models juxtaposed with cuddly teddy bears. [Datamation, October 1966]

Up to that point, the small, relatively inexpensive computers that did exist typically stored their short-term memory on the magnetized surface of a spinning mechanical drum. This put a hard ceiling on how fast they could calculate. But the PDP-8 used fast magnetic core memory, bringing high-speed electronic computing within reach of even quite small science and engineering firms, departments, and labs. PDP-8s were also deployed as control systems on factory floors, and even placed on a tractor.
They sold in large numbers, for a computer—50,000, all told, over a fifteen-year lifespan—and became hugely influential, spawning a whole industry of competing minicomputer makers, and later inspiring the design of Intel’s 4004 microprocessor.[10] In the early 1960s, IBM, under Thomas Watson, Jr., established itself as the dominant manufacturer of mainframe computers in the United States (and therefore, in effect, the world). Its commissioned sales force cultivated deep relationships with customers, which lasted well beyond the closing of the deal. IBM users leased their machines on a monthly basis, and in return they got access to an extensive support and service network, a wide array of peripheral devices (many of which derived from IBM’s pre-existing business as a maker of punched-card processing machinery), system software, and even application software for common business needs like payroll and inventory tracking. IBM expected their mainframe customers to have a dedicated data processing staff, independent from the actual end users of the computer, people responsible for managing the computer’s hardware and software and their firm’s ongoing relationship with IBM.[11] DEC culture dispensed with all of that; it became a counterculture, representing everything that IBM was not. Olsen expected end users to take full ownership of their machine in every sense. The typical buyer was expected to be an engineer or scientist: an expert on their own needs, who could customize the system for their application, write their own software, and administer the machine themselves. IBM, too, had technical staff with the interest and skills needed to build interactive systems. Andy Kinslow, for example, led a time-sharing project (more on time-sharing shortly) at IBM in the mid-1960s; he wanted to give engineers like himself that hands-on-the-console experience that the MIT hackers had fallen in love with.
But the eventual product, TSS/360, had serious technical limitations at launch in 1967, and was basically ignored by IBM afterwards.[12] This came down to culture: IBM’s product development and marketing were driven by the needs of their core data-processing customers, who wanted more powerful batch-processing systems with better software and peripheral support, not by the interests of techies and academics who wanted hands-on computer systems and didn’t mind getting their hands dirty. And so, the latter bought from DEC and other smaller outfits. As an employee of Scientific Data Systems (another successful computer startup of the 1960s) put it:

There was, of course, heavy spending on scientific research throughout the sixties, and researchers weren’t like the businessmen getting out the payroll. They wanted a computer, they were enchanted with what we had, they loved it like Ferrari or a woman. They were very forgiving. If the computer was temperamental you’d forgive it, the way you forgive a beautiful woman.[13]

DEC customers included federally-funded laboratories, engineering firms, technical divisions of major corporate conglomerates, and, of course, universities. They worked predominantly on real-time projects in which a computer interacted directly with human users or some kind of industrial or scientific equipment: doing on-demand engineering calculations for a chemical manufacturer, controlling tracing machinery for physics data analysis, administering experiments for psychological research, and more.[14] They shared knowledge and software through a community organization called DECUS, the Digital Equipment Computer Users’ Society. IBM users had founded a similar organization, SHARE, in 1955, but it had a different culture from the start, one that derived from the data-processing orientation of IBM.
SHARE’s structure assumed that each participating organization had a computing center, distinct from its other operational functions, and it was the head of that computing center who would participate in SHARE and collaborate with other sites on building systems software (operating systems, assemblers, and the like). The end users of computers, who worked outside the computing center, could not participate in SHARE at all, in the beginning. At most DEC sites, no such distinction between users and operators existed.[15] My father, a researcher specializing in computerized medical records, was part of the DEC culture, and co-authored at least one paper for DECUS: CJ McDonald and B Bhargava, “Ambulatory Care Information Systems Written in BASIC-Plus,” DECUS Proceedings (Fall 1973).

Here he is pictured at top left, in 1973, in the terminal room for his research institute’s PDP-11. [Regenstrief Institute]

DECUS, like SHARE, maintained an extensive program library: for reading and writing to peripheral devices, assembling and compiling human-readable code into machine language, debugging running programs, calculating math functions not supported by hardware (e.g., trigonometric functions, logarithms, and exponents), and more.
Maintaining the library required procedures for reviewing and distributing software: in 1963, for example, users contributed fifty programs, most of which were reviewed by at least two other users, and seventeen of which were certified by the DECUS Programming Committee.[16] Aflame with the possibility that interactive computing would revolutionize their fields of expertise, from education to clinical medicine, DEC devotees sometimes let their reach exceed their grasp: at one DECUS meeting, Air Force doctor Joseph Mundie reminded “the computer enthusiasts,” with gentle understatement, “that even the PDP computer had a few shortcomings when making medical diagnoses.”[17] Though none achieved the market share of DEC, a number of competing minicomputer makers also flourished in the late 1960s in the wake of the PDP-8. They included start-ups like Data General (founded by defectors from DEC, just up the Assabet River in Hudson, Massachusetts), but also established electronics firms like Honeywell, Hewlett-Packard, and Texas Instruments. Many thousands of units were sold, exposing many more thousands of scientists and engineers to the thrill of getting their hands dirty on a computer in their own lab or office. Even among the technical elite at MIT, administrators had considered the hackers’ playful antics with the TX-0 and PDP-1 in the late 1950s and early 1960s a grotesque “misappropriation of valuable machine time.” But department heads acquiring a small ten- or twenty-thousand-dollar computer had much less reason to worry about wastage of spare cycles, and even if they did, most lacked a dedicated operational staff to oversee the machine and ensure its efficient use. Users were left to decide for themselves how to use the computer, and they generally favored their own convenience: hands on, interactive, at the terminal.
But even while minis were allowing thousands of ordinary scientists and engineers to enjoy the thrill of having an entire computer at their disposal, another technological development began spreading a simulacrum of that experience among an even wider audience.[18]

Time-Sharing: Spreading The Love

As we have already seen, a number of people got hooked on interactive computing in and around MIT by 1960, well before the PDP-8 and other cheaper computers became available. Electronic computers could perform millions of operations per second, but in interactive mode, all of that capacity sat unused while the human at the console was thinking and typing. Most administrators—those with the responsibility for allocating limited organizational budgets—recoiled at the idea of allowing a six- or seven-figure machine to sit around idle, wasting that potential processing power, just to make the work of engineers and scientists a bit more convenient. But what if it wasn’t wasted? If you attached four, or forty, or four hundred terminals to the same computer, it could process the input from one user while waiting for the input from the others, or even process offline batch jobs in the interim between interactive requests. From the point of view of a given terminal user, as long as the computer was not overloaded with work, it would still feel as if they had interactive access to their own private machine. The strongest early proponent of this idea of time-sharing a computer was John McCarthy, a mathematician and a pioneer in artificial intelligence, who came from Dartmouth College to MIT primarily to get closer access to a computer (Dartmouth had no computer of its own at the time).
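The arithmetic behind that intuition is easy to sketch. Here is a short, hypothetical program (written in BASIC, a language we will meet shortly in this story; all of the numbers are invented for illustration) that computes what fraction of a machine’s time N users would occupy, assuming each one thinks for fifteen seconds between requests and each request needs only a twentieth of a second of computation:

```basic
10 REM FRACTION OF A MACHINE KEPT BUSY BY N INTERACTIVE USERS
20 REM EACH THINKS 15 SECONDS, THEN NEEDS 0.05 SECONDS OF COMPUTE
30 LET T = 15
40 LET C = 0.05
50 FOR J = 1 TO 4
60 READ N
70 LET U = N * C / (T + C)
80 IF U <= 1 THEN 100
90 LET U = 1
100 PRINT N; U
110 NEXT J
120 DATA 1, 4, 40, 400
130 END
```

Under these invented assumptions, a lone user keeps the machine busy less than one percent of the time, and forty users only about an eighth of it; not until the number of simultaneous users climbs into the hundreds does the machine saturate and responsiveness begin to suffer.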
Unsatisfied with the long turnaround that batch-processing imposed on his exploratory programming, he proposed time-sharing as a way of squaring interactive computing with the other demands on MIT’s computation center.[19] McCarthy’s campaigning eventually led an MIT team under Fernando “Corby” Corbató to develop the Compatible Time-Sharing System (CTSS)—so-called because it could operate concurrently with the existing batch-processing operations on the Computation Center’s IBM computer. McCarthy also directed the construction of a rudimentary time-sharing system on a PDP-1 at Bolt, Beranek, and Newman, a consulting firm with close ties to MIT. This proved that a computer far less powerful than an IBM mainframe could also support time-sharing (albeit on a smaller scale), and indeed even PDP-8s would later host their own time-sharing systems: a PDP-8 could support up to twenty-four separate terminals, if configured with sufficient memory.[20] The most important next steps taken to extend the reach of time-sharing specifically, and interactive computing generally, occurred at McCarthy’s former employer, Dartmouth. John Kemeny, head of the Dartmouth math department, enlisted Thomas Kurtz, a fellow mathematician and liaison to MIT’s Computation Center, to build a computing center of their own at Dartmouth. But they would do it in a very different style. Kemeny was one of several brilliant Hungarian Jews who fled to the U.S. to avoid Nazi persecution. Though of a younger generation than his more famous counterparts such as John von Neumann, Eugene Wigner, and Edward Teller, he stood out enough as a mathematician to be hired onto the Manhattan Project as a mere Princeton undergraduate in 1943. His partner, Kurtz, came from the Chicago suburbs, but also passed through Princeton’s elite math department, as a graduate student.
He began doing numerical analysis on computers right out of college in the early 1950s, and his loyalties lay more with the nascent field of computer science than with traditional mathematics.

Kurtz (left) and Kemeny (right), inspecting a GE flyer for a promotional shot.

The pair started in the early 1960s with a small drum-based Librascope LGP-30 computer, operated in a hands-on, interactive mode. By this time both men were convinced that computers had acquired a civilizational import that would only grow. Having now also seen undergraduates write successful programs in LGP-30 assembly, they became convinced that understanding and programming computers should be a required component of a liberal education. This kind of expansive thinking about the future of computing was not unusual at the time: other academics were writing about the impact of computers on libraries, education, commerce, privacy, politics, and law. As early as 1961, John McCarthy was giving speeches about how time-sharing would lead to an all-encompassing computer utility that would offer a wide variety of electronic services to home and office terminals via the medium of the telephone network.[21] Kurtz proposed that a new, more powerful computer be brought to Dartmouth and time-shared (at the suggestion of McCarthy), with terminals directly accessible to all undergraduates: the computer equivalent of an open-stack library. Kemeny applied his political skills (which would eventually bring him the presidency of the university) to sway Dartmouth’s leaders, while Kurtz secured grants from the NSF to cover the costs of a new machine.
General Electric, which was trying to elbow its way into IBM’s market, agreed to a 60% discount on the two computers Kemeny and Kurtz wanted: a GE-225 mainframe for executing user programs and a Datanet-30 (designed as a message-switching computer for communication networks) for exchanging data between the GE-225 and the user terminals. They called the combined system the Dartmouth Time-Sharing System (DTSS). It did not benefit Dartmouth students alone: the university became a regional time-sharing hub through which students at other New England colleges and even high schools got access to computing via remote terminals connected to DTSS by telephone; by 1971 this included fifty schools in all, encompassing a total user population of 13,000.[22]

Kemeny teaching Dartmouth students about the DTSS system in a terminal room.

Beyond this regional influence, DTSS made two major contributions of wider significance to the later development of the personal computer. First was a new programming language called BASIC. Though some students had proved apt with machine-level assembly language, it was certainly too recondite for most. Both Kemeny and Kurtz agreed that to serve all undergraduates, DTSS would need a more abstract, higher-level language that students could compile into executable code. But even FORTRAN, the most popular language of the time in science and engineering fields, lacked the degree of accessibility they strove for. As Kurtz later recounted, by way of example, it had an “almost impossible-to-memorize convention for specifying a loop: ‘DO 100, I = 1, 10, 2’. Is it ‘1, 10, 2’ or ‘1, 2, 10’, and is the comma after the line number required or not?” They devised a more approachable language, implemented with the help of some exceptional undergraduates.
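For comparison, a short, hypothetical program in the style of early Dartmouth BASIC (invented for illustration, not drawn from any historical listing) shows how their alternative read:

```basic
10 REM ADD UP THE ODD NUMBERS FROM 1 TO 9
20 LET S = 0
30 FOR I = 1 TO 10 STEP 2
40 LET S = S + I
50 NEXT I
60 PRINT "THE SUM IS"; S
70 END
```

Every statement begins with a line number and an ordinary English keyword, and the loop bounds appear in the order one would say them aloud.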
The equivalent BASIC loop syntax, FOR I = 1 TO 10 STEP 2, demonstrates the signature feature of the language: the use of common English words to create a syntax that reads somewhat like natural language.[23] The second contribution was DTSS’ architecture itself, which General Electric borrowed to set up its own time-sharing services, not once, but twice: the GE-235 and Datanet-30 architecture became GE’s Mark I time-sharing system, and a later DTSS design based on the GE-635 became GE’s Mark II time-sharing system. By 1968, many firms had set up time-sharing computer centers to which customers could connect computer terminals over the telephone network, paying for time by the hour. Over 40% of this $70 million market (comprising tens of thousands of users) belonged to GE and its Dartmouth-derived systems. The paying customers included Lakeside School in Seattle, whose Mother’s Club raised the funds in 1968 to purchase a terminal with which to access a GE time-sharing center. Among the students exposed to programming in BASIC at Lakeside were eighth-grader Bill Gates and tenth-grader Paul Allen.[24]

Architecture of the second-generation DTSS system at Dartmouth, circa 1971.

GE’s marketing of BASIC through its time-sharing network accelerated the language’s popularity, and BASIC implementations followed for other manufacturers’ hardware, including DEC’s and even IBM’s. By the 1970s, helped along by GE, BASIC had established itself as the lingua franca of the interactive computing world. And what BASIC users craved, above all, were games.[25]

A Culture of Play

Everywhere that the culture of interactive computing went, play followed. This came in the obvious form of computer games, but also in a general playful attitude towards the computer, with users treating the machine as a kind of toy and the act of programming and using it as an end in itself, rather than a means towards accomplishing serious business.
The most famous instance of this culture of play in the early years of MIT hacking came in the form of the contest of reflexes and wills known as Spacewar!. The PDP-1 was unusual for its time in having a two-dimensional graphical display in the form of a circular cathode-ray-tube (CRT) screen. Until the mid-1970s, most people who interacted with computers did so via a teletype. Originally invented for two-way telegraphic messaging, these machines could take in user input like a normal typewriter, send that input over the wire to a remote recipient (the computer in this case), and then automatically type out the characters received over the wire in response. Because of its origins in the SAGE air defense program, however, the MIT PDP-1 also came equipped with a screen designed for radar displays. The MIT hackers had already exercised their playfulness in the form of several earlier games and graphical demos on the TX-0, but it was a hanger-on with no official university affiliation named Stephen “Slug” Russell who created the initial version of Spacewar!, inspired by the space romances of E.E. “Doc” Smith. The game reached a useable form by about February 1962, allowing two players controlling rocket ships to battle across the screen, hurling torpedoes at one another’s spaceships. Other hackers quickly added enhancements, however: a star background that matched Earth’s actual night sky, a sun with gravity, hyperspace warps to escape danger, a score counter, and more. The resulting game was visually exciting, tense, and skill-testing, encouraging the MIT hackers to spend many late nights blasting each other out of the cosmos.[26] Spacewar!’s dependence on a graphical display limited its audience, but Stanford became a hotbed of Spacewar! after John McCarthy moved there in 1962, and its use is also well-attested at the University of Minnesota. 
In 1970, Nolan Bushnell started his video game business (originally called Syzygy, later Atari) to create an arcade console version of the game, which he called Computer Space. The game’s influence lasted into the 1990s, with the release of the game Star Control and its epic sequel (The Ur-Quan Masters), which introduced the classic duel around a star to my generation of hobbyists.[27] The large majority of minicomputer users who lacked a screen did not, however, lack for games. Teletype games relied on text input and output, but could be just as compelling, ranging from simple guessing games up to rich strategy games like chess. Enthusiasts exchanged paper tapes among themselves, but DECUS also helped to spread information about games and game programs among the DEC user base. The very first volume of the DECUS newsletter, DECUSCOPE, from 1962, contains an homage to Spacewar!, and a simple dice game appeared in the program library available to all members in 1964. By November 1969, the DECUS software catalog listed thirty-seven games and demos, including simple games like hangman and blackjack, but also more sophisticated offerings like Spacewar! and The Sumer Game, a Bronze Age resource-management simulation. The catalog of scientific and engineering applications, the primary reason for most owners to have a minicomputer in the first place, numbered fifty-eight.[28] Playfulness could also be expressed in forms other than actual games. The MIT hackers, for example, wrote a variety of programs simply for the fun of it: a tinny music generator, an Arabic to Roman numeral converter, an “Expensive Desk Calculator” for doing simple arithmetic on the $120,000 PDP-1, an “Expensive Typewriter” for composing essays.
Using the computer to efficiently achieve some real-world outcome did not necessarily enter their minds: many worked on tools for writing and debugging programs without much thought to using the tools for anything other than more play; often “the process of debugging was more fun than using a program you’d debugged.” As the interactive computing culture expanded from minicomputers to time-sharing systems, fewer and fewer of its acolytes had the heightened taste and technical skill required to extract joy from the creation of compilers and debuggers; but many of these new users could create computer games in BASIC, and all could play them. By about 1970, BASIC gaming had become by far the most widespread culture of computer-based play (though not the only one; the University of Illinois / Control Data Corporation PLATO system, for example, constituted its own, distinct sub-culture). As with the earlier minicomputer teletype games, almost all of these BASIC games had textual interfaces, because hardly anyone yet had access to a graphical display. Dave Ahl, who worked at DEC as an educational marketing manager, began including code listings for BASIC games in his promotional newsletter, EDU. Some were of his own creation (like a conversion of The Sumer Game called Hammurabi), others were contributed by high school and college students using DEC systems at school. They proved so popular that DEC published a compilation in 1973, 101 BASIC Computer Games, which went through three printings. After leaving the company, Ahl wisely retained the rights, and went on to sell over a million copies to computer buyers in the 1980s.[29] While many of these games were derivative of existing board or card games, others, like Spacewar!, created whole new forms of play, unique to the computer. Unlike Spacewar!, most of these were single-player experiences that relied on the computer to hide information, gradually revealing a novel world to the user as they explored.
Hide and Seek, for example, a simple game written by high school students about searching a grid for a group of hiders, evolved into a more complex searching game called Hunt the Wumpus, with many later variants. Computer addicts overlapped substantially with Star Trek fans, and so a genre of Star Trek strategy games also emerged. The most popular version, in which the player hunts Klingons across the randomly-populated quadrants of the galaxy, originated with Mike Mayfield, an engineer who originally wrote it for a Hewlett-Packard (HP) minicomputer (presumably the one he used at work). DECUS was not the only organization sharing program libraries, and Mayfield’s Star Trek became part of the HP library, whence it found its way to Ahl, who converted it to BASIC. Other versions followed, such as Bob Leedom’s 1974 Super Star Trek.[30] The practices of the BASIC gaming community made it very easy for gaming lineages to evolve in this way, because every game was distributed textually, as BASIC code. If you were lucky, you got a paper or magnetic tape from which you could automatically read the code into your computer’s memory. If not (if you wanted to try out a game from Ahl’s book, for example), you were in for hours of tedious and error-prone typing. But in either case, you had total access to the raw source code. You could read it, understand it, and modify it. If you wanted to make Ahl’s Star Trek slightly easier, you could modify the phaser subroutine on line 3790 to do more damage. If you were more ambitious, you could go to line 1270 and add a new command to the main menu—make an inspiring speech to the crew, perhaps?

A selection of the code listing for Civil War, a simulation game created by high school students in Lexington, Massachusetts in 1968, and included in Ahl’s 101 BASIC Computer Games book. Typing something like this into your own computer required a great deal of patience.
[Ahl, 101 BASIC Computer Games, 81]

Perhaps the most prolific game author of the era, Don Daglow, got hooked on a DEC PDP-10 in 1971 through a time-sharing terminal installed in his dorm at Pomona College, east of Los Angeles. Over the ensuing years he authored his own version of Star Trek, a baseball game, a dungeon-exploration game based on Dungeons & Dragons, and more. His long career as a game author owed much to his equally long tenure at Pomona, where he had consistent access to the computer: nine years in total as an undergraduate, graduate student, and then instructor.[31] By the early 1970s, many thousands of people like Daglow had discovered the malleable digital world that lived inside of computers. If you could master its rules, it became an infinite erector set, out of which you could reconstruct an ancient, long-dead civilization, or fashion a whole galaxy full of hostile Klingons. But unlike Daglow, most of these computer lovers were kept at arm’s length from the object of their desire. Perhaps they could use the university computer at night as undergraduates, but lost that privilege upon graduation a few years later. Perhaps they could afford to rent a few hours of access to a time-sharing service each week; perhaps they could visit a community computing center (like Bob Albrecht’s in Menlo Park); perhaps, like Mike Mayfield, they could cadge a few hours on the office computer for play after hours. But best of all would be a computer at home, to call their own, to use whenever the impulse struck. Out of such longings came the demand for the personal computer. Next time we will look in detail at the story of how that demand was satisfied, and by whom.

Coda: Steam’s Last Stand

In the year 1900, automobile sales in the United States were divided almost evenly among three types of vehicles: automakers sold about 1,000 cars powered by internal combustion engines, but over 1,600 powered by steam engines, and almost as many by batteries and electric motors. Throughout all of living memory (at least until the very recent rise of electric vehicles), the car and the combustion engine have gone hand in hand, inseparable. Yet, in 1900, this type claimed the smallest share.

For historians of technology, this is the most tantalizing fact in the history of the automobile, perhaps the most tantalizing fact in the history of the industrial age. It suggests a multiverse of possibility, a garden of forking, ghostly might-have-beens. It suggests that, perhaps, had this unstable equilibrium tipped in a different direction, many of the negative externalities of the automobile age—smog, the acceleration of global warming, suburban sprawl—might have been averted. It invites the question: why did combustion win? Many books and articles, by both amateur and professional historians, have been written to attempt to answer this question.

However, since the electric car, interesting as its history certainly is, has little to tell us about the age of steam, we will consider here a narrower question—why did steam lose? The steam car was an inflection point where steam power, for so long an engine driving technological progress forward, instead yielded the right-of-way to a brash newcomer. Steam began to look like a relic of the past, reduced to watching from the shoulder as the future rushed by. For two centuries, steam strode confidently into one new domain after another: mines, factories, steamboats, railroads, steamships, electricity. Why did it falter at the steam car, after such a promising start?

The Emergence of the Steam Car

Though Germany had given birth to experimental automobiles in the 1880s, the motor car first took off as a successful industry in France.
Even Benz, the one German maker to see any success in the early 1890s, sold the majority of its cars and motor-tricycles to French buyers. This was in large part due to the excellent quality of French cross-country roads – though mostly gravel rather than asphalt, they were financed by taxes and overseen by civil engineers, and well above the typical European or American standard of the time.

These roads…made it easier for businessmen [in France] to envisage a substantial market for cars… They inspired early producers to publicize their cars by intercity demonstrations and races. And they made cars more practical for residents of rural areas and small towns.[1]

The first successful motor car business arose in Paris, in the early 1890s. Émile Levassor and René Panhard (both graduates of the École centrale des arts et manufactures, an engineering institute in Paris) met as managers at a machine shop that made woodworking and metal-working tools. They became the leading partners of the firm and took it into auto making after becoming licensees for the Daimler engine.

The 1894 Panhard & Levassor Phaeton already shows the beginning of the shift from horseless carriages with an engine under the seats to the modern car layout with a forward engine compartment. [Jörgens.mi / CC BY-SA 3.0]

Before making cars themselves, they looked for other buyers for their licensed engines, which led them to a bicycle maker near the Swiss border, Peugeot Frères Aînés, headed by Armand Peugeot. Though bicycles seem very far removed from cars today, they made many contributions to the early growth of the auto industry. The 1880s bicycle boom (stimulated by the invention of the chain-driven “safety” bicycle) seeded expertise in the construction of high-speed road vehicles with ball bearings and tubular metal frames.
Many early cars resembled bicycles with an additional wheel or two, and chain drives for powering the rear wheels remained popular throughout the first few decades of automobile development. Cycling groups also became very effective lobbyists for the construction of smooth cross-country roads on which to ride their machines, literally paving the way for the cars to come.[2]

Armand Peugeot decided to purchase Daimler engines from Panhard et Levassor and make cars himself. So, already by 1890 there were two French firms making cars with combustion engines. But French designers had not altogether neglected the possibility of running steam vehicles on ordinary roads. In fact, before ever ordering a Daimler engine, Peugeot had worked on a steam tricycle with the man who would prove to be the most persistent partisan of steam cars in France, Léon Serpollet.

A steam-powered road vehicle was not, by 1890, a novel idea. It had been proposed countless times, even before the rise of steam locomotives: James Watt himself had first developed an interest in engines, all the way back in the 1750s, after his friend John Robison suggested building a steam carriage. But those who had tried to put the idea into practice had always found the result wanting. Among the problems were the bulk and weight of the engine and all its paraphernalia (boiler, furnace, coal), the difficulty of maintaining a stoked furnace and controlling steam levels (including preventing the risk of boiler explosion), and the complexity of operating the engine. The only kinds of steam road vehicles to find any success were those that inherently required a lot of weight, bulk, and specialized training to operate—fire engines and steamrollers—and even those only appeared in the second half of the nineteenth century.[3]

Consider Serpollet’s immediate predecessor in steam carriage building, the debauched playboy Comte Albert de Dion.
He commissioned two toymakers, Georges Bouton and Charles Trépardoux, to make several small steam cars in the 1880s. These coal-fueled machines took thirty minutes or more to build up a head of steam. In 1894, a larger De Dion steam tractor finished first in one of the many cross-country auto races that had begun to spring up to help carmakers promote their vehicles. But the judges disqualified de Dion’s vehicle on account of its impracticality: requiring both a driver and a stoker for its furnace, it was in a very literal sense a road locomotive. A discouraged Comte de Dion gave up the steam business, but De Dion-Bouton went on to be a successful maker of combustion automobiles and automobile engines.[4]

This De Dion-Bouton steam tractor was disqualified from an auto race in 1894 as impractical.

Coincidentally enough, Léon Serpollet and his brother Henri were, like Panhard and Levassor, makers of woodworking machines, and like Peugeot, they came from the Swiss borderlands in east-central France. Also like Panhard and Levassor, Léon studied engineering in Paris, in his case at the Conservatoire national des arts et métiers. But by the time he reached Paris, he and his brother had already concocted the invention that would lead them to the steam car: a “flash” boiler that instantly turned water to steam by passing it through a hot metal tube. This allowed the vehicle to start more quickly (though it still took time to heat the tube before the boiler could be used) and also alleviated safety concerns about a boiler explosion.

The most important step toward the (relative) success of the Serpollets’ vehicles, however, was the replacement of the traditional coal furnace with a burner for liquid, petroleum-based fuel. This went a long way towards removing the most disqualifying objections to the practicality of steam cars.
Kerosene or gasoline weighed less and took up less space than an energy-equivalent amount of coal, and an operator could more easily throttle a liquid-fuel burner (by supplying it with more or less fuel) to control the level of steam.

A 1902 Gardner-Serpollet steam car.

With early investments from Peugeot and a later infusion of cash from Frank Gardner, an American with a mining fortune, the Serpollets built a business, first selling steam buses in Paris, then turning to small cars. Their steam powerplants generated more power than the combustion vehicles of the time, and Léon promoted them by setting speed records. In 1902, he surpassed seventy-five miles per hour along the promenade in Nice. At that time, a Gardner-Serpollet factory in eastern Paris was turning out about 100 cars per year. Though an impressive figure by the standards of the 1890s, this was already becoming small potatoes. In 1901, 7,600 cars were produced in France, and 14,000 in 1903; the growing market left Gardner-Serpollet behind as a niche producer. Léon Serpollet made one last pivot back to buses, then died of cancer in 1907 at age forty-eight. The French steam car did not survive him.[5]

Unlike in the U.S., steam car sales barely took off in France, and never approached parity with the total sales of combustion-engine cars from the likes of Panhard et Levassor, Peugeot, and many other makes. There was no moment of balance when it appeared that the future of automotive technology was up for grabs. Why this difference? We’ll have more to say about that later, after we consider the American side of the story.

The Acme of the Steam Car

Automobile production in the United States lagged roughly five years behind France, and so it was in 1896 that the first small manufacturers began to appear. Charles and Frank Duryea (bicycle makers, again) were first off the block.
Inspired by an article about Benz’s car, they built their own combustion-engine machine in 1893, and, after winning several races, they began selling vehicles commercially out of Peoria, Illinois in 1896. Several other competitors quickly followed.[6]

Steam car manufacturing came slightly later, with the Whitney Motor Wagon Company and the Stanley brothers, both in the Boston area. The Stanleys, twins named Francis and Freelan (or F.E. and F.O.), were successful manufacturers of photographic dry plates, which used a dry emulsion that could be stored indefinitely before use, unlike earlier “wet” plates. They fell into the automobile business by accident, in a similar way to many others—by successfully demonstrating a car they had constructed as a hobby, drawing attention and orders. At an exhibition at the Charles River Park Velodrome in Cambridge, F.E. zipped around the field and up an eighty-foot ramp, demonstrating greater speed and power than any other vehicle present, including an imported combustion-engine De Dion tricycle, which could only climb the ramp halfway.[7]

The Stanley brothers mounted in their 1897 steam car.

The rights to the Stanley design, through a complex series of business details, ended up in the possession of Amzi Barber, the “Asphalt King,” who used tar from Trinidad’s Pitch Lake to pave several square miles’ worth of roads across the U.S.[8] It was Barber’s automobiles, sold under the Locomobile brand, that formed the plurality of the 1,600 steam cars sold in the U.S. in 1900: the company sold 5,000 total between 1899 and 1902, at the quite reasonable price of $600. Locomobiles were quiet and smooth in operation, produced little smoke or odor (though they did breathe great clouds of steam), had the torque required to accelerate rapidly and climb hills, and could accelerate smoothly by simply increasing the speed of the piston, without any shifting of gears.
The rattling, smoky, single-cylinder engines of their combustion-powered competitors had none of these qualities.[9]

Why, then, did the steam car market begin to collapse after 1902? Twenty-seven makes of steam car first appeared in the U.S. in 1899 or 1900, mostly concentrated (like the Locomobile) in the Northeast—New York, Pennsylvania, and (especially) Massachusetts. Of those, only twelve continued making steam cars beyond 1902, and only one—the Lane Motor Vehicle Company of Poughkeepsie, New York—lasted beyond 1905. By that year, the Madison Square Garden car show had 219 combustion models on display, as compared to only twenty electric and nine steam.[10]

Barber, the Asphalt King, was interested in cars, regardless of what made them go. As the market shifted to combustion, so did he, abandoning steam at the height of his own sales in 1902. But the Stanleys loved their steamers. Their contractual obligations to Barber being discharged in 1901, they went back into business on their own. One of the longest-lasting holdouts, Stanley sold cars well into the 1920s (even after the death of Francis in a car accident in 1918), and the name became synonymous with steam. For that reason, one might be tempted to ascribe the death of the steam car to some individual failing of the Stanleys: “Yankee Tinkerers,” they remained committed to craft manufacturing and did not adopt the mass-production “Fordist” methods of Detroit. Already wealthy from their dry plate business, they did not commit themselves fully to the automobile, allowing themselves to be distracted by other hobbies, such as building a hotel in Colorado so that people could film scary movies there.[11]

Some of the internal machinery of a late-model Stanley steamer: the boiler at top left, burner at center left, engine at top right, and engine cutaway at bottom right. [Stanley W.
Ellis, Smogless Days: Adventures in Ten Stanley Steamers (Berkeley: Howell-North Books, 1971), 22]

But, as we have seen, there were dozens of steam car makers, just as there were dozens of makers of combustion cars; no idiosyncrasies of the Stanley psychology or business model can explain the entire market’s shift from one form of power train to another—if anything, it was the peculiar psychology of the Stanleys that kept them making steam cars at all, rather than doing the sensible thing and shifting to combustion. Nor did the powers that be put their finger on the scale to favor combustion engines.[12] How, then, can we explain both the rapid rise of steam in the U.S. (as opposed to its poor showing in France) as well as its sudden fall?

The steam car’s defects were as obvious as its advantages. Most annoying was the requirement to build up a head of steam before you could go anywhere: this took about ten minutes for the Locomobile. Whether starting or going, the controls were complex to manage. Scientific American described the “quite simple” steps required to get a Serpollet car going:

A small quantity of alcohol is used to heat the burner, which takes about five minutes; then by the small pump a pressure is made in the oil tank and the cock opened to the burner, which lights up with a blue flame, and the boiler is heated up in two or three minutes. The conductor places the clutch in the middle position, which disconnects the motor from the vehicle and regulates the motor to the starting position, then puts his foot on the admission pedal, starting the motor with the least pressure and heating the cylinders, the oil and water feed working but slightly.
When the cylinders are heated, which takes but a few strokes of the piston, the clutch is thrown on the full or mean speed and the feed-pumps placed at a maximum, continuing to feed by hand until the vehicle reaches a certain speed by the automatic feed, which is then regulated as desired.[13]

Starting a combustion car of that era also required procedures long since streamlined away—cranking the engine to life, adjusting the carburetor choke and spark timing—but even at the time most writers considered steamers more challenging to operate. Part of the problem was that the boilers were intentionally small (to allow them to build steam quickly and reduce the risk of explosion), which meant lots of hands-on management to keep the steam level just right. Nor had the essential thermodynamic facts changed – internal combustion, operating over a larger temperature gradient, was more efficient than steam. The Model T could drive fifteen to twenty miles on a gallon of fuel; the Stanley could go only ten, not to mention its constant thirst for water, which added another “fueling” requirement.[14]

The rather arcane controls of a 1912 Stanley steamer. [Ellis, Smogless Days: Adventures in Ten Stanley Steamers, 26]

The steam car overcame these disadvantages to achieve its early success in the U.S. because of the delayed start of the automobile industry there. American steam car makers, starting later, skipped straight to petroleum-fueled burners, bypassing all the frustrations of dealing with a traditional coal-fueled firebox, and banishing all associations between that cumbersome appliance and the steam car.

At the same time, combustion automobile builders in the U.S. were still early in their learning curve compared to those in France. A combustion engine was a more complex and temperamental machine than a steam engine, and it took time to learn how to build them well, time that gave steam (and electric) cars a chance to find a market.
The builders of combustion engines, as they learned from experience, rapidly improved their designs, while steam cars improved relatively little year over year. Most importantly, steam cars never could get up and running as quickly as a combustion engine. In one of those ironies which history graciously provides to the historian, the very impatience that the steam age had brought forth doomed its final progeny, the steam car. It wasn’t possible to start up a steam car and immediately drive; you always had to wait for the car to be ready. And so drivers turned to the easier, more convenient alternative, to the frustration of steam enthusiasts, who complained of “[t]his strange impatience which is the peculiar quirk of the motorist, who for some reason always has been in a hurry and always has expected everything to happen immediately.”[15] Later Stanleys offered a pilot light that could be kept burning to maintain steam, but “persuading motorists, already apprehensive about the safety of boilers, to keep a pilot light burning all night in the garage proved a hard sell.”[16] It was too late, anyway. The combustion-driven automotive industry had achieved critical mass.

The Afterlife of the Steam Car

The Ford Model T of 1908 is the most obvious signpost for the mass-market success of the combustion car. But for the moment that steam was left in the dust, we can look much earlier, to the Oldsmobile “curved dash,” which first appeared in 1901 and reached its peak in 1903, when 4,000 were produced, three times the total output of all steam car makers in that pivotal year of 1900. Ransom Olds, son of a blacksmith, grew up in Lansing, Michigan, and caught the automobile bug as a young man in 1887. Like many contemporaries, he built steamers at first (the easier option), but after driving a Daimler car at the 1893 Chicago World’s Fair, he got hooked on combustion.
His Curved Dash (officially the Model R) still derived from the old-fashioned “horseless carriage” style of design, not yet having adopted the forward engine compartment that was already common in Europe by that time. It had a modest single-cylinder, five-horsepower engine tucked under the seats, and an equally modest top speed of twenty miles per hour. But it was convenient and inexpensive enough to outpace all of the steamers in sales.[17]

The Oldsmobile “Curved Dash” was celebrated in song.

The market for steam cars was reduced to driving enthusiasts, who celebrated its near-silent operation (excepting the hiss of the burner), the responsiveness of its low-end torque, and its smooth acceleration without any need for clunky gear-shifting. (There is another irony in the fact that late-twentieth-century driving enthusiasts, disgusted by the laziness of automatic transmissions, would celebrate the hands-on responsiveness of manual shifters.) Steam partisans were offended by the unnecessary complexity of the combustion automobile. They liked to point out how few moving parts the steam car had.[18] To imagine the triumph of steam is to imagine a world in which the car remained an expensive hobby for this type of car enthusiast.

Several entrepreneurs tried to revive the steamer over the years, most notably the Doble brothers, who brought their steam car enterprise to Detroit in 1915, intent on competing head-to-head with combustion. They strove to make a car that was as convenient as possible to use, with a condenser to conserve water, key-start ignition, simplified controls, and a very fast-starting boiler.

But, meanwhile, car builders were steadily scratching off all of the advantages of steam within the framework of the combustion car. Steam cars, like electric cars, did not require the strenuous physical effort to get running that early, crank-started combustion engines did.
But by the second decade of the twentieth century, car makers solved this problem by putting a tiny electric car powertrain (battery and motor) inside every combustion vehicle, to bootstrap the starting of the engine. Steam cars offered a smoother, quieter ride than the early combustion rattletraps, but more precisely machined, multi-cylinder engines with anti-knock fuel canceled out this advantage (the severe downsides of lead as an anti-knock agent were not widely recognized until much later). Steam cars could accelerate smoothly without the need to shift gears, but then car makers created automatic transmissions. In the 1970s, several books advocated a return to the lower-emissions burners of steam cars for environmental reasons, but then car makers adopted the catalytic converter.[19]

It’s not that a steam car was impossible, but that it was unnecessary. Every year more and more knowledge and capital flowed into the combustion status quo, the cost of switching increased, and no sufficiently convincing reason to do so ever appeared. The failure of the steam car was not due to accident, not due to conspiracy, and certainly not due to any individual failure of the Stanleys, but due to the expansion of auto sales to people who cared more about getting somewhere than about the machine that got them there. Impatient people, born, ironically, of the steam age.

The Era of Fragmentation, Part 3: The Statists

In the spring of 1981, after several smaller trials, the French telecommunications administration (Direction générale des Télécommunications, or DGT) began a large-scale videotex experiment in a region of Brittany called Ille-et-Vilaine, named after its two main rivers. This was the prelude to the full launch of the system across l’Hexagone in the following year. The DGT called their new system Télétel, but before long everyone was calling it Minitel, a synecdoche derived from the name of the lovable little terminals that were distributed free of charge, by the hundreds of thousands, to French telephone subscribers. Among all the consumer-facing information service systems in this “era of fragmentation,” Minitel deserves our special attention, and thus its own chapter in this series, for three particular reasons. First, the motive for its creation. Other post, telephone, and telegraph authorities (PTTs) built videotex systems, but no other state invested as heavily in making it a success, nor gave so much strategic weight to that success. Entangled with hopes for a French economic and strategic renaissance, Minitel was meant not just to produce new telecom revenues or generate more network traffic, but to prime the pump for the entire French technology sector. Second, the extent of its reach. The DGT provided Minitel terminals to subscribers free of charge, and levied all charges at time of use rather than requiring an up-front subscription. This meant that, although many subscribers used the system infrequently, more people had access to Minitel than to even the largest American on-line services of the 1980s, despite France’s much smaller population. The comparison to its nearest direct equivalent, Britain’s Prestel, which never broke 100,000 subscribers, is even more stark. Finally, there is the architecture of its backend systems. Every other commercial purveyor of digital services was a monolith, with all services hosted on its own machines.
While they may have collectively formed a competitive market, each of their systems was structured internally as a command economy. Minitel, despite being the product of a state monopoly, was ironically the only system of the 1980s that created a free market for information services. The DGT, acting as an information broker rather than an information supplier, provided one possible model for exiting the era of fragmentation.

Playing Catch Up

It was not by happenstance that the Minitel experiments began in Brittany. In the decades after World War II, the French government had deliberately seeded the region, whose economy still relied heavily upon agriculture and fishing, with an electronics and telecommunications industry. This included two major telecom research labs: the Centre Commun d’Études de Télévision et Télécommunications (CCETT) in Rennes, the region’s capital, and a branch of the Centre National d’Études des Télécommunications (CNET) in Lannion, on the northern coast.

The CCETT lab in Rennes

Themselves a product of an effort to bring a lagging region into the modern era, by the late 1960s and early 1970s these research departments found themselves playing catch up with their peers in other countries. The French phone network of the late 1960s was an embarrassment for a country that, under de Gaulle, wished to see itself as a resurgent world power. It still relied heavily on switching infrastructure built in the first decades of the century, and only 75% of the network was automated by 1967. The rest still depended on manual operators, who had been all but eliminated in the U.S. and the rest of Western Europe. There were only thirteen phones for every 100 inhabitants of France, compared to twenty-one in neighboring Britain, and nearly fifty in the countries with the most advanced telecommunications systems, Sweden and the U.S. France therefore began a massive investment program of rattrapage, or “catch up,” in the 1970s.
Rattrapage ramped up steeply after the 1974 election of Valéry Giscard d’Estaing to the presidency of France, and his appointment of a new director for the DGT, Gérard Théry. Both were graduates of France’s top engineering school, l’École Polytechnique, and both believed in the power of technology to improve society. Théry set about making the DGT’s bureaucracy more flexible and responsive, and Giscard secured 100 billion francs in funding from Parliament for modernizing the telephone network, money that paid for the installation of millions more phones and the replacement of old hardware with computerized digital switches. Thus France dispelled its reputation as a sad laggard in telephony. But in the meantime new technologies had appeared in other nations that took telecommunications in new directions – videophone, fax, and the fusion of computer services with communication networks. The DGT wanted to ride the crest of this new wave, rather than having to play catch up again. In the early 1970s, Britain announced two separate teletext systems, which would deliver rotating screens of data to television sets in the blanking intervals of television broadcasts. CCETT, the DGT’s joint venture with France’s television broadcaster, the Office de radiodiffusion-télévision française (ORTF), launched two projects in response. DIDON1 was modeled closely on the British television broadcasting model, but ANTIOPE2 took a more ambitious tack, investigating the delivery of screens of text independently of the communications channel.

Bernard Marti in 2007

Bernard Marti headed the ANTIOPE team in Rennes. He was yet another polytechnicien (class of 1963), and had joined CCETT from the ORTF, where he specialized in computer animation and digital television. In 1977, Marti’s team merged the ANTIOPE display technology with ideas borrowed from CNET’s TIC-TAC3, a system for delivering interactive digital services over the telephone.
This fusion, dubbed TITAN4, was basically equivalent to the British Viewdata system that later evolved into Prestel. Like ANTIOPE, it used a television to display screens of digital information, but it allowed users to interact with the computer rather than merely receiving data passively. Moreover, both the commands to the computer and the screen data it returned passed over a telephone line, not over the air. Unlike Viewdata, TITAN supported a full alphabetic keyboard, not just a telephone keypad. In order to demonstrate the system at a Berlin trade fair, the team used France’s Transpac packet-switching network to mediate between the terminals and the CCETT computer in Rennes. Marti’s team had assembled an impressive tech demo, but as yet none of it had left the lab, and it had no obvious path to public use.

Télématique

In the fall of 1977, DGT director Gérard Théry, satisfied with how the modernization of the phone network was progressing, turned his attention to the British challenge in videotex. To develop a strategic response, he first looked to CCETT and CNET, where he found TITAN and TIC-TAC prototypes ready to be put to use. He turned these experimental raw materials over to his development office (the DAII) to be molded into products with a clear path to market and a viable business strategy. The DAII recommended pursuing two projects: first, a videotex experiment to test out a variety of services in a town near Versailles, and second, investment in an electronic phone directory, intended to replace the paper phone book. Both would use Transpac as the networking backbone, and TITAN technology for the frontend, with color imagery, character-based graphics, and a full keyboard for input.

An early experimental Télétel setup, before the idea of using the TV as the display was abandoned.

The strategy the DAII devised for videotex differed from Britain’s in three important ways.
First, whereas Prestel hosted all of the videotex content itself, the DGT planned to serve only as a switchboard from which users could reach any number of different privately-hosted service providers, running any type of computer that could connect to Transpac and serve valid ANTIOPE data. Second, they decided to abandon the television as the display unit and go with custom, all-in-one terminals. People bought TVs to watch TV, the DGT leadership reasoned, and would not want to tie up their screen with new services like the electronic phone book. Moreover, cutting the TV set out of the picture meant that the DGT would not have to negotiate over the launch with their counterparts at Télédiffusion de France (TDF), the successor to the ORTF5. Finally, and most audaciously, France cracked the chicken-and-egg problem (that a network without users was unattractive to service providers and vice versa) by planning to lease those all-in-one videotex terminals free of charge.

Despite these bold plans, however, videotex remained a second-tier priority for Théry. When it came to ensuring the DGT’s place at the forefront of communications technology, his focus was on developing the fax into a nationwide consumer service. He believed that fax messaging could take over a huge portion of the market for written communication from the post office, whose bureaucrats the DGT looked upon as hidebound fuddy-duddies. Théry’s priorities changed within months, however, with the completion of a government report in early 1978 entitled The Computerization of Society. Released to bookstores in a paperback edition in May, it sold 13,500 copies in its first month, and a total of 125,000 copies over the following decade, quite a blockbuster for a government report.6 How did such a seemingly recondite topic engender such excitement?
The authors, Simon Nora and Alain Minc, officers in the General Inspectorate of Finance, had been asked to write the report by the Giscard government in order to consider the threat and the opportunity presented by the growing economic and cultural significance of the computer. By the mid-1970s, it was becoming clear to most technically-minded intellectuals that computing power could, and likely would, be democratized, brought to the masses in the form of new computer-mediated services. Yet for decades the United States had led the way in all forms of digital technology, and American firms held a seemingly unassailable grip on the market for computer hardware. The leaders of France considered the democratization of computers a huge opportunity for French society, yet they did not want to see France become a dependent satellite of a dominating foreign power.

Nora and Minc’s report presented a synthesis that resolved this tension, proposing a project that would catapult France into the post-modern age of information. The nation would go directly from trailing the pack in computing to leading it, by building the first national infrastructure for digital services – computing centers, databases, standardized networks – all of which would serve as the substrate for an open, democratic marketplace in digital services. This would, in turn, stimulate native French expertise and industrial capacity in computer hardware, software, and networking. Nora and Minc called this confluence of computers and communications télématique, a fusion of telecommunications and informatique (the French word for computing or computer science). “Until recently,” they wrote,

computing… remained the privilege of the large and the powerful. It is mass computing that will come to the fore from now on, irrigating society, as electricity did. La télématique, however, in contrast to electricity, will not transmit an inert current, but information, that is to say, power.
The Nora-Minc report, and the resonance it had within the Giscard government, put the effort to commercialize TITAN in a whole new light. Before the report, the DGT’s videotex strategy had been a response to their British rivals, intended to avoid being caught unprepared and forced to operate under a British technical standard for videotex. Had it remained only that, France’s videotex efforts might well have languished, ending up much like Prestel, a niche service for a few curious early adopters and a handful of business sectors that found it useful. After Nora-Minc, however, videotex could only be construed as a central component of télématique, the basis for building a new future for the whole French nation, and it would receive more attention and investment than it might otherwise ever have hoped for. The effort to launch Minitel on a grand scale gained backing from the French state that might otherwise have failed to materialize, as it in fact did for Théry’s plans for a national fax service, which dwindled to a mere Minitel printer accessory. This support included the funding to provide millions of terminals to the populace, free of charge. The DGT argued that the cost of the terminals would be offset by the savings from no longer printing and distributing the phone book, and by the new network traffic stimulated by the Minitel service. Whether they sincerely believed this or not, it provided at least a fig leaf of commercial rationale for a massive industrial stimulus program, starting with Alcatel (paid billions of francs to manufacture terminals) and running downstream to the Transpac network, Minitel service providers, the computers purchased by those providers, and the software services required to run an on-line business.

Man in the Middle

In purely commercial terms, Minitel did not in fact contribute much to the DGT’s bottom line.
It first achieved profitability on an annual basis in 1989, and if it ever achieved overall net profitability, it was not until well into its slow but terminal decline in the later 1990s. Nor did it achieve Nora and Minc’s aspiration to create an information-driven renaissance of French industry and society. Alcatel and other makers of telecom equipment did benefit from the contracts to build terminals, and the French Transpac network benefited from a large increase in traffic – though, unfortunately, with the X.25 protocol they turned out to have bet on the wrong packet-switching technology in the long term. The thousands of Minitel service providers, however, mostly got their hardware and systems software from American suppliers. The techies who set up their own online services eschewed both the French national champion, Bull, and the dreaded giant of enterprise sales, IBM, in favor of scrappy Unix boxes from the likes of Texas Instruments and Hewlett-Packard.

So much for Minitel as industrial policy; what about its role in invigorating French society with new information services, which would reach democratically into both the most elite arrondissements of Paris and the plus petit village of Picardy? Here it achieved rather more success, though it was still mixed. The Minitel system grew rapidly, from about 120,000 terminals at its initial large-scale deployment in 1983, to over 3 million in 1987 and 5.6 million in 1990.7 However, with the exception of the first few minutes of the electronic phone book, actually using those terminals cost money on a minute-by-minute basis, and there’s no doubt that usage was distributed much more unequally than the equipment. The most heavily used services, the online chat rooms, could easily burn through hours of call time in an evening, at a base rate of 60 francs per hour (equivalent to about $8, more than double the U.S. minimum wage at the time).
Nonetheless, nearly 30 percent of French citizens had access to a Minitel terminal at home or work in 1990. France was undoubtedly the most online country (if I may use that awkward adjective) in the world at that time. In that same year, the two largest online services in the United States, that colossus of computer technology, totaled just over a million subscribers, in a population of 250 million.8 And the catalog of services that one could dial into grew as rapidly as the number of terminals – from 142 in 1983 to 7,000 in 1987 and nearly 15,000 in 1990. Ironically, a paper directory was needed to index all of the services available on this terminal that was intended to supplant the phone book. By the late 1980s that directory, Listel, ran to 650 pages.9

A man using a Minitel terminal

Beyond the DGT-provided phone directory, services ran the gamut from commercial to social, and covered many of the major categories we still associate today with being online – shopping and banking, travel booking, chat rooms, message boards, games. To connect to a service, a Minitel user would dial an access number, most often 3615, which connected their phone line to a special computer in their local telephone switching office called a point d’accès vidéotexte, or PAVI. Once connected to the PAVI, the user could then enter a further code to indicate which Minitel service they wished to reach. Companies plastered their access codes in mnemonic alphabetic form onto posters and billboards, much as they would do with website URLs in later decades: 3615 TMK, 3615 SM, 3615 ULLA. The 3615 code connected users into the PAVI’s “kiosk” billing system, introduced in 1984, which allowed Minitel to operate much like a news kiosk, offering a variety of wares for sale from different vendors, all from a single convenient location.
Of the 60 francs charged per hour for basic kiosk services, 40 went to the service itself and 20 to the DGT, to pay for the use of the PAVI and the Transpac network. All of this was entirely transparent to the user; the charges appeared automatically on their next telephone bill, and they never needed to provide payment information or otherwise establish a financial relationship with the service provider.

As access to the open internet began to spread in the 1990s, it became popular for the cognoscenti to retrospectively deprecate the online services of the era of fragmentation – the CompuServes, the AOLs – as “walled gardens”10. The implied contrast in the metaphor is to the freedom of the open wilderness: if CompuServe is a carefully cultivated plot of land, the internet, from this point of view, is Nature itself. Of course, the internet is no more natural than CompuServe, or Minitel. There is more than one way to architect an online service, and all of them are based on human choices. But if we stick to this metaphor of the natural versus the cultivated, Minitel sits somewhere in between. We might compare it to a national park: its boundaries are controlled, regulated, and tolled, but within them one can wander freely and visit whichever wonders strike one’s interest.

The DGT’s position in the middle of the market between user and service, with a monopoly on the user’s entry point and the entire communications pathway between the two parties, offered advantages over both the monolithic, all-inclusive service providers like CompuServe and the more open architecture of the later Internet. Unlike the former, once past the initial choke point, the system opened out into a free market of services unlike anything else available at the time. Unlike the latter, there was no monetization problem: the user paid automatically for computer time used, avoiding the need for the bloated and intrusive edifice of ad-tech that supports the bulk of the modern Internet.
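The kiosk billing arithmetic described above is simple enough to sketch in a few lines of Python. The 60-franc hourly rate and the 40/20 split between service provider and DGT come from the text; the function name and return structure are purely illustrative, not part of any real Minitel system.

```python
# Sketch of the 1984 "kiosk" revenue split for basic Minitel services:
# of 60 francs charged per hour, 40 went to the service provider and
# 20 to the DGT for use of the PAVI and the Transpac network.

BASE_RATE_FRANCS_PER_HOUR = 60
PROVIDER_SHARE = 40 / 60   # two-thirds to the service provider
DGT_SHARE = 20 / 60        # one-third to the DGT

def kiosk_split(hours_connected: float) -> dict:
    """Divide a session's total charge between provider and DGT."""
    total = BASE_RATE_FRANCS_PER_HOUR * hours_connected
    return {
        "total": total,
        "provider": total * PROVIDER_SHARE,
        "dgt": total * DGT_SHARE,
    }

# An evening of two hours in a chat room cost the user 120 francs,
# of which 80 went to the service and 40 to the DGT.
charges = kiosk_split(2.0)
```

The user saw none of this accounting, of course; the whole amount simply appeared on the next phone bill.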
Minitel also offered a secure end-to-end connection: every bit traveled only over DGT hardware, so as long as you trusted both the DGT and the service to which you were connected, your communications were safe from attackers.

This system also had some obvious disadvantages compared to the Internet that succeeded it, however. For all its relative openness, one could not just turn on a server, connect it to the net, and be open for business; it required government pre-approval to make your server accessible via a PAVI. More fatally, Minitel’s technical structure was terribly rigid, tied to a videotex protocol that, while advanced for the mid-1980s, appeared dated and extremely restrictive within a decade.11 It supported pages of text, in twenty-four rows of forty characters each (with primitive character-based graphics), and nothing more. None of the characteristic features of the mid-1990s World Wide Web – free-scrolling text, GIFs and JPEGs, streaming audio, etc. – were possible on Minitel.

Minitel offered a potential road out of the era of fragmentation, but, outside of France, it was a road not taken. The DGT, reorganized as France Télécom in 1988, made a number of efforts to export the Minitel technology, to Belgium, Ireland, and even the U.S. (via a system in San Francisco called 101 Online). But without the state-funded stimulus of free terminals, none of them had anything like the success of the original. And with France Télécom, and most other PTTs around the world, now expected to fend for themselves as lean businesses in a competitive international market, the era when such a stimulus was politically viable had passed. Though the Minitel system did not finally cease operation until 2012, usage went into decline from the mid-1990s onward.
In its twilight years it remained relatively popular for banking and financial services, thanks to the security of the network and the availability of terminals with an accessory that could securely read and transmit data from banking and credit cards. Otherwise, French online enthusiasts increasingly turned to the Internet. But before we return to that system’s story, we have one last stop to visit on our tour of the era of fragmentation.

Further Reading

Julien Mailland and Kevin Driscoll, Minitel: Welcome to the Internet (2017)

Marie Marchand, The Minitel Saga (1988)
