The Era of Fragmentation, Part 1: Load Factor

By the early 1980s, the roots of what we know now as the Internet had been established – its basic protocols designed and battle-tested in real use – but it remained a closed system almost entirely under the control of a single entity, the U.S. Department of Defense. Soon that would change, as it expanded to academic computer science departments across the U.S. with CSNET. It would continue to grow from there within academia, before finally opening to general commercial use in the 1990s.

But that the Internet would become central to the coming digital world, the much touted “information society,” was by no means obvious circa 1980. Even for those who had heard of it, it remained little more than a very promising academic experiment. The rest of the world did not stand still, waiting with bated breath for its arrival. Instead, many different visions for bringing online services to the masses competed for money and attention.

Personal Computing

By about 1975, advances in semiconductor manufacturing had made possible a new kind of computer. A few years prior, engineers had figured out how to pack the core processing logic of a computer onto a single microchip – a microprocessor. Companies such as Intel began to offer high-speed short-term memory on chips as well, to replace the magnetic core memory of previous generations of computers. This brought the most central and expensive parts of the computer under the sway of Moore’s Law, which, in turn, drove the unit price of chip-based computing and memory relentlessly downward for decades to come. By the middle of the decade, this process had already brought the price of these components low enough that a reasonably comfortable middle-class American might consider buying and building a computer of his or her own. Such machines were called microcomputers (or, sometimes, personal computers).

The claim to the title of the first personal computer has been fiercely contested, with some looking back as far as Wes Clark’s LINC or the Lincoln Labs TX-0, which, after all, were wielded interactively by a single user at a time. Putting aside strict questions of precedence, any claimant to significance based on historical causality must concede to one obvious champion. No other machine had the catalytic effect that the MITS Altair 8800 had in bringing about the explosion of microcomputing in the late 1970s.

The Altair 8800, atop optional 8-inch floppy disk unit

The Altair fell into the electronic hobbyist community like a seed crystal. It convinced hobbyists that it was possible for a person to build and own their own computer at a reasonable price, and they coalesced into communities to discuss their new machines, like the Homebrew Computer Club in Menlo Park. Those hobbyist cells then launched the much wider wave of commercial microcomputing based on mass-produced machines that required no hardware skills to bring to life, such as the Apple II and Radio Shack TRS-80.

By 1984, 8% of U.S. households had their own computer, a total of some seven million machines[^1]. Meanwhile, businesses were acquiring their own fleets of personal computers at the rate of hundreds of thousands per year, mostly the IBM 5150 and its clones[^2]. At the higher end of the price range for single-user computers, a growing market had also appeared for workstations from the likes of Silicon Graphics and Sun Microsystems – beefier computers equipped standard with high-end graphical displays and networking hardware, intended for use by scientists, engineers and other technical specialists.

None of these machines would be invited to play in the rarefied world of ARPANET. Yet many of their users wanted access to the promised fusion of computers and communications that academic theorists had been talking up in the popular press since Licklider and Taylor’s 1968 “The Computer as a Communication Device,” and even before. As far back as 1966, computer scientist John McCarthy had promised in Scientific American that “[n]o stretching of the demonstrated technology is required to envision computer consoles installed in every home and connected to public-utility computers through the telephone system.”  The range of services such a system could offer, he averred, would be impossible to enumerate, but he put forth a few examples: “Everyone will have better access to the Library of Congress than the librarian himself now has. …Full reports on current events, whether baseball scores, the smog index in Los Angeles or the minutes of the 178th meeting of the Korean Truce Commission, will be available for the asking. Income tax returns will be automatically prepared on the basis of continuous, cumulative annual records of income, deductions, contributions and expenses.”

Articles in the popular press described the possibilities for electronic mail, digital games, services of all kinds from legal and medical advice to online shopping. But how, practically, would all these imaginings take shape? Many answers were in the offing. In hindsight, this era bears the aspect of a broken mirror. All of the services and concepts that would characterize the commercial internet of the 1990s – and then some – were manifest in the 1980s, but in fragments, scattered piecemeal across dozens of different systems. With a few exceptions[^3], these systems did not interconnect; each stood isolated from the others, a “walled garden,” in later terminology. Users on one system had no way to communicate or interact with those on another, and the quest to attract more users was thus for the most part a zero-sum game.

In this installment, we’ll consider one set of participants in this new digital land grab: time-sharing companies looking to diversify into a new market with attractive characteristics.

Load Factor

In 1892, Samuel Insull, a protégé of Thomas Edison, headed west to lead a new branch of Edison’s electrical empire, the Chicago Edison Company. There he consolidated many of the core principles of modern utility management, among them the concept of the load factor – the average load on the electrical system divided by its highest load. The higher the load factor the better, because any ratio below 1 represents waste – expensive capital capacity that’s needed to handle the peak of demand, but left idle in the troughs. Insull therefore set out to fill in the troughs in the demand curve by developing new classes of customers that would use electricity at different times of day (or even in different seasons), even if it meant offering them discounted rates. In the early years of electrical power, the primary demand came from domestic lighting, with most demand in the evening. So Insull promoted its use for industrial machinery to increase daytime consumption. This still left dips in the morning and evening rush, so he convinced the Chicago streetcar systems to convert to electrical traction. And so Insull maximized the value of his capital investments, even though it often meant offering lower prices[^hughes].
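The arithmetic is simple but worth making concrete. Here is a minimal sketch in Python, using invented hourly demand figures (not historical data), showing how filling in the daytime troughs raises the load factor:

```python
# Load factor = average load / peak load over some period.
# The hourly demand figures below are illustrative, not historical.

def load_factor(hourly_load):
    """Return average load divided by peak load for a series of readings."""
    return sum(hourly_load) / len(hourly_load) / max(hourly_load)

# A lighting-only utility: demand spikes in the evening, sits idle otherwise.
lighting_only = [2, 2, 2, 2, 3, 5, 8, 10, 10, 9, 5, 3]

# The same utility after adding daytime industrial and streetcar customers.
balanced = [6, 7, 8, 9, 9, 9, 8, 10, 10, 9, 8, 7]

print(f"lighting only: {load_factor(lighting_only):.2f}")  # ~0.51
print(f"balanced:      {load_factor(balanced):.2f}")       # ~0.83
```

With the same peak capacity, the balanced utility sells far more power per dollar of installed plant – which is exactly why off-peak customers were worth courting with discounted rates.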

Insull in 1926, when he was pictured on the cover of Time magazine.

[^hughes]: Thomas P. Hughes, Networks of Power (1983), 216-225.

The same principles still applied to capital investments in computers nearly a century later, and it was exactly the desirability of a balanced load factor and the incentive for offering lower off-peak prices that made possible two new online services for microcomputers that launched nearly simultaneously in the summer of 1979: CompuServe and The Source.

CompuServe

In 1969, the newly-formed Golden United Life Insurance company of Columbus, Ohio created a subsidiary called the Compu-Serv Network. The founder of Golden United wanted to be a cutting-edge, high-tech company with computerized records, and so he had hired a young computer science grad named John Goltz to lead the effort. Goltz, however, was gulled by a DEC salesman into buying a PDP-10, an expensive machine with far more computer power than Golden United currently needed. The idea behind Compu-Serv was to turn that error into an opportunity, by selling the excess computer power to paying customers who would dial into the Compu-Serv PDP-10 via a remote terminal. In the late 1960s this time-sharing model for selling computer service was spreading rapidly, and Golden United wanted to get its own cut of the action. In the 1970s the time-sharing subsidiary spun off to operate independently, re-branded itself as CompuServe, and built its own packet-switching network in order to be able to offer affordable, nationwide access to its computer centers in Columbus.

A national market not only gave the company access to more potential customers, it also extended the demand curve for computer time, by spreading it across four time zones. Nonetheless, there was still a large gulf of time between the end of business hours in California and the start of business on the East Coast, not to mention the weekends. CompuServe CEO Jeff Wilkins saw an opportunity in the growing fleet of home computers, many of whose owners whiled away their evening and weekend hours on their electronic hobby. What if they were offered access to email, message boards, and games on CompuServe computers, at discounted rates for evening and weekend access ($5 an hour, versus $12 during the work day[^4])?

So Wilkins launched a trial of a service he called MicroNET (intentionally held at arm’s length from the main CompuServe brand), and after a slow start it proved a resounding success. Because of CompuServe’s national data network, most users only had to dial a local number to reach MicroNET, and thus avoided long-distance telephone charges, despite the fact that the actual computers they were connecting to resided in Ohio. His experiment having proved itself, Wilkins dropped the MicroNET name and folded the service under the CompuServe brand. Soon the company began to offer services tailored to the needs of microcomputer users, such as games and other software available for sale on-line.

But by far the most popular services were the communications platforms. For long-lived public content and discussions there were the forums, ranging across every topic from literature to medicine, from woodworking to pop music. Forums were generally left to their own devices by CompuServe, being administered and moderated by ordinary users who took on the role of “sysops” for each forum. The other main communications platform was the “CB Simulator”, coded up over the weekend by Sandy Trevor, a CompuServe executive. Named after citizens band (CB) radio, a popular hobby at the time, it allowed users to have text-based chats in real-time in dedicated channels, a similar model to the ‘talk’ programs offered on many time-sharing systems. Many dedicated users would hang out for hours on CB Simulator, shooting the breeze, making friends, or even finding lovers.

The Source

Hot on the heels of MicroNET – launching just eight days later in July of 1979 – came another on-line service for microcomputers that arrived at essentially the same place as Jeff Wilkins, despite starting from a very different angle. William (Bill) Von Meister, a son of German immigrants whose father had helped establish zeppelin service between Germany and the U.S., was a serial entrepreneur. He no sooner got some new enterprise off the ground than he lost interest, or was forced out by disgruntled financial backers. He could not have been more different from the steady Wilkins. As of the mid-1970s, his greatest successes to date were in electronic communications – Telepost, a service which sent messages across the country electronically to the switching center nearest its recipient, and then covered the last mile via next-day mail; and TDX, which used computers to optimize the routing of telephone calls, reducing the cost of long-distance telephone service within large businesses.

Having, predictably, lost interest in TDX, Von Meister turned his newest enthusiasm of the late 1970s to Infocast, which he planned to launch in McLean, Virginia. In effect, it was an extension of the Telepost concept, except instead of using mail for the last-mile delivery, he would use the FM radio sideband (basically the same mechanism that’s used to transmit station identification, artist, and song title to the screens of modern radios) to deliver digital data to computer terminals. In particular, he planned to target highly distributed businesses with lots of locations that needed regular information updates from their central office, such as banks, insurance companies, and grocery stores.

Bill Von Meister

But what Von Meister really wanted to build was a national network to deliver data into homes, to terminals by the millions, not thousands. Convincing a business to spend $1000 on a special FM receiver and terminal was one thing; asking the same of consumers was quite another matter. So Von Meister went casting about for another means to deliver news, weather, and other information into homes; and he found it, in the hundreds of thousands of microcomputers that were sprouting like mushrooms in American offices and dens, in homes ready-equipped with telephone connections. He partnered with Jack Taub, a deep-pocketed and well-connected businessman who loved the concept and wanted to invest. Taub and Von Meister initially called the new service CompuCom, a mix of truncation and compounding typical for a computer company of the day, but later settled on a much more abstract and visionary name – The Source.

The main problem they faced was a lack of any technical infrastructure with which to deliver this vision. To get it they partnered with two companies that had, collectively, the same resources as CompuServe – time-shared computers and a national data communications network, both of which sat mostly idle on evenings and weekends. Dialcom, headquartered across the Potomac in Silver Spring, Maryland, provided the computing muscle. Like CompuServe, it had begun in 1970 as a time-sharing service[^5], though by the end of the decade it offered many other digital services. Telenet, the packet-switched network spun off by Bolt, Beranek and Newman earlier in the decade, provided the communications infrastructure. By paying discounted rates to Dialcom and Telenet for off-peak service, Taub and Von Meister were able to offer access to The Source for $2.75 an hour on nights and weekends, after an initial $100 membership fee[^6].

Other than the pricing structure, the biggest difference between The Source and CompuServe was how they expected people to use their systems. The early services that CompuServe offered, such as email, the forums, CB, and the software exchange, generally assumed that users would form their own communities and build their own superstructures atop a basic hardware and software foundation, much like corporate users of time-sharing systems. Taub and Von Meister, however, had no cultural background in time-sharing. Their business plan centered around providing large amounts of information for the upscale, professional consumer: a New York Times database, United Press International news wires, stock information from Dow Jones, airline pricing, local restaurant guides, wine lists. Perhaps the single most telling detail was that Source users were welcomed by a menu of service options on log-in, CompuServe users by a command line.

In keeping with the personality differences between Wilkins and Von Meister, the launch of The Source was as grandiose as MicroNET’s was subtle, including a guest appearance by Isaac Asimov to announce the arrival of science fiction become science fact. Likewise in keeping with Von Meister’s personality and his past, his tenure at The Source would not be lengthy. The company immediately ran into financial difficulties due to his massive overspending. Taub and his brother had a large enough ownership share to oust Von Meister, and they did just that in October of 1979, just a few months after the launch party.

The Decline of Time-Sharing

The last company to enter the microcomputing market due to the logic of load factor was General Electric Information Services (GEIS), a division of the electrical engineering giant. Founded in the mid-1960s, when GE was still trying to compete in the computer manufacturing business, GEIS was conceived as a way to outflank IBM’s dominant position in computer sales. Why buy from them, GE pitched, when you can rent from us? The effort made little dent in IBM’s market share, but it made enough money to receive continued investment into the 1980s, by which point GEIS owned a worldwide data network and two major computing centers, one in Cleveland, Ohio, and the other in Europe.

In 1984, someone at GEIS noticed the growth of The Source and CompuServe (the latter had, by that time, over 100,000 users), and saw a way to put their computing centers to work in off-peak hours. To build their own consumer offering they recruited a CompuServe veteran, Bill Louden. Louden, disgruntled with managers from the corporate sales side who began muscling in on the increasingly lucrative consumer business, had jumped ship with a group of fellow defectors to try to build their own online service in Atlanta, called Georgia OnLine. They tried to turn the lack of access to a national data network into a virtue, by offering services tailored for the local market, such as an events guide and classified ads, but the company went bust, so Louden was very receptive to the offer from GEIS.

Louden called the new service GEnie, a backronym for General Electric Network for Information Exchange. It offered all of the services that The Source and CompuServe had by now made table stakes in the market – a chat application (CB simulator), bulletin boards, news, weather, and sports information.

GEnie was the last personal computing service born out of the time-sharing industry and the logic of the load factor. By the mid-1980s, the entire economic balance of power had begun to shift. As small computers proliferated in the millions, offering digital services to the mass market became a more and more enticing business in its own right, rather than simply a way to leverage existing capital. In the early days, The Source and CompuServe were tiny, with only a few thousand subscribers each in 1980. A decade later, millions of subscribers paid monthly for on-line services in the U.S. – with CompuServe at the forefront of the market, having absorbed its erstwhile rival, The Source. The same process also made time-sharing less attractive to businesses – why pay all the telecommunications costs and overhead of accessing a remote computer owned by someone else, when it was becoming so easy to equip your own office with powerful machines? Not until fiber optics drove the unit cost of communications into the ground would this logic reverse direction again.

Time-sharing companies were not the only route to the consumer market, however. Rather than starting with mainframe computers and looking for places to put them to work, others started from the appliance that millions already had in their homes, and looked for ways to connect it to a computer.



Steam Revolution: The Turbine

Incandescent electric light did not immediately snuff out all of its rivals: the gas industry fought back with its own incandescent mantle (which used the heat of the gas to induce a glow in another material) and the arc lighting manufacturers with a glass-enclosed arc bulb.[1] Nonetheless, incandescent lighting grew at an astonishing pace: the U.S. alone had an estimated 250,000 such lights in use by 1885, three million by 1890 and 18 million by the turn of the century.[2]

Edison’s electric light company expanded rapidly across the U.S. and into Europe, and its success encouraged the creation of many competitors. An organizational division gradually emerged between manufacturing companies that built equipment and supply companies that used it to generate and deliver power to customers. A few large competitors came to dominate the former industry: Westinghouse Electric and General Electric (formed from the merger of Edison’s company with Thomson-Houston) in the U.S., and the Allgemeine Elektricitäts-Gesellschaft (AEG) and Siemens in Germany. In a sign of its gradual relative decline, Britain produced only a few smaller firms, such as Charles Parsons’ C. A. Parsons and Company—of whom more later.

In accordance with Edison’s early imaginings, manufacturers and suppliers expanded beyond lighting to general-purpose electrical power, especially electric motors and electric traction (trains, subways, and street cars). These new fields opened up new markets for users: electric motors, for example, enabled small-scale manufacturers who lacked the capital for a steam engine or water wheel to consider mechanization, while releasing large-scale factories from the design constraints of mechanical power transmission. They also provided electrical supply companies with a daytime user base to balance the nighttime lighting load.

The demands of this growing electric power industry pushed steam engine design to its limits.
Dynamos typically rotated hundreds of times a minute, several times the speed of a typical steam engine drive shaft. Engineers overcame this with belt systems, but these gave up energy to friction. Faster engines that could drive a dynamo directly required new high-speed valve control machinery, new cooling and lubrication systems to withstand the additional friction, and higher steam pressures more typical of marine engines than factories. That, in turn, required new boiler designs like the Babcock and Wilcox, which could operate safely at pressures well over 100 psi.[3]

A high-speed steam engine (made by the British firm Willans) directly driving a dynamo (the silver cylinder at left). From W. Norris and Ben. H. Morgan, High Speed Steam Engines, 2nd edition (London: P.S. King & Son, 1902), 13.

But the requirement that ultimately did in the steam engine was not for speed, but for size. As the electric supply companies evolved into large-scale utilities, providing power and light to whole urban centers and then beyond, they demanded more and more output from their power houses. Even Edison’s Pearl Street station, a tiny installation when looking back from the perspective of the turn of the century, required multiple engines to supply it. By 1903, the Westminster Electric Supply Corporation, which supplied only a part of London’s power, required forty-nine Willans engines in three stations to provide about 9 megawatts of power (an average of about 250 horsepower an engine). But demand continued to grow, and engines grew in response. Perhaps the largest steam engines ever built were the 12,000 horsepower giants designed by Edwin Reynolds and installed in 1901 for the Manhattan Elevated Railway Company and in 1904 for the Interborough Rapid Transit (IRT) subway company.
Each of these engines actually consisted of two compound engines grafted together, each with its own high- and low-pressure cylinder, set at right angles to give eight separate impulses per rotation to the spinning alternator (an alternating current dynamo). The combined unit, engine and alternator, weighed 720 tons. But the elevated railway required eight of these monsters, and the IRT expected to need eleven to meet its power needs. The IRT’s power house, with a Renaissance Revival façade designed by famed architect Stanford White, filled a city block near the Hudson River (where it still stands today).[4]

The inside of the IRT power house, with five engines installed. Each engine consists of two towers, with a disc-shaped dynamo between them. From Scientific American, October 29th, 1904.

How much farther the reciprocating steam engine might have been coaxed to grow is hard to say with certainty, because even as the IRT powerhouse was going up in Manhattan, it was being overtaken by a new power technology based on whirling rotors instead of cycling pistons, the steam turbine. This great advancement in steam power borrowed from developments that had been brewing for decades in its most long-standing rival, water power.

Niagara

The signature electrical project of the turn of the twentieth century was the Niagara Falls Power Company. The immense scale of its works, its ambitions to distribute power over dozens of miles, its variety of prospective customers, and its adoption of alternating current: all signaled that the era of local, Pearl Street-style direct-current electric light plants was drawing to a close. The tremendous power latent in Niagara’s roaring cataract as it dropped from the level of Lake Erie to that of Lake Ontario was obvious to any observer—engineers estimated its potential horsepower in the millions—the problem was how to capture it, and where to direct it.
By the late nineteenth century, several mills had moved to draw off some of its power locally. But Niagara could power thousands of factories, and it was impractical for each to dig its own canals, tunnels and wheel pits to draw off the small fraction of the waterfall that it required. New York State law, moreover, forbade development in the immediate vicinity of the falls to protect its scenic beauty. The solution ultimately decided on was to supply power to users from a small number of large-scale power plants, and the largest nearby pool of potential users lay in Buffalo, about twenty miles away.[5]

The Niagara project originated in the 1886 designs of New York State engineer Thomas Evershed for a canal and tunnel lined with hundreds of wheel pits to supply power to an equal number of local factories. But the plan took a different direction in 1889 after securing the backing of a group of New York financiers, headed once again by J.P. Morgan. The Morgan group consulted a wide variety of experts in North America and Europe before settling on an electric power system as the best alternative, despite the unproven nature of long-distance electric power transmission. This proved a good bet: by 1893, Westinghouse had proved in California that it could deliver high-voltage alternating current over dozens of miles, convincing the Niagara company to adopt the same model.[6]

Cover of the July 22, 1899 issue of Scientific American with multiple views of the first Niagara Falls Power Company power house and its five-thousand-horsepower turbine-driven generators.

By 1904, the company had completed canals, vertical shafts for the fall of water, two powerhouses with a total capacity of 110,000 horsepower, and a mile-long discharge tunnel.
They supplied power to local industrial plants, the city of Buffalo, and a wide swath of New York State and Ontario.[7] The most important feature of the power plant for our story, however, was the set of Westinghouse generators driven by water turbines, each with a capacity of 5,000 horsepower. As Terry Reynolds, a historian of the waterwheel, put it, this was “more than ten times [the capacity] of the most powerful vertical wheel ever built.”[8] Water turbines had made possible the exploitation of water power on a previously inconceivable scale; appropriately so, for they originated from a hunger on the European continent for a power that could match British steam.

Water Turbines

The exact point at which a water wheel becomes a turbine is somewhat arbitrary; a turbine is simply a kind of water wheel that has reached a degree of efficiency and power that earlier designs could not approach. But the distinction most often drawn is in terms of relative motion: the water in a traditional wheel pushes the vane along with the same speed and direction as its own flow (like a person pushing a box along the floor). A turbine, on the other hand, creates “motion of the water relative to the buckets or floats of the wheel” in order to extract additional energy: that is to say, it uses the kinetic energy of the water as well as its weight or pressure. That can occur through either impulse (pressing water against the turning vanes), or reaction (shooting water out from them to cause them to turn), but very often includes a combination of both.[9]

The exact origins of the horizontal water wheel are unknown, but they had been used in Europe since at least the late Middle Ages. They offered by far the simplest way to drive a millstone, since it could be attached directly to the wheel without any gearing, and remained in wide use in poorer regions of the continent well into the modern period.
For centuries, the manufacturers and engineers of Western Europe focused their attention on the more powerful and efficient vertical water wheel, and this type constitutes most of our written record of water technology. Going back to the Renaissance, however, descriptions and drawings can be found of horizontal wheels with curved vanes intended to capture more of the flow of water, and it was the application of rigorous engineering to this general idea that led to the modern turbine. The turbine was in this sense the revenge of the horizontal water wheel, transforming the most low-tech type of water wheel into the most sophisticated.

All of the early development of the water turbine occurred in France, which could draw on a deep well of hydraulic theory but could not so easily access coal and iron to make steam as could its British neighbor. Bernard Forest de Belidor, an eighteenth-century French engineer, recorded in his 1737 treatise on hydraulic engineering the existence of some especially ingenious horizontal wheels, used to grind flour at Bascale on the Garonne. They had curved blades fitted inside a surrounding barrel and angled like the blades of a windmill, such that “the water that pushes it works it with the force of its weight composed with the circular motion given to it by the barrel…”[10] Nothing much came of this observation for another century, but Belidor had identified what we could call a proto-turbine, where water not only pushed on the vanes but also glided down through them like the breeze on the arms of a windmill, capturing more of its energy.

The horizontal mill wheels observed on the Garonne by Belidor. From Belidor, Architecture hydraulique vol. 1, part 2, Plan 5.

In the meantime, theorists came to an important insight. Jean-Charles de Borda, another French engineer (there will be a lot of them in this part of the story), was only a small child in a spa town just north of the Pyrenees when Belidor was writing about water wheels.
He studied mathematics and wrote mathematical treatises, became an engineer for the Army and then the Navy, undertook several scientific voyages, fought in the American Revolutionary War, and headed the commission that established the standard length of the meter. In the midst of all this he found some time in 1767 to write up a study on hydraulics for the French Academy of Sciences, in which he articulated the principle that, to extract the most power from a water wheel, the water should enter the machine without shock and leave it without velocity. Lazare Carnot, father of Sadi, restated this principle some fifteen years later, in a treatise that reached a wider audience than de Borda’s paper.[11] Though it is obviously impossible for the water to literally leave the wheel without velocity (for after all without velocity it would never leave), it was through striving for this imaginary ideal that engineers developed the modern, highly efficient water turbine.

First came Jean-Victor Poncelet (from now on, if I mention someone, just assume they are French), another military engineer who had accompanied Napoleon’s Grande Armée into Russia in 1812, where he ended up a prisoner of war for two years. After returning home to Metz he became the professor of mechanics at the local military engineering academy. While there he turned his mind to vertical water wheels, and a long-standing tradeoff in their design: undershot wheels, in which the water passed under the wheel, were cheaper to construct but not very efficient, while overshot wheels, where the water came to the top of the wheel and fell on its vanes or buckets, had the opposite attributes. Poncelet combined the virtues of both by applying the principle of de Borda and Carnot.
The traditional undershot waterwheel had a maximum theoretical efficiency of 50%, because the ideal wheel turned at half the speed of the water current, allowing the water to leave the vanes of the wheel behind with half of its initial velocity. The appearance of cheap sheet iron had made it possible to substitute metal vanes for wooden, and iron vanes could easily be bent in a curve. By curving the vanes of the wheel just so towards the incoming water, Poncelet found that it would run up the cupped vane, expending all of its velocity, and then fall out of the bottom of the wheel.[12] He published his idea in 1825 to immediate acclaim: “no other paper on water-wheels… had proved so interesting and commanded such attention.”[13]

The Poncelet water wheel.

Poncelet’s advance hinted at the possibility of a new water-powered industrial future for France. His wheel design soon became a common sight in a France eager to develop its industrial might, and richer in falling water than in reserves of coal. It inspired the Société d’Encouragement pour l’Industrie Nationale, an organization founded in 1801 to push France to be more industrially competitive with Britain, to offer a prize of 6,000 francs to anyone who “would apply on a large scale, in a satisfactory manner, in factories and manufacturing works, the water turbines or wheels with curved blades of Belidor.” The revenge of the horizontal wheel was at hand.[14]

Benoît Fourneyron, an engineer at a water-powered ironworks in the hilly country near the Swiss border, claimed the prize in 1833. Even before the announcement of the prize, he had, in fact, already undertaken a deep study of hydraulic theory, reading up on Borda and his successors. He had devised and tested an improved “Belidor-style” wheel, applying the curved metal vanes of Poncelet to a horizontal wheel situated in a barrel-shaped pit, which we can fairly call the first modern water turbine.
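The 50% ceiling for the flat-vaned undershot wheel follows from a simple momentum argument, which can be checked in a few lines. The sketch below is my own illustration with arbitrary numbers, not a calculation from the period: it models the water as a jet striking vanes that move with the wheel, so the power delivered is the momentum transferred per second times the vane speed.

```python
# Illustrative sketch (my own numbers): why a flat-vaned undershot wheel
# tops out at 50% efficiency. A jet of speed v strikes vanes moving at
# speed u; momentum transfer gives power P = m_dot * (v - u) * u, while
# the jet carries kinetic energy flux E = 0.5 * m_dot * v**2.
# Efficiency = P / E = 2 * u * (v - u) / v**2.

def efficiency(u, v):
    """Fraction of the jet's kinetic energy delivered to the vanes."""
    return 2 * u * (v - u) / v**2

v = 3.0  # water speed in m/s (arbitrary)
candidates = [i / 100 * v for i in range(101)]           # vane speeds 0..v
best_u = max(candidates, key=lambda u: efficiency(u, v))  # the optimum

print(best_u / v)             # 0.5: wheel runs at half the water's speed
print(efficiency(best_u, v))  # 0.5: at best, half the energy is captured
```

Poncelet's curved vanes escaped this ceiling precisely by letting the water ride up the vane and leave with (ideally) no velocity at all, rather than departing with half its speed intact.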
He went on to install over a hundred of these turbines around Europe, but his signal achievement was the 1837 spinning mill amid the hills of the Black Forest in Baden, which took in a head of water falling over 350 feet and generated sixty horsepower at 80% efficiency. The spinning rotor of the turbine responsible for this power was a mere foot across and weighed only forty pounds. A traditional wheel could neither take on such a head of water nor derive so much power, so efficiently, from such a compact machine.[15]

The Fourneyron turbine. The inflowing water from the reservoir A drives the rotor before emptying from its radial exterior into the basin D. From Eugène Armengaud, Traité théorique et pratique des moteurs hydrauliques et a vapeur, nouvelle edition (Paris: Armengaud, 1858), 279.

Steam Turbines

The water turbine was thus a far smaller and more efficient machine than its ancestor, the traditional water wheel. Its basic form had existed since at least the time of Belidor, but to achieve an efficient, high-speed design like Fourneyron’s required a body of engineers deeply educated in mathematical physics and a surrounding material culture capable of realizing those mathematical ideas in precisely machined metal. It also required a social context in which there existed demand for more power than traditional sources could ever provide: in this case, a France racing to catch up with rapidly industrializing Britain. The same relation held between the steam turbine and the reciprocating steam engine: the former could be much more compact and efficient, but put much higher demands on the precision of its design and construction. It was no great leap to imagine that steam could drive a turbine in the same way that water did: through the reaction against or impulse from moving steam.
One could even look to some centuries-old antecedents for inspiration: the steam-jet reaction propulsion of Heron of Alexandria’s whirling “engine” (mentioned much earlier in this history), or a woodcut in Giovanni Branca’s seventeenth-century Le Machine, which showed the impulse of a steam jet driving a horizontal paddlewheel. But it is one thing to make a demonstration or draw a picture, and another to make a useful power source. A steam turbine presented a far harder problem than a water turbine, because steam was so much less dense than liquid water. Simply transplanting steam into a water turbine design would be like blowing on a pinwheel: it would spin, but generate little power.[16] The difficulty was clear even in the eighteenth century: when confronted in 1784 with reports of a potential rival steam engine driven by the reaction created by a jet of steam, James Watt calculated that, given the low relative density of steam, the jet would have to shoot from the ends of the rotor at 1,300 feet per second, and thus “without god makes it possible for things to move 1000 feet [per second] it can not do much harm.” As historian of steam Henry Dickinson epitomized Watt’s argument, “[t]he analysis of the problem is masterly and the conclusion irrefutable.”[17] Even when later advances in metalworking made the speeds required appear more feasible, one could get nowhere with traditional “cut and try” techniques and ordinary physical tools; the problem demanded careful analysis with the precision tools offered by mathematics and physics.[18] Dozens of inventors nonetheless took a crack at the problem, including another famed steam engine designer, Richard Trevithick. None found success.
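The scale of Watt's objection can be made vivid with a back-of-the-envelope scaling argument. The sketch below uses my own assumed round numbers, not Watt's actual figures: the power carried by a jet through a fixed nozzle scales as density times velocity cubed, so a fluid three orders of magnitude less dense must flow roughly an order of magnitude faster to deliver the same power.

```python
# Back-of-the-envelope sketch (assumed round numbers, not Watt's actual
# calculation): a jet through a nozzle of area A carries power
# P = 0.5 * rho * A * v**3, so for equal power through the same nozzle,
# jet velocity scales as the inverse cube root of fluid density.

rho_water = 1000.0  # kg/m^3
rho_steam = 0.6     # kg/m^3, roughly saturated steam at atmospheric pressure

velocity_ratio = (rho_water / rho_steam) ** (1 / 3)
print(round(velocity_ratio))  # ~12: a steam jet must run about an order of
                              # magnitude faster than a comparable water jet
```

Water wheels ran at water speeds of a few tens of feet per second at most; multiply that by an order of magnitude and you land in the neighborhood of Watt's thousand-plus feet per second, far beyond what eighteenth-century bearings and metalwork could survive.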
Though Fourneyron had built an effective water turbine in the 1830s, the first practical steam turbines did not appear until the 1880s: a time when metallurgy and machine tools had achieved new heights (with mass-produced steels of various grades and qualities available) and a time when even the steam engine was beginning to struggle to sate modern society’s demand for power. They appeared in two places more or less at once: Sweden and Britain. Gustaf de Laval burst from his middle-class background in the Swedish provinces into the engineering school at Uppsala with few friends but many grandiose dreams: he was the protagonist in his own heroic tale of Swedish national greatness, the engineering genius who would propel Sweden into the first rank of great nations. He lived simultaneously in grand style and constant penury, borrowing from his visions for an ever more prosperous tomorrow to live beyond his means of today. In the 1870s, while working a day job at a glassworks, he developed two inventions based on the centrifugal force generated by a rapidly spinning wheel. The first, a bottle-making machine, flopped, but the second, a cream separator, became the basis for a successful business that let him leave his day job behind.[19] Then, in 1882, he patented a turbine powered by a jet of steam directed at a spinning wheel. De Laval claimed that his inspiration came from seeing a nozzle used for sandblasting at the glassworks come loose and whip around, unleashing its powerful jet into the air; it is also not hard to see some continuity in his interest in high-speed rotation. De Laval used his whirling turbines to power his whirling cream separators, and then acquired an electric light company, giving himself another internal customer for turbine power.[20] Though superficially similar to Branca’s old illustration, de Laval’s machine was far more sophisticated.
As Watt had calculated a century earlier, the low density of steam demanded high rotational speeds (otherwise the steam would escape from the machine having given up very little energy to the wheel) and thus a very high-velocity jet: de Laval’s steel rotor spun at tens of thousands of rotations per minute in an enclosed housing. A few years later he invented an hourglass-shaped nozzle to propel the steam jet to supersonic speeds, a shape that is still used in rocket engines for the same purpose today. Despite the more advanced metallurgy of the late-nineteenth century, however, de Laval still ran up against its limits: he could not run his turbine at the most efficient possible speed without burning out his bearings and reduction gear, and so his turbines never fully captured their potential efficiency advantage over a reciprocating engine.[21]

Cutaway view of a de Laval turbine, from William Ripper, Heat Engines (London: Longmans, Green, 1909), 234.

Meanwhile, the British engineer Charles Parsons came up with a rather different approach to extracting energy from the steam, one that didn’t require such rapid rotation. Whereas de Laval strove up from the middle class, Parsons came from the highest gentry. Son of the third Earl of Rosse, he grew up in a castle in Ireland, with grounds that included a lake and a sixty-foot-long telescope constructed to his father’s specifications. He studied at home under Robert Ball, who later became the Astronomer Royal of Ireland, then went on to graduate from Cambridge University in 1877 as eleventh wrangler—the eleventh best in his class on the mathematics exams.[22] Despite his noble birth, Parsons appeared determined to find his own way in the world. He apprenticed himself at Elswick Works, a manufacturer of heavy construction and mining equipment and military ordnance in Newcastle upon Tyne.
He spent a couple of years with a partner in Leeds trying to develop rocket-powered torpedoes before taking up as a junior partner at another heavy engineering concern, Clarke Chapman in Gateshead (back on the River Tyne).[23] His new bosses directed Parsons away from torpedoes toward the rapidly growing field of electric lighting. He turned to the turbine concept in search of a rotor that could match the high rotational speeds of a dynamo. Parsons came up with a different solution to the density problem than de Laval’s. Rather than try to extract as much power as possible from the steam jet with one extremely fast rotor, he would send the steam through a series of rotors arranged along a single shaft. They would then not have to spin so quickly (though Parsons’ first prototype still ran at 18,000 rotations per minute), and each could extract a bit of energy from the steam as it flowed through the turbine, dropping in pressure. This design extended the two or three stages of pressure reduction in a multi-cylinder steam engine into a continuous flow across a dozen or more rotors. Parsons’ approach created some new challenges (keeping the long, rapidly spinning shaft from bowing too far in one direction or the other, for example), but ultimately most future steam turbines would copy this elongated form.[24]

Parsons’ original prototype turbine and dynamo, with the top removed. Steam entered at the center and exited from both ends, which eliminated the need to deal with “end thrust,” a force pushing on one end of the turbine. From Dickinson, A Short History of the Steam Engine, plate vii.

The Rise of Turbines

Parsons soon founded his own firm to exploit the turbine. Because a turbine had far less inherent friction than the piston of a traditional engine, and because none of its parts had to touch both hot and cold steam, it had the potential to be much more efficient than a reciprocating engine, but the early models did not start out that way.
So his early customers were those who cared mainly about the smaller size of turbines: shipbuilders looking to put in electric lighting without adding too much weight or using too much space in the hull. In other applications reciprocating engines still won out.[25] Further refinements, however, allowed turbines to start to supplant reciprocating engines in electrical systems more generally: more efficient blade designs, the addition of a regulator to ensure that steam entered the turbine only at full pressure, the superheating of steam at one end and the condensing of it at the other to maximize the fall in temperature across the entire engine. Turbo-generators—electrical dynamos driven by turbines—began to find buyers in the 1890s. By 1896, Parsons could boast that a two-hundred-horsepower turbine his firm constructed for a Scottish electric power station ran at 98% of its ideal efficiency, and Westinghouse had begun to develop turbines under license in the United States.[26]

Cutaway view of a fully developed Parsons-style turbine. Steam enters at left (A) and passes through the rotors to the right. From Ripper, Heat Engines, 241.

At the same time, Parsons was pushing for the construction of ships with turbine powerplants, starting with the prototype Turbinia, which drove nine propellers with three turbines and achieved a top speed of nearly forty miles per hour. Suitably impressed, the British Admiralty ordered turbine-powered destroyers (starting with Viper in 1897), but the real turning point came in 1906 with the completion of the first turbine-driven battleship (Dreadnought) and transatlantic steamers (Lusitania and Mauretania), all supplied with Parsons powerplants.[27] HMS Dreadnought was remarkable not only for her armament and armor, but also for her speed of 21 knots (24 miles per hour), made possible by Parsons turbines.
The very first steam turbines had demonstrated their advantage over traditional engines in size; a further decade and a half of development allowed them to realize their potential advantages in efficiency; and now these massive vessels made clear their third advantage: the ability to scale to enormous power outputs. As we saw, the monster steam engines at the subway power house in New York could generate 12,000 horsepower, but the turbines aboard Lusitania churned out half again as much, and that was far from the limit of what was possible. In 1915, the Interborough Rapid Transit Company, facing ever-growing demand for power with the addition of a third (express) track to its elevated lines, installed three 40,000 horsepower turbines for electrical generation, obsoleting Reynolds’ monster engines of a decade earlier. By the 1920s, 40,000 horsepower turbines were being built in the U.S., burning half as much coal per watt of power generated as the most efficient reciprocating engines.[28] Parsons lived to see the triumph of his creation. He spent his last years cruising the world, and preferred to spend the time between stops talking shop with the crew and engineers rather than lounging with other wealthy passengers. He died in 1931, at age 76, in the Caribbean, aboard the (turbine-powered, of course) Duchess of Richmond.[29] Meanwhile, power usage shifted towards electricity, made widely available by the growth of steam and water turbines and the development of long-distance power transmission, not by traditional steam engines. Niagara was just a foretaste of the large-scale water power projects made feasible by the newfound capacity to transmit that power wherever it was needed: the Hoover Dam and Tennessee Valley Authority in the U.S., the Rhine power dams in Europe, and later projects intended to spur the modernization of poorer countries, from the Aswan Dam on the Nile to the Gezhouba Dam on the Yangtze.
In regions with easy access to coal, however, steam turbines provided the majority of all electric power until well into the twentieth century. Cheap electricity transformed industry after industry. By 1920, manufacturing consumed half of the electricity produced in the U.S., mainly through dedicated electric motors at each tool, eliminating the need for the construction and maintenance of a large, heavy steam engine and for bulky and friction-heavy shafts and belts to transmit power through the factory. The capital barriers to starting a new manufacturing plant thus dropped substantially along with the recurring cost of paying for power, and the way was opened to completely rethink how manufacturing plants were built and operated. Factories became cleaner, safer, and more pleasant to work in, and the ability to organize machines according to the most efficient work process rather than the mechanical constraints of power delivery produced huge dividends in productivity.[30]

A typical pre-electricity factory power distribution system, based on line shafts and belts (in this case driving power looms). All the machines in the factory have to be organized around the driveshafts. [Z22, CC BY-SA 3.0]

The 1910 Ford Highland Park plant represents a hybrid stage on the way to full electrification of every machine; the plant still had overhead line shafts (here for milling engine blocks), but each area was driven by a local electric motor, allowing for a much more flexible arrangement of machinery.

By that time, the heyday of the piston-driven steam engine was over. For large-scale installations, it could no longer compete with turbines (whether powered by liquid water or steam). At the same time, feisty new competitors, diesel and gasoline engines, were gnawing away at its share of the lower horsepower market. The warning shot fired by the air engine had finally caught up to steam.
It could not outrun thermodynamics, and the incredibly energy-dense new fuel source that had come bubbling up out of the ground: rock oil, or petroleum.

From ACS to Altair: The Rise of the Hobby Computer

[This post is part of “A Bicycle for the Mind.” The complete series can be found here.]

The Early Electronics Hobby

A certain pattern of technological development recurred many times in the decades around the turn of the twentieth century: a scattered hobby community, tinkering with a new idea, develops it to the point where those hobbyists can sell it as a product. This sets off a frenzy of small entrepreneurial firms, competing to sell to other hobbyists and early adopters. Finally, a handful of firms grow to the point where they can drive down costs through economies of scale and put their smaller competitors out of business. Bicycles, automobiles, airplanes, and radio broadcasting all developed more or less in this way. The personal computer followed this same pattern; indeed, it marks the very last time that a “high-tech” piece of hardware emerged from this kind of hobby-led development. Since that time, new hardware technology has typically depended on new microchips, a capital barrier far too high for hobbyists to surmount; as we have seen, however, the computer hobbyists lucked into ready-made microchips created for other reasons, but already suited to their purposes. The hobby culture that created the personal computer was historically continuous with the American radio hobby culture of the early twentieth century, and, to a surprising degree, the foundations of that culture can be traced back to the efforts of one man: Hugo Gernsback. Gernsback (born Gernsbacher, to well-off German Jewish parents) came to the United States from Luxembourg in 1904 at the age of nineteen, shortly after his father’s death. Already fascinated by electrical equipment, American culture, and the fiction of Jules Verne and H.G. Wells, he started a business in Manhattan, the Electro Importing Company, that offered both retail and mail-order sales of radios and related equipment.
His company catalog evolved into a magazine, Modern Electrics, and Gernsback evolved into a publisher and community builder (he founded the Wireless Association of America in 1909 and the Radio League of America in 1915), a role he relished for the rest of his working life.[1] Gernsback (foreground) giving an over-the-air lecture on the future of radio. From his 1922 book, Radio For All, p. 229. The culture that Gernsback nurtured valued hands-on tinkering and forward-looking futurism, and in fact viewed them as two sides of the same coin. Science fiction (“scientifiction,” as Gernsback called it) writing and practical invention went hand in hand, for both were processes for pulling the future into the present. In a May 1909 article in Modern Electrics, for example, Gernsback opined on the prospects for radio communication with Mars: “If we base transmission between the earth and Mars at the same figure as transmission over the earth, a simple calculation will reveal that we must have the enormous power of 70,000 K. W. to our disposition in order to reach Mars,” and went on to propose a plan for building such a transmitter within the next fifteen or twenty years. As science fiction emerged as its own genre with its own publications in the 1920s (many of them also edited by Gernsback), this kind of speculative article mostly disappeared from the pages of electronic hobby magazines. Gernsback himself occasionally dropped in with an editorial, such as a 1962 piece in Radio-Electronics on computer intelligence, but the median electronic magazine article had a much more practical focus. Readers were typically hobbyists looking for new projects to build or service technicians wanting to keep up with the latest hardware and industry trends.[2] Nonetheless, the electronic hobbyists were always on the lookout for the new, for the expanding edge of the possible: from vacuum tubes, to televisions, to transistors, and beyond. 
It’s no surprise that this same group would develop an early interest in building computers. Nearly everyone who we find building (or trying to build) a personal or home computer prior to 1977 had close ties to the electronic hobby community. The Gernsback story also highlights a common feature of hobby communities of all sorts. A subset of radio enthusiasts, seeing the possibility of making money by fulfilling the needs of their fellow hobbyists, started manufacturing businesses to make new equipment for hobby projects, retail businesses to sell that equipment, or publishing businesses to keep the community informed on new equipment and other hobby news. Many of these enterprises made little or no money (at least at first), and were fueled as much by personal passion as by the profit motive; they were the work of hobby-entrepreneurs. It was this kind of hobby-entrepreneur who would first make personal computers available to the public.

The First Personal Computer Hobbyists

The first electronic hobbyist we know of to take an interest in building computers was Stephen Gray. In 1966, he founded the Amateur Computer Society (ACS), an organization that existed mainly to produce a series of quarterly newsletters typed and mimeographed by Gray himself. Gray has little to say about his own biography in the newsletter or in later reflections on the ACS. He reveals that he worked as an editor of the trade magazine Electronics, that he lived in Manhattan and then Darien, Connecticut, that he had been trying to build a computer of his own for several years, and little else. But he clearly knew the radio hobby world. In the fourth number of his newsletter, from February 1967, he floated the idea of a “Standard Amateur Computer Kit” (SACK) that would provide an economical starting point for new hobbyists, writing that,[3] Amateur computer builders are now much like the early radio amateurs.
There’s a lot of home-brew equipment, much patchwork, and most commercial stuff is just too expensive. The ACS can help advance the state of the amateur computer art by designing a standard amateur computer, or at least setting up the specs for one. Although the mere idea of a standard computer makes the true-blue home-brew types shudder, the fact is that amateur radio would not be where it is today without the kits and the off-the-shelf equipment available.[4] By the spring of 1967, Gray had found seventy like-minded members through advertisements in trade and hobby publications, most of them in the United States, but a handful in Canada, Europe, and Japan. We know little about the backgrounds or motivations of these men (and they were exclusively men), but when their employment is mentioned, they are found at major computer, electronics, or aerospace firms; at national labs; or at large universities. We can surmise that most worked with or on computers as part of their day job. A few letter writers disclose prior involvement in hobby electronics and radio, and from the many references to attempts to imitate the PDP-8 architecture, we can also guess that many members had some association with DEC minicomputer culture. It is speculative but plausible to guess that the 1965 release of the PDP-8 might have instigated Gray’s own home computer project and the later creation of the ACS. Its relatively low price, compact size, and simple design may have catalyzed the notion that home computers lay just out of reach, at least for Gray and his band of like-minded enthusiasts. Whatever their backgrounds and motivations, the efforts of these amateurs to actually build a computer proved mostly fruitless in these early years. The January 1968 newsletter reported a grand total of two survey respondents who possessed an actual working computer, though respondents as a whole had sunk an average of two years and $650 into their projects ($6,000 in 2024 dollars).
The problem of assembling one’s own computer would daunt even the most skilled electronic hobbyist: no microprocessors existed, nor any integrated circuit memory chips, and indeed virtually no chips of any kind, at least at prices a “homebrewer” could afford. Both of the complete computers reported in the survey were built from hand-wired transistor logic. One was constructed from the parts of an old nuclear power system control computer, PRODAC IV. Jim Sutherland took the PRODAC’s remains home from his work at Westinghouse after its retirement, and re-dubbed it the ECHO IV (for Electronic Computing Home Operator). Though technically a “home” computer, a computer borrowed from work was not a path that most would-be home-brewers could follow. This hardly had the makings of a technological revolution. The other complete “computer,” the EL-65 by Hans Ellenberger of Switzerland, was really just an electronic desktop calculator; it could perform arithmetic ably enough, but could not be programmed.[5]

The Emergence of the Hobby-Entrepreneur

As integrated circuit technology got better and cheaper, the situation for would-be computer builders gradually improved. By 1971, the first, very feeble, home computer kits appeared on the market, the first signs of Gray’s “SACK.” Though neither used a microprocessor, both took advantage of the falling prices of integrated circuits: the CPU of each consisted of dozens of small chips wired together. The first was the National Radio Institute (NRI) 832, the hardware accompaniment to a computer technician course disseminated by the NRI, and priced at about $500. Unsurprisingly, its designer, Lou Frenzel, was a radio hobby enthusiast, and a subscriber to Stephen Gray’s ACS Newsletter.
But the NRI 832 is barely recognizable as a functional computer: it had a measly sixteen 8-bit words of read-only memory, configured by mechanical switches (with an additional sixteen bytes of random-access memory available for purchase).[6]

The NRI 832. The switches on the left were used to set the values of the bits in the tiny memory. The banks of lights at the top left and right, showing the binary values of the program counter and accumulator, were the only form of output. [vintagecomputer.net]

The $750 Kenbak-1 that appeared the same year was nominally more capable, with 256 bytes of memory, though implemented with shift-register chips (accessible one bit at a time), not random-access memory. Indeed, the entire machine had a serial-processing architecture, passing only one bit at a time through the CPU, and ran at only about 1,000 instructions per second—very slow for an electronic computer. Like the NRI 832, it offered only switches as input and only a small panel of display lights for showing register contents as output. Its creator, John Blankenbaker, was a radio lover from boyhood before enrolling as an electronics technician in the Navy. He began working on computers in the 1950s, beginning with the Bureau of Standards SEAC. Intrigued by the possibility of bringing a computer home, he tinkered for years with spare parts for making his own computer, becoming his own private ACS.
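To make "serial-processing architecture" concrete, here is a minimal sketch (my own illustration, not Kenbak-1 code) of how a bit-serial machine adds two 8-bit values: a single full adder is reused for each bit position, one clock tick at a time, with one flip-flop holding the carry between ticks. Needing eight ticks per addition, versus one for a parallel adder, is a large part of why such machines ran so slowly.

```python
# Illustrative sketch of bit-serial addition (not actual Kenbak-1 logic):
# two 8-bit operands stream through one full adder, least-significant bit
# first, with a single stored carry bit between clock ticks.

def serial_add(a, b, width=8):
    carry = 0
    result = 0
    for i in range(width):             # one "clock tick" per bit position
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        s = bit_a ^ bit_b ^ carry      # full-adder sum bit
        carry = (bit_a & bit_b) | (carry & (bit_a ^ bit_b))  # carry-out
        result |= s << i
    return result & ((1 << width) - 1)  # final carry is discarded (wraps)

print(serial_add(100, 55))   # 155
print(serial_add(200, 100))  # 44 (300 wraps around modulo 256)
```

A parallel machine like the PDP-8 computes all bits of a sum in a single memory cycle; the Kenbak-1's approach traded that speed away to save on expensive logic chips.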
By 1971 he thought he had a saleable device that could be used for teaching programming, and he formed the eponymous “Kenbak” company to sell it.[7] Blankenbaker was the first of the amateur computerists to try to bring his passion to market; the first hobby-entrepreneur of the personal computer. He was not the most successful. I found no records of the sales of the NRI 832, but by Blankenbaker’s own testimony, only forty-four Kenbak-1s were sold. Here were home computer kits readily available at a reasonable price, four years before Altair. Why did they fall flat? As we have seen, most members of the Amateur Computer Society had aimed to make a PDP-8 or something like it; this was the most familiar computer of the 1960s and early 1970s, and provided the mental model for what a home computer could and should be. The NRI 832 and Kenbak-1 came nowhere close to the capabilities of a PDP-8, nor were they designed to be extensible or expandable in any way that might allow them to transcend their basic beginnings. These were not machines to stir the imaginative loins of the would-be home computer owner.

Hobby-Entrepreneurship in the Open

These early, halting steps towards a home computer, from Stephen Gray to the Kenbak-1, took place in the shadows, unknown to all but a few, the hidden passion of a handful of enthusiasts exchanging hand-printed newsletters. But several years later, the dream of a home computer burst into the open in a series of stories and advertisements in major hobby magazines. Microprocessors had become widely available. For those hooked on the excitement of interacting one-on-one with a computer, the possibility of owning their own machine felt tantalizingly close. A new group of hobby-entrepreneurs now tried to make their mark by providing computer kits to their fellow enthusiasts, with rather more success than NRI and Kenbak.
The overture came in the fall of 1973, with Don Lancaster’s “TV Typewriter,” featured on the cover of the September issue of Radio-Electronics (a Gernsback publication, though Gernsback himself was, by then, several years dead). Lancaster, like most of the people we have met in this chapter, was an amateur “ham” radio operator and electronics tinkerer. Though he had a day job at Goodyear Aerospace in Phoenix, Arizona, he figured out how to make a few extra bucks from his hobby by publishing projects in magazines and selling pre-built circuit boards for those projects via a Texas hobby firm called Southwest Technical Products (SWTPC).

The 1973 Radio-Electronics TV Typewriter cover.

His TV Typewriter was, of course, not a computer at all, but the excitement it generated certainly derived from its association with computers. One of many obstacles to a useful home computer was the lack of a practical output device: something more useful than the handful of glowing lights that the Kenbak-1 sported, but cheaper and more compact than the then-standard computer input/output device, a bulky teletype terminal. Lancaster’s electronic keyboard, which required about $120 in parts, could hook up to an ordinary television and turn it into a video text terminal, displaying up to sixteen lines of thirty-two characters each. Shift registers continued to be the only cheap form of semiconductor memory, and so that was what Lancaster used for storing the characters to be displayed on screen.
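The consequences of shift-register storage can be sketched in a few lines. The model below is my own illustration, not Lancaster's actual circuit: the display memory is a loop that can only be read in order, with each character fed back in as it streams out, so the whole 16×32 screen recirculates once per video refresh.

```python
# Illustrative model (not Lancaster's circuit): a recirculating shift
# register used as display memory. Cells can only be read out in order;
# each character read is written back in at the tail, so the full screen
# recirculates through the register on every refresh.

from collections import deque

ROWS, COLS = 16, 32
screen = deque(" " * (ROWS * COLS))   # 512 character cells

def refresh(screen):
    """Shift every cell out once (as the video scan would) and recirculate."""
    frame = []
    for _ in range(ROWS * COLS):
        ch = screen.popleft()   # character shifted out to the video circuit...
        screen.append(ch)       # ...and fed back in at the other end
        frame.append(ch)
    return "".join(frame)

# Writing means replacing a character at its position in the loop:
screen[0] = "H"
screen[1] = "I"
print(refresh(screen)[:2])  # "HI"; the buffer is unchanged after a full cycle
```

This is why shift registers were cheap but awkward: the hardware never needed an address decoder for 512 cells, but any access had to wait for the right character to come around the loop.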
Lancaster gave the parts list and schematic for the TV Typewriter away for free, but made money by selling pre-built subassemblies via SWTPC that saved buyers time and effort, and by publishing guidebooks like the TV Typewriter Cookbook.[8] The next major landmark appeared six months later in a ham radio magazine, QST, named after the three-letter ham code for “calling all stations.” A small ad touted the availability of “THE TOTALLY NEW AND THE VERY FIRST MINI-COMPUTER DESIGNED FOR THE ELECTRONIC/COMPUTER HOBBYIST” with kit prices as low as $440. This was the SCELBI 8-H, the first computer kit based around a microprocessor, in this case the Intel 8008. Its creator, Nat Wadsworth, lived in Connecticut, and became enthusiastic about the microprocessor after attending a seminar given by Intel in 1972, as part of his job as an electrical engineer at an electronics firm. Wadsworth was another ham radio enthusiast, and already enough of a personal computing obsessive to have purchased a surplus DEC PDP-8 at a discount for home use (he paid “only” $2,000, about $15,000 in 2024 dollars). Since his employer did not share his belief in the 8008, he looked for another outlet for his enthusiasm, and teamed up with two other engineers to develop what became the SCELBI-8H (for SCientific ELectronic BIological). Their ads drew thousands of responses and hundreds of orders over the following months, though they ended up losing money on every machine sold.[9] A similar machine appeared several months later, this time as a hobby magazine story, on the cover of the July 1974 issue of Radio-Electronics: “Build the Mark-8 Minicomputer,” ran the headline (notice again the “minicomputer” terminology: a PDP-8 of one’s own remained the dream). The Mark-8 came from Jonathan Titus, a grad student from Virginia, who had built his own 8008-based computer and wanted to share the design with the rest of the hobby.
Unlike SCELBI, he did not sell it as a complete machine or even a kit: he expected the Radio-Electronics reader to buy and assemble everything themselves. That is not to say that Titus made no money: he followed a hobby-entrepreneur business model similar to Don Lancaster’s, offering an instructional guidebook for $5, and making some pre-made boards available for sale through a retailer in New Jersey, Techniques, Inc.

The 1974 Mark-8 Radio-Electronics cover.

The SCELBI-8H and Mark-8 looked much more like a “real” minicomputer than the NRI 832 or Kenbak-I. A hobbyist hungry for a PDP-8-like machine of their own could recognize in this generation of machines something edible, at least. Both used an eight-bit parallel processor, not an antiquated bit-serial architecture, came with one kilobyte of random-access memory, and were designed to support textual input/output devices. Most importantly, both could be extended with additional memory or I/O cards. These were computers you could tinker with, that could become an ongoing hobby project in and of themselves. A ham radio operator and engineering student in Austin, Texas named Terry Ritter spent over a year getting his Mark-8 fully operational with all of the accessories that he wanted, including an oscilloscope display and cassette tape storage.[10]

In the second half of 1974, a community of hundreds of hobbyists like Ritter began to form around 8008-based computers, significantly larger than the tiny cadre of Amateur Computer Society members. In September 1974, Hal Singer began publishing the Mark-8 User Group Newsletter (later renamed the Micro-8 Newsletter) for 8008 enthusiasts out of his office at the Cabrillo High School Computer Center in Lompoc, California. He attracted readers from all across the country: California and New York, yes, but also Iowa, Missouri, and Indiana. Hal Chamberlain started the Computer Hobbyist newsletter two months later.
Hobby entrepreneurship expanded around the new machines as well: Robert Suding formed a company in Denver called the Digital Group to sell a packet of upgrade plans for the Mark-8.[11] The first tender blossoms of a hobby computer community had begun to emerge. Then another computer arrived like a spring thunderstorm, drawing whole gardens of hobbyists up across the country and casting the efforts of the likes of Jonathan Titus and Hal Singer in the shade. It, too, came as a response to the arrival of the Mark-8, by a rival publication in search of a blockbuster cover story of their own.

Altair Arrives

Art Salsberg and Les Solomon, editors at Popular Electronics, were not oblivious to the trends in the hobby, and had been on the lookout for a home computer kit they could put on their cover since the appearance of the TV Typewriter in the fall of 1973. But the July 1974 Mark-8 cover story at rival Radio-Electronics threw a wrench in their plans: they had an 8008-based design of their own lined up, but couldn’t publish something that looked like a copy-cat machine. They needed something better, something to one-up the Mark-8. So, they turned to Ed Roberts. He had nothing concrete, but had pitched Solomon a promise that he could build a computer around the new, more powerful Intel 8080 processor. This pitch became Altair—named, according to legend, by Solomon’s daughter, after the destination of the Enterprise in the Star Trek episode “Amok Time”—and it set the hobby electronics world on fire when it appeared as the January 1975 Popular Electronics cover story.

The famous Popular Electronics Altair cover story.

Altair, it should be clear by now, was continuous with what came before: people had been dreaming of and hacking together home computers for years, and each year the process became easier and more accessible, until by 1974 any electronics hobbyist could order a kit or parts for a basic home computer for around $500.
What set the Altair apart, what made it special, was the sheer amount of power it offered for the price, compared to the SCELBI-8H and Mark-8. The Altair’s value proposition poured gasoline onto smoldering embers: it was an accelerant that transformed a slowly expanding hobby community into a rapidly expanding industry.

The Altair’s surprising power derived ultimately from the nerve of MITS founder Ed Roberts. Roberts, like so many of his fellow electronics hobbyists, had developed an early passion for radio technology that was honed into a professional skill by technical training in the U.S. armed forces—the Air Force, in Roberts’ case. He founded Micro Instrumentation and Telemetry Systems (MITS) in Albuquerque with fellow Air Force officer Forrest Mims to sell electronic telemetry modules for model rockets. A crossover hobby-entrepreneur business, this straddled two hobby interests of the founders, but did not prove very profitable. A pivot in 1971 to selling low-cost kits to satiate the booming demand for pocket calculators, on the other hand, proved very successful—until it wasn’t. By 1974 the big semiconductor firms had vertically integrated and driven most of the small calculator makers out of business. For Roberts, the growing hobby interest in home computers offered a chance to save a dying MITS, and he was willing to bet the company on that chance. Though already $300,000 in debt, he secured a loan of $65,000 from a trusting local banker in Albuquerque, in September 1974. With that money, he negotiated a steep volume discount from Intel by offering to buy a large quantity of “ding-and-dent” 8080 processors with cosmetic damage. Though the 8080 listed for $360, MITS got them for $75 each.
So, while Wadsworth at SCELBI (and builders assembling their own Mark-8s) were paying $120 for 8008 processors, MITS was paying nearly half that for a far better processor.[12] It is hard to overstate what a substantial leap forward in capabilities the 8080 represented: it ran much faster than the 8008, integrated more capabilities into a single chip (for which the 8008 required several auxiliary chips), could support four times as much memory, and had a much more flexible 40-pin interface (versus the 18 pins on the 8008). The 8080 also kept its program stack in external memory, while the 8008 had a strictly size-limited on-CPU stack, which limited the software that could be written for it. The 8080 represented such a large leap forward that, until 1981, essentially the entire personal and home computer industry ran on the 8080 and two similar designs: the Zilog Z80 (a processor that was software-compatible with the 8080 but ran at higher speeds), and the MOS Technology 6502 (a budget chip with roughly the same capabilities as the 8080).[13]

The release of the Altair kit at a total price of $395 instantly made the 8008-based computers irrelevant. Nat Wadsworth of SCELBI reported that he was “devastated by appearance of Altair,” and “couldn’t understand how it could sell at that price.” Not only was the price right, the Altair also looked more like a minicomputer than anything before it. To be sure, it came standard with a measly 256 bytes of memory and the same “switches and lights” interface as the ancient kits from 1971. It would take quite a lot of additional money and effort to turn it into a fully functional computer system. But it came full of promise, in a real case with an extensible card slot system for adding additional memory and input/output controllers.
It was by far the closest thing to a PDP-8 that had ever existed at a hobbyist price point—just as the Popular Electronics cover claimed: “World’s First Minicomputer Kit to Rival Commercial Models.” It made the dream of the home computer, long cherished by thousands of computer lovers, seem not merely imminent, but immanent: the digital divine made manifest. And this is why the arrival of the MITS Altair, not of the Kenbak-I or the SCELBI-8H, is remembered as the founding event of the personal computer industry.[14]

All that said, even a tricked-out Altair was hardly useful, in an economic sense. If pocket calculators began as a tool for business people, and then became so cheap that people bought them as a toy, the personal computer began as something so expensive and incapable that only people who enjoyed them as a toy would buy them. Next time, we will look at the first years of the personal computer industry: a time when the hobby computer producers briefly flourished and then wilted, mostly replaced and outcompeted by larger, more “serious” firms. But a time when the culture of the typical computer user remained very much a culture of play.

Appendix: Micral N, The First Useful Microcomputer

There is another machine sometimes cited as the first personal computer: the Micral N. Much like Nat Wadsworth, French engineer François Gernelle was smitten with the possibilities opened up by the Intel 8008 microprocessor, but could not convince his employer, Intertechnique, to use it in their products. So, he joined other Intertechnique defectors to form Réalisation d’Études Électroniques (R2E), and began pursuing some of their erstwhile company’s clients. In December 1972, R2E signed an agreement with one of those clients, the Institut National de la Recherche Agronomique (INRA, a government agronomical research center), to deliver a process control computer for their labs at a fraction of the price of a PDP-8.
Gernelle and his coworkers toiled through the winter in a basement in the Paris suburb of Châtenay-Malabry to deliver a finished system in April 1973, based on the 8008 chip and offered at a base price of 8,500 francs, about $2,000 in 1973 dollars (one fifth the going rate for a PDP-8).[15] The Micral N was a useful computer, not a toy or a plaything. It was not marketed and sold to hobbyists, but to organizations in need of a real-time controller. That is to say, it served the same role in the lab or factory floor that minicomputers had served for the previous decade. It can certainly be called a microcomputer by dint of its hardware. But the Altair lineage stands out because it changed how computers were used and by whom; the microprocessor happened to make that economically possible, but it did not automatically make every machine into which it was placed a personal computer.

The Micral N looks very much like the Altair on the outside, but was marketed entirely differently [Rama, CC BY-SA 2.0 FR].

Useful personal computers would come, in time. But the demand that existed for a computer in one’s own home or office in the mid-1970s came from enthusiasts with a desire to tinker and play on a computer, not to get serious business done on one. No one had yet written and published the productivity software that would even make a serious home or office computer conceivable. Moreover, it was still far too expensive and difficult to assemble a comprehensive office computer system (with a display, ample memory, and external mass storage for saving files) to attract people who didn’t already love working on computers for their own sake. Until these circumstances changed, which would take several years, play reigned unchallenged among home computer users. The Micral N is an interesting piece of history, but it is an instructive contrast with the story of the personal computer, not a part of it.

Britain’s Steam Empire

The British empire of the nineteenth century dominated the world’s oceans and much of its landmass: Canada, southern and northeastern Africa, the Indian subcontinent, and Australia. At its world-straddling Victorian peak, this political and economic machine ran on the power of coal and steam; the same can be said of all the other major powers of the time, from also-ran empires such as France and the Netherlands, to the rising states of Germany and the United States. Two technologies bound the far-flung British empire together, steamships and the telegraph; and the latter, which might seem to represent a new, independent technical paradigm based on electricity, depended on the former. Only steamships, which could adjust course and speed at will regardless of prevailing winds, could effectively lay underwater cable.[1]

A 1901 map of the cable network of the Eastern Telegraph Company (which later became Cable & Wireless) shows the pervasive commercial and imperial power of Victorian London.

Not just an instrument of imperial power, the steamer also created new imperial appetites: the British empire and others would seize new territories just for the sake of provisioning their steamships and protecting the routes they plied. Within this world system under British hegemony, access to coal became a central economic and strategic factor. As the economist Stanley Jevons wrote in his 1865 treatise on The Coal Question:

Day by day it becomes more obvious that the Coal we happily possess in excellent quality and abundance is the Mainspring of Modern Material Civilization. …Coal, in truth, stands not beside but entirely above all other commodities. It is the material energy of the country — the universal aid — the factor in everything we do.
With coal almost any feat is possible or easy; without it we are thrown back into the laborious poverty of early times.[2]

Steamboats and the Projection of Power

As the states of Atlantic Europe—Portugal and Spain, then later the Netherlands, England, and France—began to explore and conquer along the coasts of Africa and Asia in the sixteenth and seventeenth centuries, their cannon-armed ships proved one of their major advantages. Though the states of India and Indonesia had access to their own gunpowder weaponry, they did not have the ship-building technology to build stable firing platforms for large cannon broadsides. The mobile fortresses that the Europeans brought with them allowed them to dominate the sea lanes and coasts, wresting control of the Indian Ocean trade from the local powers.[3] What they could not do, however, was project power inland from the sea. The galleons and later heavily armed ships of the Europeans could not sail upriver. In this era, Europeans rarely could dominate inland states. When it did happen, as in India, it typically required years or decades of warfare and politicking, with the aid of local alliances.

The steamboat, however, opened the rivers of Africa and Asia to lightning attacks or shows of force: directly by armed gunboats themselves, or indirectly through armies moving upriver supplied by steam-powered craft. We already know, of course, how Laird used steamboats in his expedition up the Niger in 1832. Although his intent was purely commercial, not belligerent, he had demonstrated that the interior of Africa could be navigated with steam. When combined with quinine to protect European settlers from malaria, the steamboat would help open a new wave of imperial claims on African territory. But even before Laird’s expedition, the British empire had begun to experiment with the capabilities of riverine steamboats.
British imperial policy in Asia still operated under the corporate auspices of the East India Company (EIC), not under the British government, and in 1824 the EIC went to war with Burma over control of territories between the Burmese Empire and British India, in what is now Bangladesh. It so happened that the company had several steamers on hand, built in the dockyards of Calcutta (now Kolkata), and the local commanders put them to work in war service (much as Andrew Jackson had done with Shreve’s Enterprise in 1814).[4] Most impressive was Diana, which penetrated 400 miles up the Irrawaddy to the Burmese imperial capital at Amarapura: “she towed sailing ships into position, transported troops, reconnoitered advance positions, and bombarded Burmese fortifications with her swivel guns and Congreve rockets.”[5] She also captured the Burmese warships, which could not outrun her and whose small cannons on fixed mounts could not effectively put fire on her either.

A depiction of an attack on Burmese fortifications by the British fleet. The steamship Diana is at right.

In the Burmese war, however, steamships had served as the supporting cast. In the First Opium War, the steamship Nemesis took a star turn. The East India Company traditionally made its money by bringing the goods of the East—mainly tea, spices, and cotton cloth—back west to Europe. In the nineteenth century, however, the directors had found an even more profitable way to extract money from their holdings in the subcontinent: by growing poppies and trading the extracted drug even further east, to the opium dens of China. The Qing state, understandably, grew to resent this trade that immiserated its citizens, and so in 1839 the emperor promulgated a ban on the drug. The iron-hulled Nemesis was built and dispatched to China by the EIC with the express purpose of carrying war up China’s rivers.
She mounted a powerful main battery of twin swivel-mount 32-pounders and numerous smaller weapons, and with a shallow draft was able to navigate not just up the Pearl River, but into the shallow waterways around Canton (Guangzhou), destroying fortifications and ships and wreaking general havoc. Later Nemesis and several other steamers, towing other battleships, brought British naval power 150 miles up the Yangtze to its junction with the Grand Canal. The threat to this vital economic lifeline brought the Chinese government to terms.[6]

Nemesis and several British boats destroying a fleet of Chinese junks in 1841.

Steamboats continued to serve in imperial wars throughout the nineteenth century. A steam-powered naval force dispatched from Hong Kong helped to break the Indian Rebellion of 1857. Steamers supplied Herbert Kitchener’s 1898 expedition up the Nile to the Sudan, with the dual purpose of avenging the death of Charles “Chinese” Gordon fourteen years earlier and of preventing the French from securing a foothold on the Nile. His steamboat force consisted of a mix of naval gunboats and a civilian ship requisitioned from the ubiquitous Cook & Son tourism and logistics firm.[7] Kitchener could only dispatch such an expedition because of the British power base in Cairo (from whence it ruled Egypt through a puppet khedive), and that power base existed for one primary reason: to protect the Suez Canal.

The Geography of Steam: Suez

In 1798, Napoleon’s army of conquest, revolution, and Enlightenment arrived in Egypt with the aim of controlling the Eastern half of the Mediterranean and cutting off Britain’s overland link to India. There they uncovered the remnants of a canal linking the Nile Delta to the Red Sea. Constructed in antiquity and restored several times after, it had fallen into disuse sometime in the medieval period.
It’s impossible to know for certain, but when operable, this canal had probably served as a regional waterway connecting the Egyptian heartland around the Nile with the lands around the head of the Red Sea. By the eighteenth century, in an age of global commerce and global empires, however, a nautical connection between the Mediterranean and Red Sea had more far-reaching implications.[8]

A reconstruction of the possible location of the ancient Nile-Suez canal. [Picture by Annie Brocolie / CC BY-SA 2.5]

Napoleon intended to restore the canal, but before any work could commence, France’s forces in Egypt withdrew in the face of a sustained Anglo-Ottoman assault. Though British commercial and imperial interests presented a far stronger case for a canal than any benefits France might have hoped to get from it, the British government fretted about upsetting the balance of power in the Middle East and disrupting their textile industry’s access to Egyptian cotton. They contented themselves instead with a cumbrous overland route to link the Red Sea and the Mediterranean. Meanwhile, a series of French engineers and diplomats, culminating in Ferdinand de Lesseps, pressed for the concession required to build a sea-to-sea Suez Canal, and construction under French engineers finally began in 1861. The route formally opened in November 1869 in a grand celebration that attracted most of the crowned heads of continental Europe.[9]

It was just as well that the project was delayed: it allowed for the substitution, in 1865, of steam dredges for conscripted labor at the work site. Of the hundred million cubic yards of earth excavated for the canal, four-fifths were dug out with iron and steam rather than muscle, generating 10,000 horsepower at the cost of £20,000 of coal per month.[10] Without mechanical aid, the project would have dragged on well into the 1870s, if it were completed at all.
Moreover, Napoleon’s precocious belief in the project notwithstanding, the canal’s ultimate fiscal health depended on the existence of ocean-going steamships as well. By sail, depending on the direction of travel and the season, the powerful trade winds on the southern route could make it the faster option, or at least the more efficient one given the tolls on the canal.[11] But for a steamship, the benefits of cutting off thousands of miles from the journey were three-fold: it didn’t just save time, it also saved fuel, which in turn freed more space for cargo. Given the tradeoffs, as historian Max Fletcher wrote, “[a]lmost without exception, the Suez Canal was an all-steamer route.”[12]

The modern Suez Canal, with the Mediterranean Sea on the left and the Red Sea on the right. [Picture by Pierre Markuse / CC BY 2.0]

Ironically, the British, too conservative in their instincts to back the canal project, would nonetheless derive far more obvious benefit from it than the French government or investors, who struggled to make their money back in the early years of the canal. The new canal became the lifeline to the empire in India and beyond. This new channel for the transit of people and goods was soon complemented by an even more rapid channel for the transmission of intelligence.
The first great achievement of the global telegraph age was the transatlantic cable laid in 1866 by Brunel’s Great Eastern, whose cavernous bulk allowed it to lay the entire line from Ireland to Newfoundland in a single piece.[13] This particular connection served mainly commercial interests, but the Great Eastern went on to participate in the laying of a cable from Suez to Aden and on to Bombay in 1870, providing relatively instantaneous electric communication (modulo a few intermediate hops) from London to its most precious imperial possession.[14] The importance of the Suez for quick communications with India in turn led to further aggressive British expansion in 1882: the bombarding of Alexandria and the de facto conquest of an Egypt still nominally loyal to the Sultan in Istanbul. This was not the only such instance. Steam power opened up new ways for empires to exert their might, but also pulled them to new places sought out only because steam power itself had made them important.

The Geography of Steam: Coaling Stations

In that vein, coaling stations—coastal and island stations for restocking ships with fuel—became an essential component of global empire. In 1839, the British seized the port of Aden (on the gulf of the same name) from the Sultan of Lahej for exactly that purpose, to serve as a coaling station for the steamers operating between the Red Sea and India.[15] Other, pre-existing waystations waxed or waned in importance along with the shift from the geography of sail to that of steam. St. Helena in the Atlantic, governed by the East India Company since the 1650s, could only be of use to ships returning from Asia in the age of sail, due to the prevailing trade winds that pushed outbound ships towards South America. The advent of steam made an expansion of St. Helena’s role possible, but then the opening of Suez diverted traffic away from the South Atlantic altogether.
The opening of the Panama Canal similarly eclipsed the Falkland Islands’ position as the gateway to the Pacific.[16] In the case of shore-bound stations such as Aden, the need to protect the station itself sometimes led to new imperial commitments in its hinterlands, pulling empire onward in the service of steam. Aden’s importance only multiplied with the opening of the Suez Canal, which now made it part of the seven-thousand-mile relay system between Great Britain and India. Aggressive moves by the Ottoman Empire seemed to imperil this lifeline, and so the existence of the station became the justification for Britain to create a protectorate (a collection of vassal states, in effect) over 100,000 square miles of the Arabian Peninsula.[17]

Britain created the 100,000-square-mile Aden protectorate to safeguard its steamship route to India.

Coaling stations acquired local coal where it was available—from North America, South Africa, Bengal, Borneo, or Australia; where it was not, it had to be brought in, ironically, by sailing ships. But although one lump of coal may seem as good as another, it was not, in fact, a single fungible commodity. Each seam varied in the ratio and types of chemical impurities it contained, which affected how the coal burned. Above all, the Royal Navy was hungry for the highest quality coal. By the 1850s, the British Admiralty had determined that a hard coal from the deeper layers of certain coal measures in South Wales exceeded all others in the qualities required for naval operations: a maximum of energy and a minimum of the residues that would dirty engines and the black smoke that would give away the position of their ships over the horizon.
In 1871 the Navy launched its first all-steam oceangoing warship, HMS Devastation, which needed, at full bore, 150 tons of this top-notch coal per day, without which it would become “the veriest hulk in the navy.” The coal mines lining a series of north-south valleys along the Bristol Channel, which had previously supplied the local iron industry, thus became part of a global supply chain. The Admiralty demanded access to imported Welsh coal across the globe, in every port where the Navy refueled, even where local supplies could be found.[18]

The dark green area indicates the coal seams of South Wales, where the best steam coal in the world could be found.

The British supply network far exceeded that of any other nation in its breadth and reliability, which gave their navy a global operational capacity that no other fleet could match. When the Russians sent their Baltic fleet to attack Japan in 1905, the British refused it coaling service and pressured the French to do likewise, leaving the ships reliant on sub-par German supplies. The fleet suffered repeated delays and quality shortfalls in its coal before meeting its grim fate in Tsushima Strait. Aleksey Novikov-Priboi, a sailor on one of the Russian ships, later wrote that “coal had developed into an idol, to which we sacrificed strength, health, and comfort. We thought only in terms of coal, which had become a sort of black veil hiding all else, as if the business of the squadron had not been to fight, but simply to get to Japan.”[19] Even the rising naval power of the United States, stoked by the dreams of Alfred Mahan, could scarcely operate outside its home waters without British sufferance. The proud Great White Fleet of the United States that circumnavigated the globe to show the flag found itself repeatedly humbled by the failures of its supply network, reliant on British colliers or left begging for low-quality local supplies.[20] But if British steam power on the oceans still outshone that of the U.S.
even beyond the turn of the twentieth century, on land it was another matter, as we shall see next time.

The Pursuit of Efficiency and the Science of Steam

On April 19th, 1866, Alfred Holt, a Liverpudlian engineer who had apprenticed on the Liverpool & Manchester railroad before taking up steamship design in the 1850s, launched a singular ship that he dubbed the Agamemnon. As the third son of a prosperous banker, cotton broker, and insurer, he had access to far more personal capital to launch this new enterprise than the typical engineer. This was a lucky thing for him, because the typical investor of the time considered his ambition—to enter the China tea trade on the basis of steam power—foolhardy. A typical oceangoing steamship used five pounds of coal per horsepower per hour and could not compete with sail over such long distances: it would either have to fill most of its potential cargo space with coal or make repeated, costly stops to refuel.[1]

A contemporary photograph of Holt’s SS Agamemnon.

Yet, in the end, Holt pulled off his gamble. He benefited from good timing (perhaps a mix of luck and foresight): the opening of the Suez Canal in 1869 gave steamships a tremendous leg up in trade between Europe and the Indian and Pacific Oceans. But in designing ships dainty enough in their coal consumption to pay their way to the Pacific, he also benefited from the late convergence of two complementary developments that had each begun in the early 1800s but did not intersect until the 1850s. First was a series of incremental, empirical improvements to steam engine design: after the massive leap forward from Newcomen to Watt, further increases in steam engine efficiency would be less dramatic. Simultaneously, a theory of heat gradually developed that could explain what made engines more or less efficient, and thus point engineers in the most fruitful direction.

Double-Cylinder Engines

Boulton & Watt erected most of its early pumping engines in Cornwall. Trevithick developed his high-pressure “puffer” there.
So, it is only fitting that the last major architectural innovation in piston steam engine design—featuring an entirely new structural component—was Cornish, too. In that region, an ample supply of British engineering talent met an always-eager demand for efficient engines. The ever-deeper mines for extracting metal ore needed ever more pumping power, despite significantly higher coal prices than in the coal-rich North. Joseph Hornblower, born in the 1690s, was one of the first engineers to build Newcomen engines for the mines of Cornwall in the 1720s. Sixty years later, his grandson Jonathan built the first known double-cylinder engine (later called a compound engine). Cornwall’s homegrown natural philosopher, Davies Giddy (later Gilbert), served Hornblower in the same office he would later serve for Richard Trevithick: as scientific advisor. In principle, the idea was quite simple: instead of immediately condensing the remaining steam after the expansion cycle of the piston, the still-warm steam was fed into another cylinder to let it do still more work. However, this added friction, complexity, and cost to the machine. In practice, therefore, Hornblower’s attempted improvement proved no more efficient than a traditional Watt engine.[2]

Hornblower double-cylinder engine from Robert Thurston, A History of the Growth of the Steam-Engine, p. 136.

A generation later, however, another Cornishman took up the idea and carried it further. Arthur Woolf, like many eighteenth-century engineers, got his start as a millwright, but by 1797 was working for the firm of Jabez Carter Hornblower (brother to Jonathan), erecting a steam engine at a brewery in London. He continued to serve as engineer for the brewery for a decade afterward, and witnessed the operation of Trevithick’s steam carriage in the city in 1803.
Woolf realized that he could combine the double-cylinder engine of his former employer’s brother with Trevithick’s truly high-pressure engines (operating at forty pounds per square inch or more). The higher-pressure steam, still quite hot after expanding in the first cylinder, would be able to do more work in the second cylinder rather than simply “puffing” out into the atmosphere. Both Watt and Trevithick had (from opposite points of view) seen low- and high-pressure steam as rivals, but in Woolf’s machine they complemented one another.[3]

But, as Hornblower had already learned, the path did not always run straight and easy from idea to execution. Woolf led himself astray with an entirely unsound theoretical model for the inner workings of his engine: he believed that steam at twenty pounds per square inch (psi) would expand to twenty times its volume before equaling the pressure of the atmosphere, steam at thirty psi would expand thirty times, and so on ad infinitum. This turned out to be a substantially exaggerated expectation, and led him to begin with a drastically undersized high-pressure cylinder, which delivered far too little steam to effectively work its low-pressure mate. Rather than leading him to doubt his theory, the failure of this engine led him into a wild goose chase for a non-existent leak in his pistons.[4]

Woolf’s double-cylinder engine, unlike Hornblower’s, did at last succeed, after years of trial and error, in achieving better efficiency than a Watt engine. But because it was more expensive to build (and thus buy), and more complex to operate, it found favor only in markets without easy access to other, cheaper options. One such example was France, to which Woolf’s erstwhile partner, Humphrey Edwards, decamped in 1815: there he sold at least fifteen engines and licensed twenty-five more to a French mining company.
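Just how far off Woolf's rule of thumb was can be sketched with a little arithmetic. The sketch below is mine, not from the source: it assumes the simplest possible model (isothermal expansion per Boyle's law, pV = constant) and that the pressures quoted are gauge readings above a 14.7 psi atmosphere.

```python
ATM_PSI = 14.7  # standard atmospheric pressure, absolute (psi)

def max_expansion_ratio(gauge_psi):
    """Volume ratio at which steam admitted at the given gauge pressure
    drops to atmospheric pressure, assuming Boyle's law (p1*V1 == p2*V2,
    i.e. isothermal expansion). Illustrative figures only."""
    absolute_psi = gauge_psi + ATM_PSI
    return absolute_psi / ATM_PSI

# Woolf expected 20 psi steam to expand ~20x and 30 psi steam ~30x.
# Under Boyle's law the ratios are far smaller:
print(round(max_expansion_ratio(20), 2))  # roughly 2.4
print(round(max_expansion_ratio(30), 2))  # roughly 3.0
```

On either reading of the pressures (gauge or absolute), the true ratio falls an order of magnitude short of the twentyfold expansion Woolf expected, which is why his undersized high-pressure cylinder starved its low-pressure companion.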
Woolf meanwhile returned to Cornwall in 1811, where he found the advantages of his double-cylinder engine soon surpassed by the incremental improvements made by other local engineers to the Boulton and Watt design. He abandoned it after 1824 and built single-cylinder engines until 1833, when he retired to the island of Guernsey.[5] Meanwhile, steam engine builders carried on with tweaks to get yet one more increment of efficiency out of their engines. They extracted advantages from adjustments to the regulatory machinery of the engine: elements like "release mechanisms," "dashpots," and "wrist plates." The Corliss engine, designed by George Corliss in 1849, became an icon of American industrial design after his company produced a gargantuan specimen to power the 1876 Centennial Exhibition in Philadelphia. Mighty as it was, however, it did not represent a great leap forward in steam engine architecture. Corliss' design drew its relative advantages over prior engines from a clever combination of previous innovations in the valves that allowed steam to enter and leave the cylinder, and especially in the valve gear that controlled them.[6]

Corliss engine valve gear from H.W. Dickinson, A Short History of the Steam Engine, p. 140.

In the meantime, the double-cylinder engine, having failed to prove itself in the 1810s and 1820s, lay dormant. It would be restored to life decades later, by the engineers most desperate to eke as much power as possible out of every ounce of coal: the designers of ocean steamships. But to facilitate the consummation of that match, a solid theory of the steam engine was wanted, one that would dispel, once and for all, the confusions like Woolf's that continued to trip up engineers' efforts at improvement.

Measuring Power

The lack of a sound theoretical basis for steam power is evident in the fitful history of cylinder "lagging," or insulation.
Steam engineers borrowed the term lag (a barrel stave) from coopers, because they often insulated early steam boilers with such timbers, held in place with metal straps (this is evident in images of early locomotives like Rocket, with their distinctive wooden cladding).

A contemporary lithograph of Robert Stephenson's engine Northumbrian. Note the wooden lagging on the boiler.

As early as 1769, Watt had recognized the value of insulating not just the boiler, but also the working cylinder of the engine (emphasis mine):

My method of lessening the consumption of steam, and consequently fuel, in fire-engines, consists of the following principles:—First, That vessel in which the powers of steam are to be employed to work the engine, which is called the cylinder in common fire-engines, and which I call the steam-vessel, must, during the whole time the engine is at work, be kept as hot as the steam that enters it; first by enclosing it in a case of wood, or any other materials that transmit heat slowly; secondly, by surrounding it with steam or other heated bodies; and, thirdly, by suffering neither water nor any other substance colder than the steam to enter or touch it during that time.[7]

Yet, despite Watt's imprimatur, steam engine builders lagged their cylinders sporadically throughout the first half of the nineteenth century; it was a matter of whim, not principle.[8] In this era, engineers tended to think of the steam engine as analogous to its predecessor, the water wheel. Steam replaced liquid water as the mechanical working fluid, but just as water drove the wheel by pushing on its vanes, in their minds steam performed work by expanding and pushing on the piston.
A typical description of the time stated that “[t]he force of the steam-engine is derived from the property of water to expand itself, in an amazing degree, when heated above the temperature at which it becomes steam.”[9] Engineers knew that the cylinder ought to be kept hot to prevent condensation of the steam inside, but within this framework it was not obvious that it ought to be kept as hot as possible. Watt, emphasizing the contrast between the hot cylinder and the cool condenser, had drawn attention to the role of heat in the engine, but the introduction and success of high-pressure engines with no condenser, where the primary factor seemed to be the expansive force of steam, muddled matters once again. The gradual development of a new, more robust theory began with a practical problem: how to measure the amount of power an engine generates. This became a particularly pressing problem for Boulton & Watt in the late eighteenth century, as they expanded from the traditional business of pumping engines into the new market of driving cotton mills. The traditional way of measuring the output of a steam engine, in terms of “duty” (the pounds of water lifted by one foot per bushel of coal burned) had gradually been supplemented with the concept of “power,” typically expressed in horsepower: pounds lifted over a given distance, but over a given period of time rather than with a given amount of fuel. Thomas Savery had begun to grope towards the concept in his 1702 book on the virtues of his steam pump, The Miner’s Friend: I have only this to urge, that water, in its fall from any determinate height, has simply a force answerable and equal to the force that raises it. 
So that an engine which will raise as much water as two horses working together at one time in such a work can do, and for which there must be constantly kept ten or twelve horses for doing the same, then, I say, such an engine will do the work or labour of ten or twelve horses…[10]

Note here that Savery proposes to measure the muscular equivalent of the engine not in terms of the output of just the pair of horses running the machinery, but in terms of the total stock of horses that a mine owner would require to maintain the same power over a long period of time. This model of horsepower in terms of economic equivalency did not stick, however, and by the late eighteenth century horsepower became fixed to Watt's figure of 33,000 foot-pounds per minute. Yet this remained a measure of power best suited to pumping work: if a mine needed to raise 20,000 pounds of water per hour from a 200-foot-deep shaft, one could readily calculate the engine horsepower required. Cotton spinning machinery—which varied in size, function, and design—did not lend itself to such simple arithmetic. In order to properly size engines to mills, Boulton & Watt needed some way to measure the horsepower produced by an engine while driving various combinations of machinery. From the beginning, Watt had attached gauges to his engines to measure the pressure inside the engine, by connecting a small indicator cylinder to the main engine cylinder so that steam could flow between them. The level of pressure in the indicator could serve as a proxy for power output. But actually capturing the data was a maddening exercise, because the pressure varied constantly as the piston worked up and down. A means of capturing this continuous data came from a long-time Watt employee, John Southern.
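The pumping calculation described above really is one line of arithmetic: Watt's horsepower is 33,000 foot-pounds per minute, so a pumping job is sized by dividing its work rate by that figure. A small sketch using the mine example from the text:

```python
# Sizing a pumping engine in Watt's horsepower:
# 1 hp = 33,000 foot-pounds of work per minute.

FOOT_POUNDS_PER_MINUTE_PER_HP = 33_000

def pumping_horsepower(pounds_per_hour: float, lift_feet: float) -> float:
    """Horsepower needed to raise a given weight of water per hour
    through a given height."""
    foot_pounds_per_minute = pounds_per_hour * lift_feet / 60
    return foot_pounds_per_minute / FOOT_POUNDS_PER_MINUTE_PER_HP

# The example from the text: 20,000 pounds per hour from a 200-foot shaft.
print(f"{pumping_horsepower(20_000, 200):.2f} hp")  # ~2.02 hp
```

As the text notes, no comparably simple formula existed for a roomful of heterogeneous spinning machinery, which is what made Southern's indicator so valuable.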
He had joined the company as a draftsman in 1782, and despite a predilection for music that the strait-laced Watt found suspicious, quickly became indispensable.[11]

Southern's indicator, as envisioned by Terrell Croft, Steam-Engine Principles and Practice, p. 40.

In 1796, Southern devised a simple device to solve the power measurement problem. He attached a piece of paper above the indicator, rigged so that it would move back and forth as the main piston operated. Then he attached a pencil to the tip of the pressure gauge. As the pressure went up and down, so would the pencil, while the paper moved left and right beneath it with the cycle of the engine. The result, when running smoothly, would be a closed shape, which Southern called an indicator diagram, and the average pressure during the operation of the engine could be computed from the average distance between the top and bottom lines of that shape, which would in turn be proportional to the power. By calibrating the diagram while an engine was pumping water, where the power output was well-defined, Boulton & Watt could then determine the power produced by the same engine while operating a given set of mill machinery.[12]

An ideal indicator diagram from Terrell Croft, Steam-Engine Principles and Practice, p. 60.

Thermodynamics

Engineers now had a tool at hand for diagnosing the internals of a running engine. That tool, in turn, provided the seed for the birth of the science of thermodynamics, which began as the science of the steam engine. The first great leap in that direction was made by Sadi Carnot. Carnot's story carries more than a whiff of the tragic. Though later honored as a founding father of thermodynamics, he achieved no recognition in his lifetime, and died of cholera as a still-young man in 1832.
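Southern's closed shape is, in modern terms, a pressure–volume loop, and its enclosed area is proportional to the work the engine does per cycle. A minimal numerical sketch, using the shoelace formula on a made-up, idealized rectangular diagram (all figures below are illustrative, not from any real engine):

```python
# The area enclosed by an indicator diagram is proportional to the
# work per cycle. Here we compute it with the shoelace formula.

def enclosed_area(points):
    """Area of a closed polygon given as (x, y) vertices (shoelace formula)."""
    n = len(points)
    total = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2

# A crude idealized diagram: piston volume (cubic feet) vs. pressure
# (pounds per square foot). Illustrative numbers only.
diagram = [(1.0, 3000.0), (4.0, 3000.0), (4.0, 1000.0), (1.0, 1000.0)]
work_per_cycle = enclosed_area(diagram)  # foot-pounds per cycle
print(work_per_cycle)  # 6000.0
```

A real diagram traced by Southern's pencil would be a smooth loop rather than a rectangle, but the principle is the same: sample the curve as vertices and sum the area.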
His father Lazare was an accomplished engineer and a major political figure in revolutionary France, but what we know of the son comes almost entirely from a fifteen-page biography sketched decades after the fact by his younger brother Hippolyte, which begins, pathetically, with the statement that: “the life of Sadi Carnot was not marked by any notable event…”[13] Carnot as an École student in 1813. In fact, Carnot’s short life was remarkably eventful. He grew up in Napoleon’s court, attended the elite engineering school École polytechnique at age 16, and was at the Chateau Vincennes during the 1814 assault on Paris that ended Napoleon’s first reign. He returned to Paris as a staff lieutenant in 1819, filling his free time with his passions: music, art, and scientific studies. There, in 1824, he produced his seminal work, Réflexions sur la puissance motrice du feu (Reflections on the Motive Power of Fire). In it he endeavored to explain how heat produces motion. I will allow him to elaborate in his own words: Every one knows that heat can produce motion. That it possesses vast motive-power no one can doubt, in these days when the steam-engine is everywhere so well known. To heat also are due the vast movements which take place on the earth. It causes the agitations of the atmosphere, the ascension of clouds, the fall of rain and of meteors, the currents of water which channel the surface of the globe, and of which man has thus far employed but a small portion.[14] As we have seen, the tendency of engineers to conceive of steam hydraulically, as a fluid that generated work through pressure much like water in a water wheel, had engendered some confusion about how to build and operate an engine most efficiently. Ironically, Carnot moved the understanding of the steam engine forward by taking the analogy of a steam engine to a water wheel even more seriously than his contemporaries. 
However, for him the key power-generating agent was not the pressure of steam, but the fall of heat. Just as a waterwheel required a head from which water descended by gravity to turn the wheel, so the steam engine required a reservoir of high heat, which then flowed down to a cold body and thereby did work. For Carnot this fall of heat in a steam engine was quite literal: it consisted of an imponderable fluid called caloric, that drained out from the hot body to the cool one: The production of motion in steam-engines is always accompanied by a circumstance on which we should fix our attention. This circumstance is the re-establishing of equilibrium in the caloric; that is, its passage from a body in which the temperature is more or less elevated, to another in which it is lower. …The steam is here only a means of transporting the caloric.[15] This caloric theory of heat as a substance still predominated in Carnot’s day, despite subversives like Count Rumford who advocated for a mechanical theory of heat, which understood heat purely as a form of motion. If the flow of heat from the hot to the cold body produced all the work in the steam engine, then making an efficient engine meant minimizing any spillage of heat that did no useful work. It also implied that to maximize the work produced by the engine, one must maximize the difference between the source of high temperature and the sink of low temperature—the height through which the caloric fluid falls. Carnot’s book was largely ignored. But his insights had their first chance to be rescued from obscurity shortly after his death. Émile Clapeyron, just a few years younger than Carnot, was an accomplished engineer who specialized in locomotives, and a fellow-graduate of the École Polytechnique. 
In 1834, he published a paper in the school's journal showing that Carnot's heat engine theory could be expressed in the language of calculus and seen graphically in the indicator diagram: the area inside the diagram (which could be expressed as an integral) corresponded to the work performed by the heat transfer in the engine. Clapeyron's work revived Carnot's abstractions, put them on a firmer mathematical basis, and publicized them to the community of engine builders. Yet once again, they reached a dead end. Steeped in the traditions of their craft, neither Clapeyron nor his peers seem to have understood the heat engine theory as having practical applications to real-life engineering.[16] Vindication for Carnot would have to wait another fifteen years, when a series of exchanges between William Thomson (later Lord Kelvin), Rudolf Clausius, and James Joule shortly before and after 1850 resolved various problems with the Carnot-Clapeyron heat engine, including reconciling it with the mechanical theory of heat: what flowed from the hot to the cold body was not a literal fluid but an abstraction called energy, which could take on many forms, but could only perform useful work over a fall in temperature. Through the medium of energy, a certain quantity of heat was directly equivalent to a certain amount of power.[17] The scientist who best synthesized this new science of heat for a wider engineering audience was Thomson's colleague at the University of Glasgow, Macquorn Rankine.

Perfecting the Marine Engine

Rankine's position was something of a novelty: he was only the second person to hold a chair of Civil Engineering at Glasgow, a position established by Queen Victoria in 1840. From the days of Watt and beyond, the University of Glasgow had been more practical-minded than the great Oxbridge schools of the South.
But the establishment of a faculty chair in engineering did not just indicate that the university supported more hardheaded tasks than absorbing classical learning; it also signaled a desire to elevate engineering into a more theoretical, scientific discipline.[18]

A leonine Rankine.
Rankine, embodying this new spirit and straddling the worlds of theory and practice, preached thermodynamics to the engineering world: his A Manual of the Steam Engine and Other Prime Movers (1859), a 500-page, densely mathematical treatise, explicated the new theory and its applicability to practical matters in great detail and popularized the term "thermodynamics." However, he also knew how to reach a wider audience: in an 1854 address to the Liverpool meeting of the British Association for the Advancement of Science (BAAS) he concisely expressed the laws of thermodynamics in terms of ordinary English and simple arithmetic: "As the absolute temperature of receiving heat is to the absolute temperature of discharging heat, so is the whole heat received to the necessary loss of heat." That is, the more precipitous the fall of temperature from the high (receiving) to the low (discharging) point of the engine cycle, the more efficient the engine could be.[19] Among those in Rankine's circle of influence in the 1850s was an experienced builder of marine steam engines in Glasgow named John Elder, who became the first to incorporate a double-cylinder engine into a successful steamship. Elder had marine engines in his blood: his father David had joined Robert Napier's engine building firm and began designing steamboat engines in 1821. In addition to family tradition and his natural talents, Elder had two other advantages in this undertaking. First, he had access to Glasgow's "thermodynamic network" (as the historian Crosbie Smith put it); he had tutors in the new thermodynamic science and probably got specific advice from Rankine to introduce steam jacketing to prevent condensation in the cylinder. Second, he had an eager buyer.[20]

An anonymous engraving of John Elder.
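Rankine's 1854 proportion translates directly into the modern Carnot bound: the necessary heat loss is to the heat received as the absolute discharge temperature is to the absolute receiving temperature, so the best possible efficiency is 1 − T_cold/T_hot. A small sketch (the temperatures below are illustrative only, in degrees on an absolute Fahrenheit scale, not figures from the text):

```python
# Rankine's proportion: Q_loss / Q_received = T_discharge / T_receive
# (absolute temperatures), so at best a fraction 1 - T_cold/T_hot of
# the received heat can become work.

def max_efficiency(t_hot_abs: float, t_cold_abs: float) -> float:
    """Upper bound on the fraction of received heat convertible to work."""
    return 1 - t_cold_abs / t_hot_abs

# Illustrative figures: boiler steam at 820 degrees absolute,
# condenser at 560 degrees absolute.
print(f"{max_efficiency(820, 560):.1%}")
```

The formula makes Rankine's practical advice visible at a glance: widening the temperature gap, whether by raising boiler pressure or improving the condenser, raises the ceiling on efficiency.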
The Pacific Steam Navigation Company (PSNC) of Liverpool had overextended itself in the South American Pacific-coast trade, where high-quality steam coal had to be supplied by sail, a 19,000-mile round trip. Profit margins were slim to none, and the venture stayed in the black only by virtue of a government mail contract. This made the company willing to wait out teething problems in order to get a more efficient engine. From the time Elder and his partner took out their engine patent in January 1853, it took four years before PSNC ratified the superiority of their ship Valparaiso, which consumed 25% less coal than an equivalent single-cylinder model.[21] Elder's success set the stage for Holt's further vault forward in the 1860s. Among the latter's achievements was to convince the Board of Trade that marine engines could operate safely at higher pressures, allowing a greater fall of temperature and thus more efficient use of fuel. This, in turn, set the stage for the triple-expansion engines of later in the century, which extracted still more work from the heat as it fell from boiler to condenser. This polyphonic fugue of machinery heralded the age of steam's baroque period, which engendered the fantasias of steampunk a century later.
By about 1890, a triple-expansion engine, running at 160 pounds per square inch, could consume one and a half pounds of coal per horsepower per hour, less than a third of the going rate a few decades before, and about one-fifth of what Watt's engine consumed.[22]

Cutaway of an 1888 Austrian triple-expansion engine, in the Vienna Technical Museum [Sandstein / Creative Commons Attribution 3.0 Unported].

Yet even as it thrust the age of steam up towards its apex, thermodynamics pointed out the weak spot that would lead to its downfall. In his 1854 speech to the BAAS, Rankine had touted the advantages of the air engine, a device devised by the Scotsman Robert Stirling that used hot air as its working fluid. As Rankine pointed out, the laws of thermodynamics have nothing in particular to do with steam, but hold "true for all substances whatsoever in all conditions…" Air had a decided advantage over steam insofar as it could be driven to very high temperatures without creating very dangerous pressures: "For example, at the temperature of 650° Fahr.
(measured from the ordinary zero,) a temperature up to which air engines have actually been worked with ease and safety, the pressure of steam is 2100 pounds upon the square inch; a pressure which plainly renders it impracticable to work steam engines with safety….”[23] The Stirling air engine did not, in the event, prove to be the slayer of steam. Its use never expanded beyond occasional low-power domestic applications. But it brought the first adumbration of the coming eclipse. Stirling air engine – harbinger of doom? [Paul U. Ehmer / CC-BY-SA-4.0]

High-Pressure, Part I: The Western Steamboat

The next act of the steamboat lay in the west, on the waters of the Mississippi basin. The settler population of this vast region—Mark Twain wrote that "the area of its drainage-basin is as great as the combined areas of England, Wales, Scotland, Ireland, France, Spain, Portugal, Germany, Austria, Italy, and Turkey"—was already growing rapidly in the early 1800s, and inexpensive transport to and from its interior represented a tremendous economic opportunity.[1] Robert Livingston scored another of his political coups in 1811, when he secured monopoly rights for operating steamboats in the New Orleans Territory. (It did not hurt his cause that he himself had negotiated the Louisiana Purchase, nor that his brother Edward was New Orleans' most prominent lawyer.) The Fulton-Livingston partnership built a workshop in Pittsburgh to build steamboats for the Mississippi trade. Pittsburgh's central position at the confluence of the Monongahela and Allegheny made it a key commercial hub in the trans-Appalachian interior and a major boat-building center. Manufactures made there could be distributed up and down the rivers far more easily than those coming over the mountains from the coast, and so factories for making cloth, hats, nails, and other goods began to sprout up there as well.[2] The confluence of river-based commerce, boat-building, and workshop know-how made Pittsburgh the natural wellspring for western steamboating.

Figure 1: The Fulton-Livingston New Orleans. Note the shape of the hull, which resembles that of a typical ocean-going boat.

From Pittsburgh, the Fulton-Livingston boats could ride downstream to New Orleans without touching the ocean. The New Orleans, the first boat launched by the partners, went into regular service from New Orleans to Natchez (about 175 miles to the north) in 1812, but their designs—upscaled versions of their Hudson River boats—fared poorly in the shallow, turbulent waters of the Mississippi.
They also suffered sheer bad luck: the New Orleans grounded fatally in 1814, and the aptly-named Vesuvius burnt to the waterline in 1816 and had to be rebuilt. The conquest of the Mississippi by steam power would fall to other men, and to a new technology: high-pressure steam.

Strong Steam

A typical Boulton & Watt condensing engine was designed to operate with steam below the pressure of the atmosphere (about fifteen pounds per square inch (psi)). But the possibility of creating much higher pressures by heating steam well above the boiling point had been known for well over a century. The use of so-called "strong steam" dated back at least to Denis Papin's steam digester of the 1670s. It had even been used to do work, in pumping engines based on Thomas Savery's design from the early 1700s, which used steam pressure to push water up a pipe. But engine-builders did not use it widely in piston engines until well into the nineteenth century. Part of the reason was the suppressive influence of the great James Watt. Watt knew that expanding high-pressure steam could drive a piston, and laid out plans for high-pressure engines as early as 1769, in a letter to a friend:

I intend in many cases to employ the expansive force of steam to press on the piston, or whatever is used instead of one, in the same manner as the weight of the atmosphere is now employed in common fire-engines. In some cases I intend to use both the condenser and this force of steam, so that the powers of these engines will as much exceed those pressed only by the air, as the expansive power of the steam is greater than the weight of the atmosphere.
In other cases, when plenty of cold water cannot be had, I intend to work the engines by the force of steam only, and to discharge it into the air by proper outlets after it has done its office.[3] But he continued to rely on the vacuum created by his condenser, and never built an engine worked “by the force of steam only.” He went out of his way to ensure that no one else did either, deprecating the use of strong steam at every opportunity. There was one obvious reason why: high-pressure steam was dangerous. The problem was not the working machinery of the engine but the boiler, which was apt to explode, spewing shrapnel and superheated steam that could kill anyone nearby. Papin had added a safety valve to his digester for exactly this reason. Savery steam pumps were also notorious for their explosive tendencies. Some have imputed a baser motive for Watt’s intransigence: a desire to protect his own business from high-pressure competition. In truth, though, high-pressure boilers did remain dangerous, and would kill many people throughout the nineteenth century. Unfortunately, the best material for building a strong boiler was the most difficult from which to actually construct one. By the beginning of the nineteenth century copper, lead, wrought iron, and cast iron had all been tried as boiler materials, in various shapes and combinations. Copper and lead were soft, cast iron was hard, but brittle. Wrought iron clearly stood out as the toughest and most resilient option, but it could only be made in ingots or bars, which the prospective boilermaker would then have to flatten and form into small plates, many of which would have to be joined to make a complete boiler. Advances in two fields in the decades around 1800 resolved the difficulties of wrought iron. The first was metallurgical. 
In the late eighteenth century, Henry Cort invented the "puddling" process of melting and stirring iron to oxidize out the carbon, producing larger quantities of wrought iron that could be rolled out into plates of up to about five feet long and a foot wide.[4] These larger plates still had to be riveted together, a tedious and error-prone process that produced leaky joints. Everything from rope fibers to oatmeal was tried as a caulking material. To make reliable, steam-tight joints required advances in machine tooling. This was a cutting-edge field at the time (pun intended). For example, for most of history craftsmen cut or filed screws by hand. The resulting lack of consistency meant that many of the uses of screws that we take for granted were unknown: one could not cut 100 nuts and 100 bolts, for example, and then expect to thread any pair of them together. Only in the last quarter of the eighteenth century did inventors craft sufficiently precise screw-cutting lathes to make it possible to repeatedly produce screws with the same length and pitch. Careful use of tooling similarly made it possible to bore holes of consistent sizes in wrought iron plates, and then manufacture consistently-sized rivets to fit into them, without the need to hand-fit rivets to holes.[5] One could name a few outstanding early contributors to the improvement of machine tooling in the first decades of the nineteenth century: Arthur Woolf in Cornwall, or John Hall at the U.S. Harper's Ferry Armory. But the steady development of improvements in boilers and other steam engine parts also involved the collective action of thousands of handcraft workers. Accustomed to building liquor stills, clocks, or scientific instruments, they gradually developed the techniques and rules of thumb needed for precision metalworking for large machines.[6] These changes did not impress Watt, and he stood by his anti-high-pressure position until his death in 1819.
Two men would lead the way in rebelling against his strictures. The first appeared in the United States, far from Watt's zone of influence, and paved the way for the conquest of the Western waters.

Oliver Evans

Oliver Evans was born in Delaware in 1755. He first honed his mechanical skills as an apprentice wheelwright. Around 1783, he began constructing a flour mill with his brothers on Red Clay Creek in northern Delaware. Hezekiah Niles, a boy of six, lived nearby. Niles would become the editor of the most famous magazine in America, from which post he later had occasion to recount that "[m]y earliest recollections pointed him out to me as a person, in the language of the day, that 'would never be worth any thing, because he was always spending his time on some contrivance or another…'"[7] Two great "contrivances" dominated Evans' adult life. The challenges of the mill work at Red Clay Creek led to his first great idea: an automated flour mill. He eliminated most of the human labor from the mill by linking together the grain-processing steps with a series of water-powered machines (the most famous and delightfully named being the "hopper boy"). Though fascinating in its own right, for the purposes of our story the automated mill only matters in so far as it generated the wealth which allowed him to invest in his second great idea: an engine driven by high-pressure steam.

Figure 2: Evans' automated flour mill.

In 1795, Evans published an account of his automatic mill entitled The Young Mill-Wright and Miller's Guide. Something of his personality can be gleaned from the title of his 1805 sequel on the steam engine: The Abortion of the Young Steam Engineer's Guide.
A bill to extend the patent on his automatic flour mill failed to pass Congress in 1805, and so he published his Abortion as a dramatic swoon, a loud declaration that, in response to this rebuff, he would be taking his ball and going home:

His [i.e., Evans'] plans have thus proved abortive, all his fair prospects are blasted, and he must suppress a strong propensity for making new and useful inventions and improvements; although, as he believes, they might soon have been worth the labour of one hundred thousand men.[8]

Of course, despite these dour mutterings, he failed entirely to suppress his "strong propensity"; in fact, he was in the very midst of launching new steam engine ventures at this time. Like so many other early steam inventors, Evans' interest in steam began with a dream of a self-propelled carriage. The first tangible evidence that we have of his interest in steam power comes from patents he filed in 1787, which included mention of a "steam-carriage, so constructed to move by the power of steam and the pressure of the atmosphere, for the purpose of conveying burdens without the aid of animal force." The mention of "the pressure of the atmosphere" is interesting—he may have still been thinking of a low-pressure Watt-style engine at this point.[9] By 1802, however, Evans had a true high-pressure engine of about five horsepower operating at his workshop at Ninth and Market in Philadelphia. He had established himself in that city in 1792, the better to promote his milling inventions and millwright services. He attracted crowds to his shop with demonstrations of the engine at work: driving a screw mill to pulverize plaster, or cutting slabs of marble with a saw. Bands of iron held reinforcing wooden slats against the outside of the boiler, like the rim of a cartwheel or the hoops of a barrel.
This curious hallmark testified to Evans' background as a millwright and wheelwright.[10] The boiler, of course, had to be as strong as possible to contain the superheated steam, and Evans' later designs made improvements in this area. Rather than the "wagon" boiler favored by Watt (shaped like a Conestoga wagon or a stereotypical construction worker's lunchbox), he used a cylinder. A spherical boiler being infeasible to make or use, this shape distributed the force of the steam pressure as evenly as practicable over the surface. In fact, Evans' boiler consisted of two cylinders in an elongated donut shape, because rather than placing the furnace below the boiler, he placed it inside, to maximize the surface area of water exposed to the hot air. By the time of the Steam Engineer's Guide, he no longer used copper braced with wood; he now recommended the "best" (i.e. wrought) iron "rolled in large sheets and strongly riveted together. …As cast iron is liable to crack with the heat, it is not to be trusted immediately in contact with the fire."[11]

Figure 3: Evans' 1812 design, which he called the Columbian Engine to honor the young United States on the outbreak of the War of 1812. Note the flue carrying heat through the center of the boiler, the riveted wrought iron plates of the boiler, and the dainty proportions of the cylinder, in comparison to that of a Newcomen or Watt engine. Pictured in the corner is the Orukter Amphibolos.

Evans was convinced of the superiority of his high-pressure design because of a rule of thumb that he had gleaned from the article "Steam" in the American edition of the Encyclopedia Britannica: "…whatever the present temperature, an increase of 30 degrees doubles the elasticity and the bulk of water vapor."[12] From this Evans concluded that heating steam to twice the boiling point (from 210 degrees to 420) would increase its elastic force by 128 times (since a 210 degree increase in temperature would make seven doublings).
This massive increase in power would require only twice the fuel (to double the heat of the steam). None of this was correct, but it would be neither the first nor the last time that faulty science produced useful technology.[13] Nonetheless, the high-pressure engine did have very real advantages. Because the power generated by an engine was proportional to the area of the piston times the pressure exerted on that piston, for any given horsepower a high-pressure engine could be made much smaller than its low-pressure equivalent. A high-pressure engine also did not require a condenser: it could vent the spent steam directly into the atmosphere. These factors made Evans’ engines smaller, lighter, simpler, and less expensive to build. A non-condensing high-pressure engine of twenty-four horsepower weighed half a ton and had a cylinder nine inches across. A traditional Boulton & Watt style engine of the same power had a cylinder three times as wide and weighed four times as much overall.[14] Such advantages in size and weight would count doubly for an engine used in a vehicle, i.e. an engine that had to haul itself around. In 1804 Evans sold an engine that was intended to drive a New Orleans steamboat, but it ended up in a sawmill instead. This event could serve as a metaphor for his relationship to steam transportation. He declared in his Steam Engineer’s Guide that:

The navigation of the river Mississippi, by steam engines, on the principles here laid down, has for many years been a favourite object with the author and among the fondest wishes of his heart. He has used many endeavours to produce a conviction of its practicability, and never had a doubt of the sufficiency of the power.[15]

But steam navigation never got much more than his fondest wishes. Unlike a Fitch or a Rumsey, the desire to make a steamboat did not dominate his dreams and waking hours alike. By 1805, he was a well-established man of middle years.
If he had ever possessed the Tookish spirit required for riverboat adventures, he had since lost it. He had already given up on the idea of a steam carriage, after failing to sell the Lancaster Turnpike Company on the idea in 1801. His most grandiosely named project, the Orukter Amphibolos, may briefly have run on wheels en route to serve as a steam dredge in the Philadelphia harbor. If it functioned at all, though, it was by no means a practical vehicle, and it had no sequel. Evans’ attention had shifted to industrial power, where the clearest financial opportunity lay—an opportunity that could be seized without leaving Philadelphia. Despite Evans’ calculations (erroneous, as we have said), a non-condensing high-pressure engine was somewhat less fuel-efficient than an equivalent Watt engine, not more. But because of its size and simplicity, it could be built at half the cost, and transported more cheaply, too. In time, therefore, the Evans-style engine became very popular as a mill or factory engine in the capital- and transportation-poor (but fuel-rich) trans-Appalachian United States.[16] In 1806, Evans began construction of his “Mars Works” in Philadelphia, to serve the market for engines and other equipment. Evans engines sprouted up at sawmills, flour mills, paper factories, and other industrial enterprises across the West. Then, in 1811, he organized the Pittsburgh Steam Engine Company, operated by his twenty-three-year-old son George, to reduce transportation costs for engines to be erected west of the Alleghenies.[17] It was around that nexus of Pittsburgh that Evans’ inventions would find the people with the passion to put them to work, at last, on the rivers.

The Rise of the Western Steamboat

The mature Mississippi paddle steamer differed from its Eastern antecedents in two main respects. First, in its overall shape and layout: a roughly rectangular hull with a shallow draft, layer cake decks, and machinery above the water, not under it.
This design was better adapted to an environment where snags and shallows presented a much greater hazard than waves and high winds. Second, in the use of a high-pressure engine, or engines, with a cylinder mounted horizontally along the deck. Many historical accounts attribute both of these essential developments to a keelboatman named Henry Miller Shreve. Economic historian Louis Hunter effectively demolished this legend in the 1940s, but more recent writers (for example Shreve’s 1984 biographer, Edith McCall) have continued to perpetuate it. In fact, no one can say with certainty where most of these features came from, because no one bothered to document their introduction. As Hunter wrote:

From the appearance of the first crude steam vessels on the western waters to the emergence of the fully evolved river steamboat a generation later, we know astonishingly little of the actual course of technological events and we can follow what took place only in its broad outlines. The development of the western steamboat proceeded largely outside the framework of the patent system and in a haze of anonymity.[18]

Some documents came to light in the 1990s, however, that have burned away some of the “haze” with respect to the introduction of high-pressure engines.[19] The papers of Daniel French reveal that the key events happened in a now-obscure place called Brownsville (originally known as Redstone), about forty miles up the Monongahela from that vital center of western commerce, Pittsburgh. Brownsville was the point where anyone heading west on the main trail over the Alleghenies—which later became part of the National Road—would first reach navigable waters in the Mississippi basin. Henry Shreve grew up not far from this spot.
Born in 1785 to a father who had served as a colonel in the Revolutionary War, he grew up on a farm near Brownsville on land leased from Washington: one of the general’s many western land-development schemes.[20] Henry fell in love with the river life, and by his early twenties had established himself with his own keelboat operating out of Pittsburgh. He made his early fortune off the fur trade boom in St. Louis, which took off after Lewis and Clark returned with reports of widespread beaver activity on the Missouri River.[21] In the fall of 1812, a newcomer named Daniel French arrived in Shreve’s neighborhood—a newcomer who already had experience building steam watercraft, powered by engines based on the designs of Oliver Evans. French was born in Connecticut in 1770, and started planning to build steamboats in his early twenties, perhaps inspired by the work of Samuel Morey, who operated upstream of him on the Connecticut River. But, discouraged from his plans by the local authorities, French turned his inventive energies elsewhere for a time. He met and worked with Evans in Washington, D.C., to lobby Congress to extend the length of patent grants, but did not return to steamboats until Fulton’s 1807 triumph re-energized him. At this point he adopted Evans’ high-pressure engine idea, but added his own innovation: an oscillating cylinder that pivoted on trunnions as the engine worked. This allowed the piston shaft to be attached to the stern wheel with a simple (and light) crank, without any flywheel or gearing. The small size of the high-pressure cylinder made it feasible to put the cylinder itself in motion. In 1810, a steam ferry he designed, for a route from Jersey City to Manhattan, successfully crossed and recrossed the North (Hudson) River at about six miles per hour.
Nonetheless, Fulton, who still held a New York state monopoly, got the contract from the ferry operators.[22] French moved to Philadelphia and tried again, constructing the steam ferry Rebecca to carry passengers across the Delaware. She evidently did not produce great profits, because a frustrated French moved west again in the fall of 1812, to establish a steam-engine-building business at Brownsville.[23] His expertise in building high-pressure steamboats—simple, relatively low-cost, and powerful—had thus arrived at the place that would benefit most from those advantages; a place, moreover, where the Fulton-Livingston interests held no legal monopoly. News about the lucrative profits of the New Orleans on the Natchez run had begun to trickle back up the rivers. This was sufficient to convince the Brownsville notables—Shreve among them—to put up $11,000 to form the Monongahela and Ohio Steam Boat Company in 1813, with French as their engineer. French had their first boat, Enterprise, ready by the spring of 1814. Her exact characteristics are not documented, but based on the fragmentary evidence, she seems in effect to have been a motorized keelboat: 60-80 feet long, about 30 tons, and equipped with a twenty-horsepower engine. The power train matched that of French’s 1810 steam ferry, trunnions and all.[24] The Enterprise spent the summer trading along the Ohio between Pittsburgh and Louisville. Then, in December, she headed south with a load of supplies to aid in the defense of New Orleans. For this important voyage into waters mostly unknown to the Brownsville circle, they called on the experienced keelboatman Henry Shreve. Andrew Jackson had declared martial law, and kept Shreve and the Enterprise on military duty in New Orleans. With Jackson’s aid, Shreve dodged the legal snares laid for him by the Fulton-Livingston group to protect their New Orleans monopoly.
Then in May, after the armistice, he brought the Enterprise on a 2,000-mile ascent back to Brownsville, the first steamboat ever to make such a journey. Shreve became an instant celebrity. He had contributed to a stunning defeat of the British at New Orleans and carried out an unprecedented voyage. Moreover, he had confounded the monopolists: their attempt to assert exclusive rights over the commons of the river was deeply unpopular west of the Appalachians. Shreve capitalized on his new-found fame to raise money for his own steamboat company in Wheeling, Virginia. The Ohio at Wheeling ran much deeper than the Monongahela at Brownsville, and Shreve would put this depth to use: he had ambitions to put a French engine into a far larger boat than the Enterprise. Spurring French to scale up his design was probably Shreve’s largest contribution to the evolution of the western steamboat. French dared not try to repeat his oscillating-cylinder trick on the larger cylinder that would drive Shreve’s 100-horsepower, 400-ton two-decker. Instead, he fixed the cylinder horizontally to the hull, and then attached the piston rod to a connecting rod, or “pitman,” that drove the crankshaft of the stern paddle wheel. He thus transferred the oscillating motion from the piston to the pitman, while keeping the overall design simple and relatively low cost.[25] Shreve called his steamer Washington, after his father’s (and his own) hero. Her maiden voyage in 1817, however, was far from heroic. Evans would have assured French that the high-pressure engine carried little risk; as he wrote in the Steam Engineer’s Guide, “we know how to construct [boilers] with a proportionate strength, to enable us to work with perfect safety.”[26] Yet on her first trip down the Ohio, with twenty-one passengers aboard, the Washington’s boiler exploded, killing seven passengers and three crew.
The blast threw Shreve himself into the river, but he did not suffer serious harm.[27] Ironically, the only steamboat built by the Evans family, the Constitution (née Oliver Evans), suffered a similar fate in the same year, exploding and killing eleven on board. Despite Evans’ confidence in their safety, boiler accidents continued to bedevil steamboats for decades. Though the total number killed was not enormous—about 1,500 dead across all the western rivers up to 1848—each event provided an exceptionally grisly spectacle. Consider this lurid account of the explosion of the Constitution:

One man had been completely submerged in the boiling liquid which inundated the cabin, and in his removal to the deck, the skin had separated from the entire surface of his body. The unfortunate wretch was literally boiled alive, yet although his flesh parted from his bones, and his agonies were most intense, he survived and retained all his consciousness for several hours. Another passenger was found lying aft of the wheel with an arm and a leg blown off, and as no surgical aid could be rendered him, death from loss of blood soon ended his sufferings. Miss C. Butler, of Massachusetts, was so badly scalded, that, after lingering in unspeakable agony for three hours, death came to her relief.[28]

In response to continued public outcry for an end to such horrors, Congress eventually stepped in, passing acts to improve steamboat safety in 1838 and 1852. Meanwhile, Shreve was not deterred by the setback. The Washington itself had not suffered grievous damage, so he corrected a fault in the safety valves and tried again. Passengers were understandably reluctant to risk an encore performance, but after the Washington made national news in 1817 with a freight passage from New Orleans to Louisville in just twenty-five days, the public quickly forgot and forgave.
A few days later, a judge in New Orleans refused to consider a suit by the Fulton-Livingston interests against Shreve, effectively nullifying their monopoly.[29] Now all comers knew that steamboats could ply the Mississippi successfully, and without risk of any legal action. The age of the western steamboat opened in earnest. By 1820, sixty-nine steamboats could be found on western rivers, and 187 a decade after that.[30] Builders took a variety of approaches to powering these boats: low-pressure engines, engines with vertical cylinders, engines with rocking beams or flywheels to drive the paddles. Not until the 1830s did a dominant pattern take hold, but when it did, it was that of the Evans/French/Shreve lineage, as found on the Washington: a high-pressure engine with a horizontal cylinder driving the wheel through an oscillating connecting rod.[31]

Figure 4: A Tennessee river steamboat from the 1860s. The distinctive features include a flat-bottomed hull with very little freeboard, a superstructure to hold passengers and crew, and twin smokestacks. The western steamboat had achieved this basic form by the 1830s and maintained it into the twentieth century.
The Legacy of the Western Steamboat

The western steamboat was a product of environmental factors that favored the adoption of a shallow-drafted boat with a relatively inefficient but simple and powerful engine: fast, shallow rivers; abundant wood for fuel along the shores of those rivers; and the geographic configuration of the United States after the Louisiana Purchase, with a high ridge of mountains separating the coast from a massive navigable inland watershed. But, Escher-like, the steamboat then looped back around to reshape the environment from which it had emerged. Just as steam-powered factories had, steam transport flattened out the cycles of nature, bulldozing the hills and valleys of time and space. Before the Washington’s journey, the shallow grade that distinguished upstream from downstream dominated the life of any traveler or trader on the Mississippi. Now goods and people could move easily upriver, in defiance of the dictates of gravity.[32] By the 1840s, steamboats were navigating well inland on other rivers of the West as well: up the Tombigbee, for example, over 200 miles inland to Columbus, Mississippi.[33] What steamboats alone could not do to turn the western waters into turnpike roads, Shreve and others would impose on them through brute force. Steamboats frequently sank or took major damage from snags or “sawyers”: partially submerged tree limbs or trunks that obstructed the waterways. In some places, vast masses of driftwood choked the entire river. Beyond Natchitoches, the Red River was obstructed for miles by an astonishing tangle of such logs known as the Great Raft.[34]

Figure 5: A portrait of Shreve of unknown date, likely the 1840s. The scene outside the window reveals one of his snagboats, a frequently used device in nineteenth-century portraits of inventors.

Not only commerce was at stake in clearing the waterways of such obstructions; steamboats would be vital to any future war in the West.
As early as 1814, Andrew Jackson had put Shreve’s Enterprise to good use, ferrying supplies and troops around the Mississippi delta region.[35] With the encouragement of the Monroe administration, therefore, Congress stepped in with a bill in 1824 to fund the Army’s Corps of Engineers to improve the western rivers. Shreve was named superintendent of this effort, and secured federal funds to build snagboats such as the Heliopolis: twin-hulled behemoths designed to drive a snag between their hulls, winch it up onto the middle deck, and saw it down to size. Heliopolis and its sister ships successfully cleared large stretches of the Ohio and Mississippi.[36] In 1833, Shreve embarked on the last great venture of his life: an assault on the Great Raft itself. It took six years and a flotilla of rafts, keelboats, and steamboats to complete the job, including a new snagboat, Eradicator, built specially for the task.[37] The clearing of waterways, technical advancements in steamboat design, and other improvements (such as the establishment of fuel depots, so that time was not wasted stopping to gather wood) combined to drive travel times along the rivers down rapidly. In 1819, the James Ross completed the New Orleans to Louisville passage in sixteen and a half days. In 1824 the President covered the same distance in ten and a half days, and in 1833 the Tuscorora clocked a run of seven days, six hours. These ever-decreasing record times translated directly into ever-decreasing shipping rates. Early steamboats charged upstream rates equivalent to those levied by their keelboat competitors: about five dollars per hundred pounds carried from New Orleans to Louisville. By the early 1830s this had dropped to an average of about sixty cents per hundred pounds, and by the 1840s to as low as fifteen cents.[38] By decreasing the cost of river trade, the steamboat cemented the economic preeminence of New Orleans.
Cotton, sugar, and other agricultural goods (much of it produced by slave labor) flowed downriver to the port, then out to the wider world; manufactured goods and luxuries like coffee arrived from the ocean trade and were carried upriver; and human traffic, bought and sold at the massive New Orleans slave market, flowed in both directions.[39] In 1820 a steamboat arrived in New Orleans about every other day. By 1840 the city averaged over four arrivals a day; by 1850, nearly eight.[40] The population of the city burgeoned to over 100,000 by 1840, making it the third-largest in the country. Chicago, its big-shouldered days still ahead of it, remained a frontier outpost by comparison, with only 5,000 residents.

Figure 6: A Currier & Ives lithograph of the New Orleans levee. It represents a scene from the late nineteenth century, well past the prime of New Orleans’ economic dominance, but still shows a port bustling with steamboats.

But both New Orleans and the steamboat soon lost their dominance over the western economy. As Mark Twain wrote:

Mississippi steamboating was born about 1812; at the end of thirty years, it had grown to mighty proportions; and in less than thirty more, it was dead! A strangely short life for so majestic a creature.[41]

Several forces connived in the murder of the Mississippi steamboat, but a close cousin lurked among the conspirators: another form of transportation enabled by the harnessing of high-pressure steam. The story of the locomotive takes us back to Britain, and the dawn of the nineteenth century.

Internet Ascendant, Part 2: Going Private and Going Public

In the summer of 1986, Senator Al Gore, Jr., of Tennessee introduced an amendment to the Congressional act that authorized the budget of the National Science Foundation (NSF). He called for the federal government to study the possibilities for “communications networks for supercomputers at universities and Federal research facilities.” To explain the purpose of this legislation, Gore called on a striking analogy:

One promising technology is the development of fiber optic systems for voice and data transmission. Eventually we will see a system of fiber optic systems being installed nationwide. America’s highways transport people and materials across the country. Federal freeways connect with state highways which connect in turn with county roads and city streets. To transport data and ideas, we will need a telecommunications highway connecting users coast to coast, state to state, city to city. The study required in this amendment will identify the problems and opportunities the nation will face in establishing that highway.[1]

In the following years, Gore and his allies would call for the creation of an “information superhighway”, or, more formally, a national information infrastructure (NII). As he intended, Gore’s analogy to the federal highway system summons to mind a central exchange that would bind together various local and regional networks, letting all American citizens communicate with one another. However, the analogy also misleads – Gore did not propose the creation of a federally funded and maintained data network.
He envisioned that the information superhighway, unlike its concrete and asphalt namesake, would come into being through the action of market forces, within a regulatory framework that would ensure competition; guarantee open, equal access to any service provider (what would later be known as “net neutrality”); and provide subsidies or other mechanisms to ensure universal service to the least fortunate members of society, preventing the emergence of a gap between the information rich and the information poor.[2]

Over the following decade, Congress slowly developed a policy response to the growing importance of computer networks to the American research community, to education, and eventually to society as a whole. Congress’ slow march towards an NII policy, however, could not keep up with the rapidly growing NSFNET, overseen by the neighboring bureaucracy of the executive branch. Despite its reputation for sclerosis, bureaucracy was created exactly because of its capacity, unlike a legislature, to respond to events immediately, without deliberation. And so it happened that, between 1988 and 1993, the NSF crafted the policies that would determine how the Internet became private, and thus went public. It had to deal every year with novel demands and expectations from NSFNET’s users and peer networks. In response, it made decisions on the fly, decisions which rapidly outpaced Congressional plans for guiding the development of an information superhighway. These decisions rested largely in the hands of a single man – Stephen Wolff.

Acceptable Use

Wolff earned a Ph.D. in electrical engineering at Princeton in 1961 (where he would have been a rough contemporary of Bob Kahn), and began what might have been a comfortable academic career, with a post-doctoral stint at Imperial College, followed by several years teaching at Johns Hopkins. But then he shifted gears, and took a position at the Ballistic Research Laboratory in Aberdeen, Maryland.
He stayed there for most of the 1970s and early 1980s, researching communications and computing systems for the U.S. Army. He introduced Unix into the lab’s offices, and managed Aberdeen’s connection to the ARPANET.[3] In 1986, the NSF recruited him to manage the NSF’s supercomputing backbone – he was a natural fit, given his experience connecting Army supercomputers to ARPANET. He became the principal architect of NSFNET’s evolution from that point until his departure in 1994, when he entered the private sector as a manager for Cisco Systems. The original intended function of the net that Wolff was hired to manage had been to connect researchers across the U.S. to NSF-funded supercomputing centers. As we saw last time, however, once Wolff and the other network managers saw how much demand the initial backbone had engendered, they quickly developed a new vision of NSFNET as a communications grid for the entire American research and post-secondary education community.

However, Wolff did not want the government to be in the business of supplying network services on a permanent basis. In his view, the NSF’s role was to prime the pump, creating the initial demand needed to get a commercial networking services sector off the ground. Once that happened, Wolff felt it would be improper for a government entity to be in competition with viable for-profit businesses. So he intended to get NSF out of the way by privatizing the network: handing over control of the backbone to unsubsidized private entities and letting the market take over.

This was very much in the spirit of the times. Across the Western world, and across most of the political spectrum, government leaders of the 1980s touted privatization and deregulation as the best means to unleash economic growth and innovation after the relative stagnation of the 1970s.
As one example among many, around the same time that NSFNET was getting off the ground, the FCC knocked down several decades-old constraints on corporations involved in broadcasting. In 1985, it removed the restriction on owning print and broadcast media in the same locality, and two years later it nullified the fairness doctrine, which had required broadcasters to present multiple views on public-policy debates.

From his post at NSF, Wolff had several levers at hand for accomplishing his goals. The first lay in the interpretation and enforcement of the network’s acceptable use policy (AUP). In accordance with NSF’s mission, the initial policy for the NSFNET backbone, in effect until June 1990, required all uses of the network to be in support of “scientific research and other scholarly activities.” This is quite restrictive indeed, and would seem to eliminate any possibility of commercial use of the network. But Wolff chose to interpret the policy liberally. Regular mailing list postings about new product releases from a corporation that sold data processing software – was that not in support of scientific research? What about the decision to allow MCI’s email system to connect to the backbone, at the urging of Vint Cerf, who had left government employ to oversee the development of MCI Mail? Wolff rationalized this – and other later interconnections to commercial email systems such as CompuServe’s – as in support of research, by making it possible for researchers to communicate digitally with the wider range of people that they might need to contact in the pursuit of their work. A stretch, perhaps. But Wolff saw that allowing some commercial traffic on the same infrastructure that was used for public NSF traffic would encourage the private investment needed to support academic and educational use on a permanent basis.
Wolff’s strategy of opening the door of NSFNET as far as possible to commercial entities got an assist from Congress in 1992, when Congressman Rick Boucher, who helped oversee NSF as chair of the Science Subcommittee, sponsored an amendment to the NSF charter which authorized any additional uses of NSFNET that would “tend to increase the overall capabilities of the networks to support such research and education activities.” This was an ex post facto validation of Wolff’s approach to commercial traffic, allowing virtually any activity as long as it produced profits that encouraged more private investment into NSFNET and its peer networks.

Dual-Use Networks

Wolff also fostered the commercial development of networking by supporting the regional networks’ reuse of their networking hardware for commercial traffic. As you may recall, the NSF backbone linked together a variety of not-for-profit regional nets, from NYSERNet in New York to Sesquinet in Texas to BARRNet in northern California. NSF did not directly fund the regional networks, but it did subsidize them indirectly, via the money it provided to labs and universities to offset the costs of their connection to their neighborhood regional net. Several of the regional nets then used this same subsidized infrastructure to spin off a for-profit commercial enterprise, selling network access to the public over the very same wires used for the research and education purposes sponsored by NSF. Wolff encouraged them to do so, seeing this as yet another way to accelerate the transition of the nation’s research and education infrastructure to private control. This, too, accorded neatly with the political spirit of the 1980s, which encouraged private enterprise to profit from public largesse, in the expectation that the public would benefit indirectly through economic growth.
One can see parallels with the dual-use regional networks in the 1980 Bayh-Dole Act, which defaulted ownership of patents derived from government-funded research to the organization performing the work, not to the government that paid for it. The most prominent example of dual-use in action was PSINet, a for-profit company initially founded as Performance Systems International in 1988. The company was created by William Schrader and Martin Schoffstall, respectively the co-founder of NYSERNet and one of its vice presidents. Schoffstall, a former BBN engineer and co-author of the Simple Network Management Protocol (SNMP) for managing the devices on an IP network, was the key technical leader. Schrader, an ambitious Cornell biology major and MBA who had helped his alma mater set up its supercomputing center and get it connected to NSFNET, provided the business drive. He firmly believed that NYSERNet should be selling service to businesses, not just educational institutions. When the rest of the board disagreed, he quit to found his own company, first contracting with NYSERNet for service, and later raising enough money to acquire its assets. PSINet thus became one of the earliest commercial internet service providers, while continuing to provide non-profit service to colleges and universities seeking access to the NSFNET backbone.[4]

Wolff’s final source of leverage for encouraging a commercial Internet lay in his role as manager of the contracts with the Merit-IBM-MCI consortium that operated the backbone. The initial impetus for change in this dimension came not from Wolff, however, but from the backbone operators themselves.

A For-Profit Backbone

MCI and its peers in the telecommunications industry had a strong incentive to find or create more demand for computer data communications. They had spent the 1980s upgrading their long-line networks from coaxial cable and microwave – already much higher capacity than the old copper lines – to fiber optic cables.
These cables, which transmitted laser light through glass, had tremendous capacity, limited mainly by the technology in the transmitters and receivers on either end, rather than by the cable itself. And that capacity was far from saturated. By the early 1990s, many companies had deployed OC-48 transmission equipment with 2.5 Gbps of capacity, an almost unimaginable figure a decade earlier. An explosion in data traffic would therefore bring in new revenue at very little marginal cost – almost pure profit.[5]

The desire to gain expertise in the coming market in data communications helps explain why MCI was willing to sign on to the NSFNET bid proposed by Merit, which massively undercut the competing bids (at $14 million for five years, versus the $40 million and $25 million proposed by their competitors[6]), and surely implied a short-term financial loss for MCI and IBM. But by 1989, they hoped to start turning a profit from their investment. The existing backbone was approaching the saturation point, with 500 million packets a month, a 500% year-over-year increase.[7] So, when NSF asked Merit to upgrade the backbone from 1.5 Mbps T1 lines to 45 Mbps T3, they took the opportunity to propose to Wolff a new contractual arrangement.

T3 was a new frontier in networking – no prior experience or equipment existed for digital networks of this bandwidth – and so the companies argued that more private investment would be needed, requiring a restructuring that would allow IBM and Merit to share the new infrastructure with for-profit commercial traffic: a dual-use backbone. To achieve this, the consortium would form a new non-profit corporation, Advanced Network & Services, Inc. (ANS), which would supply T3 networking services to NSF. A subsidiary called ANS CO+RE Systems would sell the same services at a profit to any clients willing to pay. Wolff agreed to this, seeing it as just another step in the transition of the network towards commercial control.
Moreover, he feared that continuing to block commercial exploitation of the backbone would lead to a bifurcation of the network, with suppliers like ANS doing an end-run around NSFNET to create their own, separate, commercial Internet. Up to that point, Wolff’s plan for gradually getting NSF out of the way had no specific target date or planned milestones. A workshop on the topic held at Harvard in March 1990, in which Wolff and many other early Internet leaders participated, considered a variety of options without laying out any concrete plans.8 It was ANS’ stratagem that triggered the cascade of events that led directly to the full privatization and commercialization of NSFNET.

It began with a backlash. Despite Wolff’s good intentions, IBM and MCI’s ANS maneuver created a great deal of disgruntlement in the networking community. It became a problem precisely because of the for-profit networks attached to the backbone that Wolff had promoted. So far they had gotten along reasonably well with one another, because they all operated as peers on the same terms. But with ANS, a for-profit company held a de facto monopoly on the backbone at the center of the Internet.9 Moreover, despite Wolff’s efforts to interpret the AUP loosely, ANS chose to interpret it strictly, and refused to interconnect the non-profit portion of the backbone (for NSF traffic) with for-profit networks like PSI, since that would require a direct mixing of commercial and non-commercial traffic. When this created an uproar, they backpedaled and came up with a new policy, allowing interconnection for a fee based on traffic volume.

PSINet would have none of this.
In the summer of 1991, they banded together with two other for-profit Internet service providers – UUNET, which had begun by selling commercial access to Usenet before adding Internet service, and the California Education and Research Federation Network, or CERFNet, operated by General Atomics – to form their own exchange, bypassing the ANS backbone. The Commercial Internet Exchange (CIX) consisted at first of just a single routing center in Washington, D.C., which could transfer traffic among the three networks. They agreed to peer at no charge, regardless of the relative traffic volume, with each network paying the same fee to CIX to operate the router. New routers in Chicago and Silicon Valley soon followed, and other networks looking to avoid ANS’ fees joined as well.

Divestiture

Rick Boucher, the Congressman whom we met above as a supporter of NSF commercialization, nonetheless asked the Office of the Inspector General to investigate the propriety of Wolff’s actions in the ANS affair. It found NSF’s actions precipitous, but not malicious or corrupt. Nevertheless, Wolff saw that the time had come to divest control of the backbone. With ANS CO+RE and CIX, privatization and commercialization had begun in earnest, but in a way that risked splitting the unitary Internet into multiple disconnected fragments, as CIX and ANS refused to connect with one another. NSF therefore drafted a plan for a new, privatized network architecture in the summer of 1992, released it for public comment, and finalized it in May of 1993. NSFNET would shut down in the spring of 1995, and its assets would revert to IBM and MCI. The regional networks could continue to operate, with financial support from the NSF gradually phasing out over a four-year period, but would have to contract with a private ISP for internet access.

But in a world of many competitive internet access providers, what would replace the backbone?
What mechanism would link these opposed private interests into a cohesive whole? Wolff’s answer was inspired by the exchanges already built by cooperatives like CIX – NSF would contract out the creation of four Network Access Points (NAPs), routing sites where various vendors could exchange traffic. Having four separate contracts would avoid repeating the ANS controversy, by preventing a monopoly on the points of exchange. One NAP would reside at the pre-existing, and cheekily named, Metropolitan Area Ethernet East (MAE-East) in Vienna, Virginia, operated by Metropolitan Fiber Systems (MFS). MAE-West, operated by Pacific Bell, was established in San Jose, California; Sprint operated another NAP in Pennsauken, New Jersey, and Ameritech one in Chicago. The transition went smoothly,10 and NSF decommissioned the backbone right on schedule, on April 30, 1995.11

The Break-up

Though Gore and others often invoked the “information superhighway” as a metaphor for digital networks, there was never serious consideration in Congress of using the federal highway system as a direct policy model. The federal government paid for the building and maintenance of interstate highways in order to provide a robust transportation network for the entire country. But in an era when both major parties took deregulation and privatization for granted as good policy, a state-backed system of networks and information services on the French model of Transpac and Minitel was not up for consideration.12

Instead, the most attractive policy model for Congress as it planned for the future of telecommunications was the long-distance market created by the break-up of the Bell System between 1982 and 1984. In 1974, the Justice Department filed suit against AT&T, its first major suit against the organization since the 1950s, alleging that it had engaged in anti-competitive behavior in violation of the Sherman Antitrust Act.
Specifically, they accused the company of using its market power to exclude various innovative new businesses from the market – mobile radio operators, data networks, satellite carriers, makers of specialized terminal equipment, and more. The suit thus clearly drew much of its impetus from the disputes, ongoing since the early 1960s and described in an earlier installment, between AT&T and the likes of MCI and Carterfone.

When it became clear that the Justice Department meant business, and intended to break the power of AT&T, the company at first sought redress from Congress. John de Butts, chairman and CEO since 1972, attempted to push a “Bell bill” – formally the Consumer Communications Reform Act – through Congress. It would have enshrined into law AT&T’s argument that the benefits of a single, universal telephone network far outweighed any risk of abusive monopoly – risks which, in any case, the FCC could already effectively check. But the proposal met stiff opposition in the House Subcommittee on Communications, and never reached a vote on the floor of either Congressional chamber. In a change of tactics, in 1979 the board replaced the combative de Butts – who had once openly declared to an audience of state telecommunications regulators the heresy that he opposed competition and espoused monopoly – with the more conciliatory Charles Brown. But it was too late by then to stop the momentum of the antitrust case, and it became increasingly clear to the company’s leadership that they would not prevail. In January 1982, therefore, Brown agreed to a consent decree that would have the presiding judge in the case, Harold Greene, oversee the break-up of the Bell System into its constituent parts.

The various Bell companies that brought copper to the customer’s premises, which generally operated by state (New Jersey Bell, Indiana Bell, and so forth), were carved up into seven blocks called Regional Bell Operating Companies (RBOCs).
Working clockwise around the country, they were NYNEX in the northeast, Bell Atlantic, BellSouth, Southwestern Bell, Pacific Telesis, US West, and Ameritech. All of them remained regulated entities with an effective monopoly over local traffic in their region, but were forbidden from entering other telecom markets. AT&T itself retained the “long lines” division for long-distance traffic. Unlike local phone service, however, the settlement opened this market to free competition from any entrant willing and able to pay the interconnection fees to transfer calls in and out of the RBOCs. A residential customer in Indiana would always have Ameritech as their local telephone company, but could sign up for long-distance service with anyone.

However, splitting apart the local and long-distance markets meant forgoing the subsidies that AT&T had long routed to rural telephone subscribers, under-charging them by over-charging wealthy long-distance users. A sudden spike in rural telephone prices across the nation was not politically tenable, so the deal preserved these transfers via a new organization, the non-profit National Exchange Carrier Association, which collected fees from the long-distance companies and distributed them to the RBOCs.

The new structure worked. Two major competitors entered the market in the 1980s, MCI and Sprint, and cut deeply into AT&T’s market share. Long-distance prices fell rapidly. Though it is arguable how much of this was due to competition per se, as opposed to the advent of ultra-high-bandwidth fiber optic networks, the arrangement was generally seen as a great success for deregulation and a clear argument for the power of market forces to modernize formerly hidebound industries. This market structure, created ad hoc by court fiat but evidently highly successful, provided the template from which Congress drew in the mid-1990s to finally resolve the question of what telecom policy for the Internet era would look like.
Second Time Isn’t The Charm

Prior to the main event, there was one brief preliminary. The High Performance Computing Act of 1991 was important tactically, but not strategically. It advanced no major new policy initiatives. Its primary significance lay in providing additional funding and Congressional backing for what Wolff and the NSF were already doing and intended to keep doing – providing networking services for the research community, subsidizing academic institutions’ connections to NSFNET, and continuing to upgrade the backbone infrastructure.

Then came the accession of the 104th Congress in January 1995. Republicans took control of both the Senate and the House for the first time in forty years, and they came with an agenda to fight crime, cut taxes, shrink and reform government, and uphold moral righteousness. Gore and his allies had long touted universal access as a key component of the National Information Infrastructure, but with this shift in power the prospects for a strong universal service component to telecommunications reform diminished from minimal to none. Instead, the main legislative course would consist of regulatory changes to foster competition in telecommunications and Internet access, with a serving of bowdlerization on the side.

The market conditions looked promising. Circa 1992, the major players in the telecommunications industry were numerous. In the traditional telephone industry there were the seven RBOCs, GTE, and three large long-distance companies – AT&T, MCI, and Sprint – along with many smaller ones. The new up-and-comers included Internet service providers, such as UUNET and PSINet, as well as the IBM/MCI backbone spin-off, ANS; and other companies trying to build out their local fiber networks, such as Metropolitan Fiber Systems (MFS).
BBN, the contractor behind ARPANET, had begun to build its own small Internet empire, snapping up some of the regional networks that orbited around NSFNET – NEARnet in New England, BARRNet in the Bay Area, and SURAnet in the southeast of the U.S. To preserve and expand this competitive landscape would be the primary goal of the 1996 Telecommunications Act, the only major rewrite of communications policy since the Communications Act of 1934, intended to reshape telecommunications law for the digital age.

The regulatory regime established by the original act siloed industries by their physical transmission medium – telephony, broadcast radio and television, cable TV – each in its own box, with its own rules, and generally forbidden to meddle in the others’ business. As we have seen, sometimes regulators even created silos within silos, segregating the long-distance and local telephone markets. This made less and less sense as media of all types were reduced to fungible digital bits, which could be commingled on the same optical fiber, satellite transmission, or Ethernet cable. The intent of the 1996 Act, shared by Democrats and Republicans alike, was to tear down these barriers, these “Berlin Walls of regulation”, as Gore’s own summary of the act put it.13

A complete itemization of the regulatory changes in this doorstopper of a bill is not possible here, but a few examples provide a taste of its character. Among other things, it:

- allowed the RBOCs to compete in long-distance telephone markets,
- lifted restrictions forbidding the same entity from owning both broadcasting and cable services, and
- axed the rules that prevented concentration of radio station ownership.

The risk, though, of simply removing all regulation, opening the floodgates and letting any entity participate in any market, was to recreate AT&T on an even larger scale – a monopolistic megacorp that would dominate all forms of communication and stifle all competitors.
Most worrisome of all was control over the so-called last mile – the stretch from the local switching office to the customer’s home or office. Building an inter-urban network connecting the major cities of the U.S. was expensive but not prohibitive; several companies, from Sprint to UUNET, had done so in recent decades. To replicate all the copper or cable running to every home in even one urban area was another matter. Local competition in landline communications had scarcely existed since the early wildcat days of the telephone, when tangled skeins of iron wire criss-crossed urban streets. In the case of the Internet, the concern centered especially on high-speed, direct-to-the-premises data services, later known as broadband. For years, competition had flourished among dial-up Internet access providers, because all the end user required to reach the provider’s computer was access to a dial tone. But this would not be the case by default for newer services that did not use the dial telephone network.

The legislative solution to this conundrum was to create the concept of the “CLEC” – competitive local exchange carrier. The RBOCs, now referred to as “ILECs” (incumbent local exchange carriers), would be allowed full, unrestricted access to the long-distance market only once they had unbundled their networks by allowing the CLECs, which would provide their own telecommunications services to homes and businesses, to interconnect with and lease the incumbents’ infrastructure. This would enable competitive ISPs and other new service providers to continue to get access to the local loop even when dial-up service became obsolete – creating, in effect, a dial tone for broadband. The CLECs, in this model, filled the same role as the long-distance providers in the post-break-up telephone market. Able to interconnect freely, at reasonable fees, with the existing local phone networks, they would inject competition into a market previously dominated by the problem of natural monopoly.
Besides the creation of the CLECs, the other major part of the bill that affected the Internet addressed the Republicans’ moral agenda rather than their economic one. Title V, known as the Communications Decency Act, forbade the transmission of indecent or offensive material – depicting or describing “sexual or excretory activities or organs” – on any part of the Internet accessible to minors. This, in effect, was an extension of the obscenity and indecency rules that governed broadcasting into the world of interactive computing services.

How, then, did this sweeping act fare in achieving its goals? In most dimensions it proved a failure. Easiest to dispose of is the Communications Decency Act, which the Supreme Court struck down quickly (in 1997) as a violation of the First Amendment. Several parts of Title V did survive review, however, including Section 230, the most important piece of the entire bill for the Internet’s future. It allows websites that host user-created content to exist without the fear of constant lawsuits, and protects the continued existence of everything from giants like Facebook and Twitter to tiny hobby bulletin boards.

The fate of the efforts to promote competition within the local loop took longer to play out, but proved no more successful than the controls on obscenity. What of the CLECs, given access to the incumbent cable and telephone infrastructure so that they could compete on price and service offerings? The law required FCC rulemaking to hash out the details of exactly what kind of unbundling had to be offered. The incumbents pressed the courts hard to dispute any such ruling that would open up their lines to competition, repeatedly winning injunctions against the FCC, while threatening that introducing competitors would halt their imminent plans for bringing fiber to the home.
Then, with the arrival of the Bush Administration and new chairman Michael Powell in 2001, the FCC became actively hostile to the original goals of the Telecommunications Act. Powell believed that the need for alternative broadband access would be satisfied by intermodal competition among cable, telephone, power-line, cellular, and other wireless networks. No more FCC rules in favor of CLECs would be forthcoming. For a brief time around the year 2000, it was possible to subscribe to third-party high-speed internet access using the infrastructure of your local telephone or cable provider. After that, the most central of the Telecom Act’s pro-competitive measures became, in effect, a dead letter. The much-ballyhooed fiber-to-the-home only began to reach a significant number of homes after 2010, and then only with reluctance on the part of the incumbents.14 As author Fred Goldstein put it, the incumbents had “gained a fig leaf of competition without accepting serious market share losses.”15

During most of the twentieth century, networked industries in the U.S. had sprouted in a burst of entrepreneurial energy and then been fitted into the matrix of a regulatory framework as they grew large and important enough to affect the public interest. Broadcasting and cable television had followed this pattern. So had trucking and the airlines. But with the CLECs all but dead by the early 2000s, the Communications Decency Act struck down, and other attempts to control the Internet such as the Clipper chip16 stymied, the Internet would follow an opposite course. Having come to life under the guiding hand of the state, it would now be allowed to develop in an almost entirely laissez-faire fashion. The NAP framework established by the NSF at the hand-off of the backbone would be the last major government intervention in the structure of the Internet.
This was true at both the transport layer – the networks, such as Verizon and AT&T, that transported raw data – and the applications layer – software services from portals like Yahoo! to search engines like Google to online stores like Amazon. In our last chapter, we will look at the consequences of this fact, briefly sketching the evolution of the Internet in the U.S. from the mid-1990s onward.

1. Quoted in Richard Wiggins, “Al Gore and the Creation of the Internet,” 2000.
2. “Remarks by Vice President Al Gore at National Press Club,” December 21, 1993.
3. Biographical details on Wolff’s life prior to NSF are scarce – I have recorded all of them that I could find here. Notably, I have not been able to find even his date and place of birth.
4. Schrader and PSINet rode high on the Internet bubble in the late 1990s, acquiring other businesses aggressively, and, most extravagantly, purchasing the naming rights to the football stadium of the NFL’s newest expansion team, the Baltimore Ravens. Schrader tempted fate with a 1997 article entitled “Why the Internet Crash Will Never Happen.” Unfortunately for him, it did happen, bringing about his ouster from the company in 2001 and PSINet’s bankruptcy the following year.
5. To get a sense of how fast the cost of bandwidth was declining: in the mid-1980s, leasing a T1 line from New York to L.A. cost $60,000 per month. Twenty years later, an OC-3 circuit with 100 times the capacity cost only $5,000, a more than thousand-fold reduction in price per capacity. See Fred R. Goldstein, The Great Telecom Meltdown, 95-96. Goldstein states that the 1.5 Mbps T1/DS1 line has 1/84th the capacity of OC-3, rather than 1/100th, a discrepancy I can’t account for.
But this has little effect on the overall math.
6. Office of Inspector General, “Review of NSFNET,” March 23, 1993.
7. Fraser, “NSFNET: A Partnership for High-Speed Networking, Final Report,” 27.
8. Brian Kahin, “RFC 1192: Commercialization of the Internet Summary Report,” November 1990.
9. John Markoff, “Data Network Raises Monopoly Fear,” New York Times, December 19, 1991.
10. Though many other technical details had to be sorted out; see Susan R. Harris and Elise Gerich, “Retiring the NSFNET Backbone Service: Chronicling the End of an Era,” ConneXions, April 1996.
11. The most problematic part of privatization proved to have nothing to do with the hardware infrastructure of the network, but instead with handing over control of the domain name system (DNS). For most of its history, its management had depended on the judgment of a single man – Jon Postel. But businesses investing millions in a commercial internet would not stand for such an ad hoc system. So the government handed control of the domain name system to a contractor, Network Solutions. The NSF had no real mechanism for regulatory oversight of DNS (though they might have done better by splitting control of the different top-level domains (TLDs) among different contractors), and Congress failed to step in to create any kind of regulatory regime. Control changed hands once again in 1998, passing to the non-profit ICANN (Internet Corporation for Assigned Names and Numbers), but the management of DNS remains a thorny problem.
12. The only quasi-exception to this focus on fostering competition was a proposal by Senator Daniel Inouye to reserve 20% of Internet traffic for public use: Steve Behrens, “Inouye Bill Would Reserve Capacity on Infohighway,” Current, June 20, 1994. Unsurprisingly, it went nowhere.
13. Al Gore, “A Short Summary of the Telecommunications Reform Act of 1996.”
14. Jon Brodkin, “AT&T kills DSL, leaves tens of millions of homes without fiber Internet,” Ars Technica, October 5, 2020.
15. Goldstein, The Great Telecom Meltdown, 145.
16. The Clipper chip was a proposed hardware backdoor that would have given the government the ability to bypass any U.S.-created encryption software.

Further Reading

Janet Abbate, Inventing the Internet (1999)
Karen D. Fraser, “NSFNET: A Partnership for High-Speed Networking, Final Report” (1996)
Shane Greenstein, How the Internet Became Commercial (2015)
Yasha Levine, Surveillance Valley: The Secret Military History of the Internet (2018)
Rajiv Shah and Jay P. Kesan, “The Privatization of the Internet’s Backbone Network,” Journal of Broadcasting & Electronic Media (2007)

The Rail Revolution

As we noted last time, twenty years elapsed from the time when Trevithick gave up on the steam locomotive before rails would begin to seriously challenge canals as major transport arteries for Britain, not mere peripheral capillaries. To complete that revolution required improvements in locomotives, better rails, and a new way of thinking about the comparative economics of transportation.

Locomotives: The Trevithick Tradition

The evolution of locomotive technology in the 1810s and 1820s took place entirely in the coal-mining regions of the north, and almost entirely along the River Tyne near Newcastle, into whose waters a torrent of coal flowed over a tangle of railways. Because of this, Trevithick’s most lasting impact on history came not from Penydarren, nor the “dragon,” nor Catch-me-who-can, but from an engine built for Christopher Blackett, proprietor of the Tyneside colliery of Wylam. Blackett’s colliery would become the most prolific locomotive-building center of the 1810s.

In 1804, Blackett had learned of Trevithick’s locomotive, and had a skilled workman who had been at Penydarren reproduce the design for him in Northumberland. Nothing came of this first attempt, as Blackett realized that the five miles of wooden rails at his colliery would never survive the attentions of the five-ton locomotive. He put it to use as a stationary engine instead.
After relaying his tracks in cast iron, he wrote to Trevithick in 1808 about trying again, but by that time the disillusioned inventor had already given up on locomotives for other schemes.[2]

The story of exactly what happened at Wylam next is not entirely clear, and is further muddied by competing claims for precedence as the key figure in the construction of the first real locomotive, claims pursued with a partiality verging on mendacity by the protagonists and their descendants well into the twentieth century.[3] But sometime in the 1810s, Blackett decided to try again, and to shift for himself this time, having the locomotive construction done at his own works under the direction of his own “viewer” (the title for the general manager of a coal mine), William Hedley, with consultation from his smith foreman, Timothy Hackworth.[4]

It may be that Blackett was stimulated to action by the activities of John Blenkinsop at the Middleton Colliery Railway near Leeds. The belief that a smooth wheel could not drive a vehicle on a smooth track still had currency, and inventors continued to look for alternative forms of steam traction: in 1813 one inventor, William Brunton, constructed a literal translation of a horse into mechanical form, a vehicle that pulled itself along with metal legs.[5] Blenkinsop’s solution was a cog railway engine, built by the mechanic Matthew Murray, with a toothed drive wheel running in a rack set on the outside edge of the track. This Middleton engine ran consistently for years afterward, hauling up to thirty wagons at a leisurely three miles an hour.[6]

The Blenkinsop-Murray rack locomotive Salamanca, named after a victorious Anglo-Portuguese battle against Napoleonic forces.

Whether influenced by Blenkinsop or not, Blackett (like Trevithick) used a hand-powered truck to convince himself that a smooth-wheeled vehicle could in fact work, then had Hedley and Hackworth construct his first real locomotive.
They clearly modeled their design on Trevithick’s Penydarren engine, with a return-flue boiler and a flywheel. This first engine was too feeble. Nothing deterred, Blackett tried again. This second engine, known to history as Puffing Billy (it was originally named after Blackett’s daughter Jane), made considerable advances on Trevithick’s plan: it had two alternating pistons, which eliminated the need for a flywheel to sustain the vehicle’s momentum through the dead zones in the stroke. This change also made it easy to supply power to the wheels on both sides, which avoided heavily wearing one side of the rail. Rather than direct gearing, vertical rods connected to small geared spur wheels brought power from the engine down to the wheels. However, Billy was too heavy even for the cast iron track, and consistently broke the rails. So, Blackett tried a third time. This time the builders placed the engine on two four-wheeled trucks, spreading the weight over twice as many wheels. This did the trick. Finally, Wylam had a usable steam locomotive.[7]

The eight-wheeled Wylam locomotive design.

One might wonder why Blackett persisted through so many failures. What we might see in retrospect as determination appeared to most contemporaries as folly, if not madness. Although the steam locomotive concept had a certain romantic appeal to nineteenth-century gearheads, economic forces also made it worthwhile to seek out any possible replacement for horse-power at exactly this time. Since the beginning of the Napoleonic Wars, Britain had been cut off from European trade and had been supplying its own armies overseas, and the price of horses and the grain to feed them rose accordingly. Oat prices in the 1810s were 50% or more higher than they had been in the 1790s, and the demands of the army’s operations also made the horses themselves dear.
So, it is no coincidence that multiple steam locomotive experiments sprang up in this period.[8] George Stephenson had the same cost-cutting reason in mind when he built his first locomotive in 1814. Stephenson, like his father before him, became a steam engine minder in the Newcastle coal district, working his way up from assistant fireman (responsible for stoking the furnace) to brakeman (responsible for regulating the speed of the machinery that lifted cages of coal out of the mine).[9] But he was not an ordinary sort of workman: when his colleagues went to drink and bet on dogfights, he instead disassembled his engine to better understand its workings, cleaned it, and put it together again.[10]

In 1806, his young wife and infant daughter died, leaving him alone with a three-year-old son and infirm parents to care for. He considered leaving for a fresh start in the United States, but lacked the money. Nonetheless, he scraped together the funds to ensure that his son Robert would benefit from a more formal education than he himself had received, and Robert tutored his father in turn, advancing the elder Stephenson’s mechanical and scientific knowledge.

A turn of fortune finally came in 1810, when George repaired a faulty pumping engine that had defied all the attempts of its operators to make it run well enough to drain the pit. Stephenson thus gained a reputation as an “engine-doctor,” a kind of consulting engineer for problem engines in the region. This led to a position as “engine-wright” at the Killingworth High Pit colliery in 1812, with a salary of one hundred pounds a year, marking a permanent departure from the laboring class.[11]

Stephenson, with the support of Killingworth’s owner, Thomas Liddell, was determined to bring down the cost of transporting coal from the mine to the river. He added inclines in several sections, with a rope pull that used the weight of descending wagons to drag returning wagons up the incline.
But he believed still more savings could be found with a steam locomotive. He and the workmen at Killingworth completed their first attempt, the Blücher, in July 1814. It was named in honor of the Prussian general who had helped to secure the defeat of Napoleonic France just a few months before. Stephenson had learned, and borrowed, from the work at Middleton and at Wylam, but introduced one major improvement: the so-called “steam blast,” a suction force created by releasing the spent steam from the cylinders into the furnace exhaust pipe, rather than into the open air. His initial motivation for redirecting the steam may have been to muffle the noise: neighbors complained consistently of the loud squeal of steam from early locomotives. But the ultimate value of this change came from the fact that it acted like a bellows, drawing air through the furnace and thus combusting the coal more vigorously, delivering more power to the wheels.

With the enhanced power from the steam blast, Stephenson had an economically sound engine, but it still ran in an unsatisfactory, jerky fashion. Stephenson identified the problem as the gears used to deliver power to the wheels in all locomotives since Trevithick’s. So, in 1815 he had a second locomotive constructed, which dispensed with the gearing by sending power from the piston through a rigid connecting rod directly to a pin on the wheel: the engine could thus work the wheel like a crank. This was trickier than it sounds, because he could not rely on the left and right rails running perfectly even. The connecting rod therefore required a ball-and-socket joint, so that each side could move up and down with the axle as it tilted one way or the other.[12]

Stephenson’s Killingworth engine.

Rails: A Materials Revolution

So, the locomotive advanced bit by bit, becoming ever more powerful, reliable, and efficient. But the iron beast strode on feet of clay – its rails. Well, in fact, the rails were made of iron, too.
But they did keep breaking. The traditional railway had to be, in effect, reinvented to serve as a suitable substructure for the locomotive. This created something of a catch-22, since to prove the value of the locomotive required first adopting rail designs that were themselves unproven and more costly than the status quo. Promoters of the locomotive would have to sell the capitalists building new railways on the rail and the machine to run upon it at the same time.

In the first decades of the nineteenth century, vertical, flat-topped rails replaced the L-shaped plateway rails that were common around 1800 in new railway construction. Flanges on the inner lip of the wheel kept the vehicle on course. This approach reduced friction and used less metal per yard of track. In the 1820s locomotive makers also began to use coned wheels, with a narrower radius at the outside than at the inside, which greatly improved their ability to hold a consistent line on the track, especially around corners. So far, all of this was in effect a rediscovery of what had been standard practice on wooden railways in the eighteenth century.[13]

A joint patent between George Stephenson and the chemist and engineer William Losh made some minor improvements to the design of cast iron rails, but the necessary improvements in rail design to make the steam locomotive a success appeared in 1820 in the work of John Birkinshaw. Birkinshaw introduced a whole host of innovations all at once. Most importantly, he had figured out how to roll sections of wrought iron rail that would be far tougher than the cast iron equivalent, allowing locomotives to swell in size and weight without concern for breaking the rails. He also replaced the traditional flat top for the rail with a convex curve, which would provide a smooth surface to ride on even if (as was often the case) the rail was not installed perfectly vertically.
He realized that the sides of the rail were not needed for strength, and proposed the T-shaped rail cross-section that is still familiar today, saving on weight and cost. Finally, he found that he could produce rail in up to eighteen-foot-long sections, six times the standard for cast-iron rails, reducing the number of joints that tended to jostle the machinery and the load.[14]

Rail cross-sections from Birkinshaw’s patent. Note the curved top surface and the now-common T-shape of the left- and right-most designs.

The basic design of railways for the steam age was now in place, in a form that would not change much until the Bessemer process made steel rails practical decades later. Stephenson recognized the superiority of Birkinshaw’s rails to such an extent that he jilted his own erstwhile partner, Losh, and chose wrought-iron rails for the first new railway for which he served as chief engineer, the Stockton and Darlington. This railway, opened in 1825, represented the emergence of the steam locomotive from colliery experiments and curiosities into the field of general public economic interest.

Economics: The Virtue of Speed

You’ll recall that the motivation for the various experiments with steam locomotives in the 1810s was to save money on horses – the steam engine was seen as a potentially cheaper source of traction within the framework of the existing system of colliery railways. However, there was a grander vision for rail transport that had been percolating in the background since as early as 1800, when William Thomas, a colliery engineer, proposed to the Newcastle Literary and Philosophical Society that the horse-drawn railway could serve as a general replacement for road transport, carrying goods and passengers between cities. A fellow visionary proposed that costs could be further reduced with supplementary steam engines along the way to pull the carriages along with chains.
James Anderson, a member of various philosophical and agricultural societies, wrote with enthusiasm of this proposal: “Around every market you may suppose a number of concentric circles drawn, within each of which certain articles are marketable, which were not so before, and thus become the source of wealth and prosperity to many individuals. Diminish the expence of carriage but one farthing, and you widen the circle ; you form, as it were, a new creation, not only of stones, and earth, and trees, and plants, but of men also, and, what is more, of industry, of happiness, and joy.”[15]

An expression became commonplace that the railway would “annihilate space and time.” It seems to have originated in a couplet from the 1720s as a hyperbolic declaration of the despair of parted lovers: “Ye gods! annihilate but space and time, And make two lovers happy.”[16] But railroad visionaries would deploy it again and again in the decades to come in an economic and technological sense.

William James, a lawyer and land agent born in 1771, was not the first railroad visionary, but he was the first to match such dreams with realistic means for achieving them. He became involved with railroads in 1801, when he helped fund the first one opened to public custom, the Surrey Iron Railway. In 1821, after surveying the various locomotive builders, he was most impressed with Stephenson, and struck a deal to promote his locomotives and railways. James connected Stephenson to the partners of the Stockton and Darlington Railway, a group of colliers who needed a link to the River Tees for their coal. With Stephenson as their chief engineer, they built the first public steam railway, twenty-five miles of rail open to anyone willing to pay to transport their cargo (or passengers).
It was through speed that the locomotive would prove its worth as a form of general communication, not a mere adjunct to collieries and canals, and it was at Stockton and Darlington that the locomotive first proved it could be significantly faster than a team of horses: when the railway first opened on September 27, 1825, the Stephenson locomotive pulled its hundred-ton load on the downhill run at a brisk pace of ten-to-twelve miles-per-hour. Horsemen attempting to follow the locomotive were unable to keep pace through the wall- and hedge-strewn terrain alongside the railroad.[17]

This speed was anticipated by an anonymous 1824 Mechanics Magazine article on the economic advantages of railways. The author pointed out that a horse pulled at its maximum power only at low speeds (say, two-and-a-half miles-per-hour). At higher speeds more and more of its power went to moving its own body, until at twelve miles-per-hour it could pull no load at all. Moreover, speed was even more of a handicap for the horse on a canal, because the friction of the water on the barge rose with the square of the speed. Neither disadvantage applied to a steam locomotive on rails, which could pull at ever higher speeds while losing relatively little power to air resistance. At two-and-a-half miles per hour, a given force would pull almost four times the weight in a canal barge as it would on rails, but at thirteen-and-a-half miles-per-hour the advantage was more than reversed: the rail’s power was undiminished but the canal load was reduced by a factor of almost thirty.[18]

This doctrine of speed was a new idea in the world of transportation. For millennia, bulk transport on land had depended on animals and barges plodding along at a couple of miles per hour. Economizing on transportation costs meant assuming low speeds as a given, and focusing on lowering the cost of pulling a single load, just as the locomotive builders of the 1810s had tried to do.
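The arithmetic behind that near-thirtyfold figure can be checked with a short sketch. This is an illustration of the square-law argument, not a reconstruction of the magazine’s own calculation: the function name is mine, and the 4-to-1 low-speed advantage of the canal is taken from the text above.

```python
# Illustrative sketch: canal resistance per ton rises with the square of
# speed, so the load a fixed tractive force can tow falls as 1/v^2.
# Rail resistance, by contrast, is roughly independent of speed.

def canal_load(base_load, base_speed, speed):
    """Load a fixed force can tow on a canal, scaled from a reference speed."""
    return base_load * (base_speed / speed) ** 2

rail_load = 1.0          # constant at any speed (air resistance neglected)
low, high = 2.5, 13.5    # miles per hour, the speeds cited in the article

# At 2.5 mph the canal barge carries ~4x the rail load for the same force...
print(canal_load(4.0, low, low))   # 4.0

# ...but at 13.5 mph the canal load shrinks by (13.5/2.5)^2 ~ 29x,
# the "factor of almost thirty" in the text, while rail is undiminished.
print(4.0 / canal_load(4.0, low, high))
```

The square-law scaling is why the doctrine of speed favored rails so lopsidedly: the canal’s advantage at a walking pace inverts completely at locomotive speeds.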
But with higher speeds, more loads could be pulled with the same capital investment in a given time period. What’s more, entirely new markets could be opened up: delivery of fresh produce to urban markets, and rapid inter-urban passenger service. The Mechanics Magazine article made an immediate impression, and the doctrine of speed quickly became the dogma of the rail promoters. Speed would make the echoing refrain of “the annihilation of space and time” a reality.

Settling the Question

But the promoters of the steam locomotive had not yet settled the question of what the future of land transportation would look like. The creators of the Stockton and Darlington line hedged their bets, including two stationary engines for pulling trains up steep sections and using horses for much of the cargo.[19] Skeptics and critics of the steam locomotive could still readily be found. Much of the landed gentry worried about the effect of screeching locomotives on their livestock and their land values. Canal and turnpike operators, of course, feared the competition. Other critics worried that locomotives would exhaust the country’s coal reserves, while still others questioned the safety of operating a vehicle at such high speeds.[20] One commentator on a proposed railroad at Woolwich wrote that

…we should as soon expect the people of Woolwich to suffer themselves to be fired off upon one of Congreve’s ricochet rockets, as trust themselves to the mercy of such a machine, going at such a rate… if ponderous bodies, moving with a velocity of ten or twelve miles an hour, were to impinge on any sudden obstruction, or a wheel break, they would be shattered like glass bottles dashed on a pavement ; then what would become of the Woolwich rail-road passengers, in such a case, whirling along at sixteen or eighteen miles an hour…?
We trust, however, that Parliament will, in all the rail-roads it may sanction, limit the speed to eight or nine miles an hour, which… is as great as can be ventured upon with safety.[21]

Stephenson’s next project, the Liverpool and Manchester Railway, had to fight past these critics for Parliamentary approval. It was a landmark railway in two respects: first, by building an inter-urban link, its shareholders were committing to the railroad as a general form of transportation; this was not only or even primarily a means to bring coal to market. Second, those same shareholders committed wholeheartedly to steam traction; the traditional option of the horse was right out. Steam would pull their trains; the question was how: stationary engines or locomotives, and if a locomotive, of what design? To decide, they held a competition with a five-hundred-pound prize for the best engine, known as the Rainhill trials. One of the directors of the railway entered the Cycloped, a carriage driven by a treadmill that was driven in turn by a horse walking atop it. More plausible entries included Sans Pareil, a locomotive design by former Wylam locomotive mechanic Timothy Hackworth, and Novelty, built by two London engineers.[22]

The winning entry, however, came from George’s son, Robert. After returning from his mining ventures in the New World in 1827, he had apprenticed in locomotive construction under his father. But he built his own masterwork, Rocket, for the Liverpool and Manchester. Its great design advance lay in its multi-tubular boiler: rather than a single return flue pipe, it had twenty-five separate copper tubes to carry the hot gases from the firebox through the boiler. This greatly increased the surface area available to transfer heat to the water in the boiler. The narrower tubes also eliminated a serious problem with the steam blast: its tendency to suck burning embers straight out of the firebox along with the exhaust, wasting fuel.
The new boiler design made the Rocket the most powerful locomotive built to date, capable of speeds of thirty miles-per-hour, on a par with the highest speeds humans had ever experienced (on the back of a galloping horse). A London reporter who witnessed the unladen Rocket whizzing by wrote that “[s]o astonishing was the celerity with which the engine, with its apparatus, darted past the spectators, that it could be compared to nothing but the rapidity with which the swallow darts through the air. Their astonishment was complete, every one exclaiming involuntarily, ‘The power of steam is unlimited!’”[23]

Stephenson’s Rocket [National Railway Museum, UK / CCA 4.0].

Despite Rocket’s success, the centrality of the Stephensons to the history of the locomotive was more contingent than necessary, resulting from George’s central place in the development of two of the most important early lines (the Stockton and Darlington and the Liverpool and Manchester). Ever since the burst of new designs in the 1810s, stimulated by the high price of horse feed, Britain had sustained multiple lines of locomotive development, and the basic skills required were familiar to anyone with experience in boiler and steam engine design. Hackworth’s Sans Pareil was almost as good as Rocket and also saw service on the Liverpool and Manchester line.

In 1831, the Liverpool and Manchester carried 445,000 passengers and 54,000 tons of cargo. The turnpike roads and canals along the line suffered a sharp decline in revenue and had to lower their charges. The former stagecoach lines between the cities became instantly defunct. The steam railway had proved its economic worth, and by 1837 Britain could boast eighty railway companies and a thousand miles of track.[24]

A train on the Liverpool and Manchester railway, crossing the peat bog of the Chat Moss.

Still, the question was not altogether settled.
For another fifteen years or so, entrepreneurs put forward a variety of alternative means of transport: several tried to revive the idea of steam road carriages, others promoted atmospheric railways that would operate by creating a vacuum on one side of the carriage. Canal owners were especially assiduous in searching for some other way forward that would not render their investments obsolete: barges pulled by locomotives on the tow path, barges pulled by paddle or screw steamboats, a tug that pulled itself along rails attached to either side of the canal. None of these could match the speed of the railway locomotive, and all struggled with the problem of locks.[25] By the early 1850s, railways carried more cargo in Britain than the canal system.

Steam railways had spread across the United States and much of continental Europe, though European rails tended to follow a state-led development model, in contrast to the helter-skelter private buildout in the Anglo-American sphere. Despite talk among railway visionaries of unifying city and countryside, the railway tended to strengthen the cultural and economic centrality of the urban centers. Traffic between cities increased rapidly: that between Liverpool and Manchester quadrupled. Horse travel did not disappear, but was repurposed: local coaches and omnibuses multiplied to serve the flood of urban visitors. The products of the country became more readily available to the city than ever before: cows arrived in cattle cars on the hoof, to be butchered on site for urban middle- and upper-class customers; fresh milk, once a dubious prospect within a place like Paris, now arrived daily by railcar. Long-distance journeys across the whole of Britain became possible within a single day: in 1763 the stagecoach from London to Edinburgh took two weeks; by 1835 the roads and coaches had improved enough to do it in forty-eight hours; but in 1849 a rail passenger could make the journey in just twelve hours.[26]

Neither canals nor turnpikes, important as they were to the development of Europe’s economy, had transformed everyday life to the same degree as the steam locomotive. The question was closed. Rails had won.

ARPANET, Part 2: The Packet

By the end of 1966, Robert Taylor had set in motion a project to interlink the many computers funded by ARPA, a project inspired by the “intergalactic network” vision of J.C.R. Licklider. Taylor put the responsibility for executing that project into the capable hands of Larry Roberts. Over the following year, Roberts made several crucial decisions which would reverberate through the technical architecture and culture of ARPANET and its successors, in some cases for decades to come. The first of these in importance, though not in chronology, was to determine the mechanism by which messages would be routed from one computer to another.

The Problem

If computer A wants to send a message to computer B, how does the message find its way from the one to the other? In theory, one could allow any node in a communications network to communicate with any other node by linking every such pair with its own dedicated cable. To communicate with B, A would simply send a message over the outgoing cable that connects to B. Such a network is termed fully-connected. At any significant size, however, this approach quickly becomes impractical, since the number of connections necessary increases with the square of the number of nodes.1 Instead, some means is needed for routing a message, upon arrival at some intermediate node, on toward its final destination.

As of the early 1960s, two basic approaches to this problem were known. The first was store-and-forward message switching. This was the approach used by the telegraph system. When a message arrived at an intermediate location, it was temporarily stored there (typically in the form of paper tape) until it could be re-transmitted out to its destination, or to another switching center closer to that destination. Then the telephone appeared, and a new approach was required.
A multiple-minute delay for each utterance in a telephone call to be transcribed and routed to its destination would result in an experience rather like trying to converse with someone on Mars. Instead the telephone system used circuit switching. The caller began each telephone call by sending a special message indicating whom they were trying to reach. At first this was done by speaking to a human operator, later by dialing a number which was processed by automatic switching equipment. The operator or equipment established a dedicated electric circuit between caller and callee. In the case of a long-distance call, this might take several hops through intermediate switching centers. Once this circuit was completed, the actual telephone call could begin, and that circuit was held open until one party or the other terminated the call by hanging up.

The data links that would be used in ARPANET to connect time-shared computers partook of qualities of both the telegraph and the telephone. On the one hand, data messages came in discrete bursts, like the telegraph, unlike the continuous conversation of a telephone. But these messages could come in a variety of sizes for a variety of purposes, from console commands only a few characters long to large data files being transferred from one computer to another. If the latter suffered some delays in arriving at their destination, no one would particularly mind. But remote interactivity required very fast response times, rather like a telephone call.

One important difference between computer data networks and both the telephone and the telegraph was the error-sensitivity of machine-processed data. A single character in a telegram changed or lost in transmission, or a fragment of a word dropped in a telephone conversation, were matters unlikely to seriously impair human-to-human communication.
But if noise on the line flipped a single bit from 0 to 1 in a command to a remote computer, that could entirely change the meaning of that command. Therefore every message would have to be checked for errors, and re-transmitted if any were found. Such repetition would be very costly for large messages, which would be all the more likely to be disrupted by errors, since they took longer to transmit. A solution to these problems was arrived at independently on two different occasions in the 1960s, but the later instance was the first to come to the attention of Larry Roberts and ARPA.

The Encounter

In the fall of 1967, Roberts arrived in Gatlinburg, Tennessee, hard by the forested peaks of the Great Smoky Mountains, to deliver a paper on ARPA’s networking plans. Almost a year into his stint at the Information Processing Technology Office (IPTO), many areas of the network design were still hazy, among them the solution to the routing problem. Other than a vague mention of blocks and block size, the only reference to it in Roberts’ paper is in a brief and rather noncommittal passage at the very end: “It has proven necessary to hold a line which is being used intermittently to obtain the one-tenth to one second response time required for interactive work. This is very wasteful of the line and unless faster dial up times become available, message switching and concentration will be very important to network participants.”2

Evidently, Roberts had still not entirely decided whether to abandon the approach he had used in 1965 with Tom Marrill, that is to say, connecting computers over the circuit-switched telephone network via an auto-dialer. Coincidentally, however, someone else was attending the same symposium with a much better thought-out idea of how to solve the problem of routing in data networks. Roger Scantlebury had crossed the Atlantic, from the British National Physical Laboratory (NPL), to present his own paper.
Scantlebury took Roberts aside after hearing his talk, and told him all about something called packet-switching. It was a technique his supervisor at the NPL, Donald Davies, had developed. Davies’ story and achievements are not generally well-known in the U.S., although in the fall of 1967, Davies’ group at the NPL was at least a year ahead of ARPA in its thinking. Davies, like many early pioneers of electronic computing, had trained as a physicist. He graduated from Imperial College, London in 1943, when he was only 19 years old, and was immediately drafted into the “Tube Alloys” program – Britain’s code name for its nuclear weapons project. There he was responsible for supervising a group of human computers, using mechanical and electric calculators to crank out numerical solutions to problems in nuclear fission.3

After the war, he learned from the mathematician John Womersley about a project he was supervising out at the NPL, to build an electronic computer that would perform the same kinds of calculations at vastly greater speed. The computer, designed by Alan Turing, was called ACE, for “automatic computing engine.” Davies was sold, and got himself hired at NPL as quickly as he could. After contributing to the detailed design and construction of the ACE machine, he remained heavily involved in computing as a research leader at NPL. In 1965 he happened to be in the United States for a professional meeting in that capacity, and used the occasion to visit several major time-sharing sites to see what all the buzz was about. In the British computing community, time-sharing in the American sense of sharing a computer interactively among multiple users was unknown.
Instead, time-sharing meant splitting a computer’s workload across multiple batch-processing programs (to allow, for example, one program to proceed while another was blocked reading from a tape).4 Davies’ travels took him to Project MAC at MIT, RAND Corporation’s JOSS Project in California, and the Dartmouth Time-Sharing System in New Hampshire. On the way home one of his colleagues suggested they hold a seminar on time-sharing to inform the British computing community about the new techniques that they had learned about in the U.S. Davies agreed, and played host to a number of major figures in American computing, among them Fernando Corbató (creator of the Compatible Time-Sharing System at MIT), and Larry Roberts himself.

During the seminar (or perhaps immediately after), Davies was struck with the notion that the time-sharing philosophy could be applied to the links between computers, as well as to the computers themselves. Time-sharing computers gave each user a small time slice of the processor before switching to the next, giving each user the illusion of an interactive computer at their fingertips. Likewise, by slicing up each message into standard-sized pieces, which Davies called “packets,” a single communications channel could be shared by multiple computers or multiple users of a single computer. Moreover, this would address all the aspects of data communication that were poorly served by telephone- or telegraph-style switching. A user engaged interactively at a terminal, sending short commands and receiving short responses, would not have their single-packet messages blocked behind a large file transfer, since that transfer would be broken into many packets. And any corruption in such large messages would only affect a single packet, which could easily be re-transmitted to complete the message.
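The mechanics of Davies’ idea can be made concrete with a minimal sketch. This is an illustration only, not the NPL design: the function names and the sequence-number scheme are my own inventions, though the roughly 1000-byte packet size matches the figure discussed below.

```python
# A minimal sketch of slicing messages into fixed-size packets, so that a
# single line can be shared and a corrupted packet can be re-sent alone.

PACKET_SIZE = 1000  # bytes of payload per packet (the NPL analysis figure)

def packetize(message: bytes):
    """Slice a message into fixed-size packets, each tagged with its index."""
    return [
        (seq, message[i:i + PACKET_SIZE])
        for seq, i in enumerate(range(0, len(message), PACKET_SIZE))
    ]

def reassemble(packets):
    """Reorder by sequence number and rejoin; packets may arrive out of order."""
    return b"".join(payload for _, payload in sorted(packets))

big_transfer = bytes(250_000)   # a large file transfer: 250 packets
command = b"LIST FILES"         # an interactive command: a single packet

# Interleaved on a shared line, the short command waits for at most one
# in-flight packet (~1000 bytes), never for the whole 250,000-byte file.
print(len(packetize(big_transfer)))   # 250
print(len(packetize(command)))        # 1

# And an error in transit costs only one packet's re-transmission: the
# receiver can slot the re-sent packet back in by its sequence number.
packets = packetize(big_transfer)
assert reassemble(packets) == big_transfer
```

The sketch shows both advantages at once: short interactive traffic is never stuck behind bulk traffic, and the unit of error recovery shrinks from the whole message to a single packet.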
Davies wrote up his ideas in an unpublished 1966 paper, entitled “Proposal for a Digital Communication Network.” The most advanced telephone networks were then on the verge of computerizing their switching systems, and Davies proposed building packet-switching into that next-generation telephone network, thereby creating a single wide-band communications network that could serve a wide variety of uses, from ordinary telephone calls to remote computer access. By this time Davies had been promoted to Superintendent of NPL, and he formed a data communications group under Scantlebury to flesh out his design and build a working demonstration.

Over the year leading up to the Gatlinburg conference, Scantlebury’s team had thus worked out the details of how to build a packet-switching network. The failure of a switching node could be dealt with by adaptive routing with multiple paths to the destination, and the failure of an individual packet by re-transmission. Simulation and analysis indicated an optimal packet size of around 1000 bytes – much smaller and the loss of bandwidth from the header metadata required on each packet became too costly, much larger and the response times for interactive users would be impaired too often by large messages. The paper delivered by Scantlebury contained details such as a packet layout format and an analysis of the effect of packet size on network delay.

Meanwhile, Davies’ and Scantlebury’s literature search turned up a series of detailed research papers by an American who had come up with roughly the same idea several years earlier. Paul Baran, an electrical engineer at RAND Corporation, had not been thinking at all about the needs of time-sharing computer users, however.
RAND was a Defense Department-sponsored think tank in Santa Monica, California, created in the aftermath of World War II to carry out long-range planning and analysis of strategic problems in advance of direct military needs.[^sdc] Baran’s goal was to ward off nuclear war by building a highly robust military communications net, which could survive even a major nuclear attack. Such a network would make a Soviet preemptive strike less attractive, since it would be very hard to knock out America’s ability to respond by hitting a few key nerve centers. To that end, Baran proposed a system that would break messages into what he called message blocks, which could be independently routed across a highly-redundant mesh of communications nodes, only to be reassembled at their final destination.

[^sdc]: System Development Corporation (SDC), the primary software contractor to the SAGE system and the site of one of the first networking experiments, as discussed in the last segment, had been spun off from RAND.

ARPA had access to Baran’s voluminous RAND reports, but disconnected as they were from the context of interactive computing, their relevance to ARPANET was not obvious. Roberts and Taylor seem never to have taken notice of them. Instead, in one chance encounter, Scantlebury had provided everything to Roberts on a platter: a well-considered switching mechanism, its applicability to the problem of interactive computer networks, the RAND reference material, and even the name “packet.” The NPL’s work also convinced Roberts that higher speeds would be needed than he had contemplated to get good throughput, and so he upgraded his plans to 50 kilobits-per-second lines. For ARPANET, the fundamentals of the routing problem had been solved.5

The Networks That Weren’t

As we have seen, not one, but two parties beat ARPA to the punch on figuring out packet-switching, a technique that has proved so effective that it is now the basis of effectively all communications.
Why, then, was ARPANET the first significant network to actually make use of it? The answer is fundamentally institutional. ARPA had no official mandate to build a communications network, but it did have a large number of pre-existing research sites with computers, a “loose” culture with relatively little oversight of small departments like the IPTO, and piles and piles of money. Taylor’s initial 1966 request for ARPANET came to $1 million, and Roberts continued to spend that much or more in every year from 1969 onward to build and operate the network.6 Yet for ARPA as a whole this amount of money was pocket change, and so none of his superiors worried too much about what Roberts was doing with it, so long as it could be vaguely justified as related to national defense.

By contrast, Baran at RAND had no means or authority to actually do anything. His work was pure research and analysis, which might be applied by the military services, if they desired to do so. In 1965, RAND did recommend his system to the Air Force, which agreed that Baran’s design was viable. But the implementation fell within the purview of the Defense Communications Agency, which had no real understanding of digital communications. Baran convinced his superiors at RAND that it would be better to withdraw the proposal than allow a botched implementation to sully the reputation of distributed digital communication. Davies, as Superintendent of the NPL, had rather more executive authority than Baran, but a more limited budget than ARPA, and no pre-existing social and technical network of research computer sites.
He was able to build a prototype local packet-switching “network” (it had only one node, but many terminals) at NPL in the late 1960s, with a modest budget of £120,000 over three years.7 ARPANET spent roughly half that on annual operational and maintenance costs alone at each of its many network sites, excluding the initial investment in hardware and software.8 The organization that would have had the power to build a large-scale British packet-switching network was the Post Office, which operated the country’s telecommunications networks in addition to its traditional postal system. Davies managed to interest a few influential Post Office officials in his ideas for a unified, national digital network, but to change the momentum of such a large system was beyond his power. Licklider, through a combination of luck and planning, had found the perfect hothouse for his intergalactic network to blossom in.

That is not to say that everything except for the packet-switching concept was a mere matter of money. Execution matters, too. Moreover, several other important design decisions defined the character of ARPANET. The next we will consider is how responsibilities would be divided between the host computers sending and receiving a message, versus the network over which they sent it.

Further Reading

Janet Abbate, Inventing the Internet (1999)

Katie Hafner and Matthew Lyon, Where Wizards Stay Up Late (1996)

Leonard Kleinrock, “An Early History of the Internet,” IEEE Communications Magazine (August 2010)

Arthur Norberg and Julie O’Neill, Transforming Computer Technology: Information Processing for the Pentagon, 1962-1986 (1996)

M. Mitchell Waldrop, The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal (2001)

The Hobby Computer Culture

[This post is part of “A Bicycle for the Mind.” The complete series can be found here.]

From 1975 through early 1977, the use of personal computers remained almost exclusively the province of hobbyists who loved to play with computers and found them inherently fascinating. When BYTE magazine came out with its premier issue in 1975, the cover called computers “the world’s greatest toy.” When Bill Gates wrote about the value of good software in the spring of 1976, he framed his argument in terms of making the computer interesting, not useful: “…software makes the difference between a computer being a fascinating educational tool for years and being an exciting enigma for a few months and then gathering dust in the closet.”[1] Even as late as 1978, an informed observer could still consider interest in personal computers to be exclusive to a self-limiting community of hobbyists. Jim Warren, editor of Dr. Dobb’s Journal of Computer Calisthenics and Orthodontia, predicted a maximum market of one million home computers, expecting them to be somewhat more popular than ham radio, which attracted about 300,000.[2]

A survey conducted by BYTE magazine in late 1976 shows that these hobbyists were well-educated (72% had at least a bachelor’s degree), well-off (with a median annual income of $20,000, or $123,000 in 2025 dollars), and overwhelmingly (99%) male. Based on the letters and articles appearing in BYTE in that same bicentennial year of 1976, it is clear that what interested these hobbyists above all was the computers themselves: which one to buy, how to build it, how to program it, how to expand it and accessorize it.[3] Discussion of practical software applications appeared infrequently. One intrepid soul went so far as to hypothesize a microcomputer-based accounting program, but he doesn’t seem to have actually written it. When mention of software appeared it came most often in the form of games.
The few with more serious scientific and statistical work in mind for their home computer complained of the excessive discussion of “super space electronic hangman life-war pong.” Star Trek games were especially popular: in July, D.E. Hipps of Miami advertised a Star Trek BASIC game for sale for $10; in August, Glen Brickley of Florissant, Missouri wrote about demoing his “favorite version of Star Trek” for friends and neighbors; and in August, BYTE published, with pride, “the first version of Star Trek to be printed in full in BYTE” (though the author consistently misspelled “phasers” as “phasors”). Most computer hobbyists were electronics hobbyists first, and the electronics hobby had grown up side-by-side with modern science fiction and shared its fascination with the possibilities of future technology. We can guess that this is what drew them to this rare piece of popular culture that took the future and the “what-ifs” it poses seriously, rather than treating it as a mere backdrop for adventure stories.[4] The June 1976 issue of Interface is one of many examples of the hobbyists’ ongoing fascination with Star Trek. Other than a shared interest in computers—and, apparently, Star Trek—three kinds of organizations brought these men together: local clubs, where they could share expertise in software and hardware and build a sense of belonging and community; magazines like BYTE, where they could learn about new products and get project ideas; and retail stores, where they could try out the latest models and shoot the shit with fellow enthusiasts. The computer hobbyists were also bound by a force more diffuse than any of these concrete social forms: a shared mythology of the origins of hobby computing that gave broader social and cultural meaning to their community.
The Clubs The most famous computer club of all, of course, is the Homebrew Computer Club, headquartered in Silicon Valley, whose story is well documented in several excellent sources, especially Steven Levy’s book Hackers. Its fame is well-deserved, for its role as the incubator of Apple Computer, if nothing else. But the focus of the historical literature on Homebrew as the computer club has tended to distort the image of American personal computing as a whole. The Homebrew Computer Club had a distinctive political bent, due to the radical left leanings of many of its leading members, including co-founder Fred Moore. In 1959, Moore had gone on hunger strike against the Reserve Officers’ Training Corps (ROTC) program at Berkeley, which had been compulsory for all students since the nineteenth century. He later became a draft resister and published a tract against institutionalized learning, Skool Resistance. Yet the bulk of Homebrew’s membership stubbornly stuck to technical hobbyist concerns, despite Moore’s efforts to turn their attention to social causes such as aiding the disabled or protesting nuclear weapons. To the extent that personal computing had a politics, it was a politics of independence, not social justice.[5] Cover of the second Homebrew Computer Club newsletter, with sketches of members. Only Fred Moore is labeled, but the man with glasses on the far right is likely Lee Felsenstein. Moreover, excitement about personal computing was not at all a phenomenon confined to the Bay Area. By the summer of 1975, Altair shipments had begun in earnest, and clubs formed across the United States and beyond where enthusiasts could share information and ask for help with their new (or prospective) machines. The movement continued to grow as new companies sprang up and shipped more hobby machines.
Over the course of 1976, dozens of clubs advertised their existence or attempted to find a membership through classifieds in BYTE, from the Oregon Computer Club headquartered in Portland (with a membership of forty-nine), to a proposed club in Saint Petersburg, Florida, mooted by one Allen Swan. But, as one might expect, the largest and most successful clubs were concentrated in and around major metropolitan areas with a large pool of existing computer professionals, such as Los Angeles, Chicago, and New York City.[6] The Amateur Computer Group of New Jersey convened for the first time in June 1975, under the presidency of Sol Libes. Libes, a professor at Union County College, was another of those computer lovers who had worked on their own home computers for years before the arrival of the Altair, and then suddenly found themselves joined by hundreds of like-minded hobbyists once computing became somewhat more accessible. Libes’s club grew to 1,600 members by the early 1980s, had a newsletter and software library, sponsored the annual Trenton Computer Festival, and is likely the only organization from the hobby computer years other than Apple and Microsoft to still survive today.[7] The Chicago Area Computer Hobbyist Exchange attracted several hundred members to its first meeting at Northwestern University in the summer of 1975. Like many of the larger clubs, it organized information exchange around “special interest groups” for each brand of computer (Digital Group, IMSAI, Altair, etc.). The club also gave birth to one of the most significant novel software applications to emerge from the personal computer hobby, the bulletin board system—we will have more to say on that later in this series.[8] The most ambitious—one might say hubristic—of the clubs was the Southern California Computer Society (SCCS) of Los Angeles, founded in Don Tarbell’s apartment in June of 1975.
Within the year the club could boast of a glossy club magazine (in contrast to the cheap newsletters of most clubs) called Interface, plans to develop a public computer center, and—in answer to the challenge of Micro-Soft BASIC—ideas about distributing their own royalty-free program library, including “’branch’ repositories that would reproduce and distribute on a local basis.”[9] Not content with a regional purview, the leadership also encouraged the incorporation of far-flung club chapters into their organization; in that spirit, they changed their name in early 1977 to the International Computer Society. Several chapters opened in California, and more across the U.S., from Minnesota to Virginia, but interest in SCCS/ICS chapters could be found as far away as Mexico City, Japan, and New Zealand. Across all of these chapters, the group accumulated about 8,000 members.[10] The whole project, however, ran atop a rickety foundation of amateur volunteer work, and fell apart under its own weight. First came the breakdown in the relationship between the club and the publisher of Interface, Bob Jones. Whether frustrated with the club’s failure to deliver articles to fill the magazine (his version), or greedy to make more money as a for-profit enterprise (the club’s version), Jones broke away to create Interface Age, leaving SCCS scrambling to start up its own replacement magazine. Expensive lawsuits flew in both directions. Then came the mismanagement of the club’s group buy program: intended to save members money by pooling their purchases into a large-scale order with volume discounts, it instead lost thousands of members’ dollars to a scammer: “a vendor,” as one wry commenter put it, “who never vended” (the malefactor traded under the moniker of “Colonel Winthrop”).[11] The December 1976 issues of SCCS Interface and Interface Age. Which is authentic, and which the impostor? More lawsuits ensued.
Squeezed by money troubles, the club leadership raised dues to $15 annually and sent out a plea for early renewal and prepayment of multiple years’ dues. The club magazine missed several issues in 1977, then ceased publication in September. The ICS sputtered on into 1978 (Gordon French of Processor Technology announced his candidacy for the club presidency in March), then disappeared from the historical record.[12] Whatever the specific historic accidents that brought down SCCS, the general project—a grand non-profit network that would provide software, group buying programs, and other forms of support to its members—was doomed by larger historical forces. Though many clubs survived into the 1980s or beyond, they waned in significance with the maturing of commercial software and the turn of personal computer sellers away from hobbyists and towards the larger and more lucrative consumer and business markets. Newer computer products no longer required access to secret lore to figure out what to do with them, and most buyers expected to get any support they did need from a retailer or vendor, not to rely on mutual support networks of other buyers. One-to-one commercial relations between buyer and seller became more common than the many-to-many communal webs of the hobby era. The Retailers The first buyers of the Altair could not find it in any shop. Every transaction occurred via a check sent to MITS, sight unseen, in the hopes of receiving a computer in exchange. This way of doing business suited the hardcore enthusiast just fine, but anyone with uncertainty about the product—whether they wanted a computer at all, which model was best, how much memory or other accessories they needed—was unlikely to bite. It had disadvantages for the manufacturer, too. Every transaction incurred overhead for payment processing and shipping, and demand was uncertain and unpredictable week to week and month to month.
Without any certainty about how many buyers would send in checks next month, the manufacturer had to scale up production carefully or risk overcommitting and going bust. Retail computer shops would alleviate the problems of both sides of the market. For buyers, they provided the opportunity to see, touch, and try out various computer models, and to get advice from knowledgeable salespeople. For sellers, they offered larger, more predictable orders, improving their cash flow and reducing the overhead of managing direct sales. The very first computer shops appeared around the same time that the clubs began spreading, in the summer of 1975. But they did not open in large numbers until 1976, after the hardcore enthusiasts had primed the pump for further sales to those who had seen or heard about the computers being purchased by their friends or co-workers. The earliest documented computer shop, Dick Heiser’s Computer Store, opened in July 1975 in a 1,000-square-foot storefront on Pico Boulevard in West Los Angeles. Heiser had attended the very first SCCS meeting in Don Tarbell’s apartment, and, seeing the level of excitement about the Altair, signed up to become the first licensed Altair dealer. Paul Terrell’s Byte Shop followed later in the year in Mountain View, California. In March of 1976, Stan Veit’s Computer Mart opened on Madison Avenue in New York City and Roy Borrill’s Data Domain in Bloomington, Indiana (home to Indiana University). Within a year, stores had sprouted across the United States like spring weeds: five hundred nationwide by July 1977.[13] Paul Terrell’s Byte Shop at 1063 El Camino Real in Mountain View. Ed Roberts tried to enforce an exclusive license on Altair dealers, based on the car dealership franchise model. But the industry was too fast-moving and MITS too cash- and capital-strapped to make this workable. Hungry new competitors, from IMSAI to Processor Technology, entered the market constantly with new-and-improved models.
Many buyers weren’t satisfied with only Altair offerings; MITS couldn’t supply dealers with enough stock to satisfy those who were; and it undercut even its few loyal dealers by continuing to offer direct sales in order to keep as much cash as possible flowing in. Even Dick Heiser, founder of the original Los Angeles Computer Store, broke ties with MITS in late 1977, unable to sustain an Altair-only partnership.[14] Dick Heiser with a customer at The Computer Store in Los Angeles in 1977. Not only is the teen here playing a Star Trek game, but a picture of the ubiquitous starship Enterprise can be seen hanging in the background. [Photo by George Birch, from Benj Edwards, “Inside Computer Stores of the 1970s and 1980s,” July 13, 2022] Given the number of competing computer makers, retailers ultimately had the stronger position in the relationship. Manufacturers who could satisfy the stores’ desire for reliable delivery of stock and robust service and customer support would thrive, while the others withered.[15] But independent dealers faced competition of their own. Chain stores could extract larger volume discounts from manufacturers and build up regional or even national brand recognition. Byte Shop, for example, expanded to fifty locations by March 1978. The most successful chain was ComputerLand, run by the same Bill Millard who had founded IMSAI. Though he later claimed everything was “clean and appropriate,” Millard clearly extracted money and employee time from the declining IMSAI in order to get his new enterprise off the ground. As the company’s chronicler put it, “There was magic in ComputerLand.
Started on just Millard’s $10,000 personal investment, losing $169,000 in its maiden year, the fledgling company required no venture capital or bank loans to get off the ground.” Some small dealers, such as Veit’s Computer Mart, responded by forming a confederacy of independent dealers under a shared front called “XYZ Corporation” that they could use to buy computers with volume discounts.[16] A ComputerLand ad from the February 1978 issue of BYTE. Note that the store offers many of the services that most people could have found only in a club in 1975 or 1976: assistance with assembly, repair, and programming. The Publishers Just like manufacturers, retailers faced their own cash flow risks: outside the holiday season they might suffer from long dry spells without many sales. The early retailers typically solved this by simply not carrying inventory: they took customer orders until they accumulated a batch of ten or so computers from the same manufacturer, then filled all of the orders at once. But a big boon for their cash flow woes came in the form of publications that sold for much less than a computer, but at a much higher and steadier volume, especially the rapidly growing array of computer magazines.[17] BYTE was both the first of the national computer magazines and the most successful. Launched in New Hampshire in the late summer of 1975, by 1978 it had built up a circulation of 140,000 issues per month. It got a head start by cribbing thousands of addresses from the mailing lists of manufacturers such as Nat Wadsworth’s Connecticut-based SCELBI, one of the proto-companies of the pre-Altair era. But, like so much of the hobby computer culture, BYTE also had direct ancestry in the radio electronics hobby.[18] Conflict among the three principal actors has muddled the story of its origins.
Wayne Green, publisher of a radio hobby magazine called 73 in Peterborough, New Hampshire, started printing articles about computers in 1974, and found that they were wildly popular. Virginia Londner Green, his ex-wife, worked at the magazine as a business manager. Carl Helmers, a computer enthusiast in Cambridge, Massachusetts, authored and self-published a newsletter about home computers. One of the Greens learned of Helmers’ newsletter, and one or more of the three came up with the idea of combining Helmers’ computer expertise with the infrastructure and know-how of 73 to launch a professional-quality computer hobby magazine.[19] The cover of BYTE’s September 1976 0.01-centennial issue (i.e., one-year anniversary). The phrase “cyber-crud” and the image of a fist on the shirt of the man at center both come from Ted Nelson’s Computer Lib/Dream Machines. Also, these people really liked Star Trek. Within months, for reasons that remain murky, Wayne Green found himself ousted by his ex-wife, who took over publishing of BYTE, with Helmers as editor. Embittered, Green launched a competing magazine, which he wanted to call Kilobyte, but was forced to change to Kilobaud. Thus began a brief period in which Peterborough, with a population of about 4,000, served as a global hub of computer magazine publishing.[20] Another magazine, Personal Computing, spun off from MITS in Albuquerque. Dave Bunnell, hired as a technical writer, had become so fond of running the company newsletter Computer Notes that he decided to go into publishing on his own. On the West Coast, in addition to the aforementioned Interface Age, there was also Dr.
Dobb’s Journal of Computer Calisthenics and Orthodontia—conceived by Stanford lecturer Dennis Allison and computer evangelist Bob Albrecht (Dennis and Bob making “Dobb”), and edited by the hippie-ish Jim Warren, who drifted into computers after being fired from a position teaching math at a Catholic school for holding (widely-publicized) nude parties. Bunnell (right) with Bill Gates. This photo probably dates to sometime in the early 1980s. Computer books also went through a publishing boom. Adam Osborne, born to British parents in Thailand and trained as a chemical engineer, began writing texts for computer companies after losing his job at Shell Oil in California. When the Altair arrived, it shook him with the same sense of revelation that so many other computer lovers had experienced. He whipped out a new book, Introduction to Microcomputers, and put it out himself when his previous publishers declined to print it. A highly technical text, full of details on Boolean logic and shift registers, it nonetheless sold 20,000 copies within a year to buyers eager for any information to help them understand and use their new machines.[21] The magazines served several roles. They offered up a cornucopia of content to inform and entertain their readers: industry news, software listings, project ideas, product announcements and reviews, and more. One issue of Interface Age even came with a BASIC implementation inscribed onto a vinyl record, ready to be loaded directly into a computer as if from a cassette reader. The magazines also provided manufacturers with a direct advertising and sales channel to thousands of potential buyers—especially important for smaller makers of computers or computer parts and accessories, whose wares were unlikely to be found in your local store. Finally, they became the primary texts through which the culture of the computer hobbyist was established and promulgated.[22] Each of the magazines had its own distinctive character and personality.
BYTE was the magazine for the established hobbyist and tried to cover it all: hardware, software, community news, book reviews, and more. But the hardcore libertarian streak of founding editor Carl Helmers (an avid fan of Ayn Rand) also shone through in the slant of some of its articles. Wayne Green’s Kilobaud, with its spartan cover (title and table of contents only), appealed especially to those with an interest in starting a business to make money off of their interest in computers. The short-lived ROM spoke to the humanist hobbyist, offering longer reports and think-pieces. Dr. Dobb’s had an amateur, free-wheeling aesthetic and tone not far removed from an underground newsletter. In keeping with its origins as a vehicle to publish Tiny BASIC (a free alternative to Microsoft BASIC), it focused on software listings. Creative Computing also had a software bent, but as a pre-Altair magazine designed to target users of BASIC in schools and universities, it took a more lighthearted and less technical tone, while Bunnell’s Personal Computing opened its arms to the beginner, with the message that computing was for everyone.[23] The Mythology of the Microcomputer Running through many of these early publications can be found a common narrative, a mythology of the microcomputer. To dramatize it: Until recently, darkness lay over the world of computing. Computers, a font of intellectual power, had served the interests only of the elite few. They lay solely in the hands of large corporate and government bureaucracies. Worse yet, even within those organizations, an inner circle of priests mediated access to the machine: the ordinary layperson could not be allowed to approach it. Then came the computer hobbyist. A Prometheus, a Martin Luther, and a Thomas Jefferson all wrapped into one, he ripped the computer and the knowledge of how to use it from the hands of the priests, sharing freedom and power with the masses.
The “priesthood” metaphor came from Ted Nelson’s 1974 book, Computer Lib/Dream Machines, but became a powerful means for the post-Altair hobbyist to define himself against what came before. The imagery came to BYTE magazine in an October 1976 article by Mike Wilbur and David Fylstra: The movement towards personalized and individualized computing is an important threat to the aura of mystery that has surrounded the computer for its entire history. Until now, computers were understood by only a select few who were revered almost as befitted the status of priesthood.[24] In this cartoon from Wilbur and Fylstra’s article on the “computer priesthood,” the sinister “HAL” (aka IBM) finds himself chagrined by the spread of hobby computerists. BYTE editor Carl Helmers made the historical connection with the Enlightenment explicit: Personal computing as practiced by large numbers of people will help end the concentration of apparent power in the “in” group of programmers and technicians, just as the enlightenment and renaissance in Europe brought about a much wider understanding beginning in the 14th century.[25] The notion that computing had been jealously guarded by the powerful and kept away from the people can be found as early as June 1975, in the pages of the Homebrew Computer Club newsletter. In the words of club co-founder Fred Moore: The evidence is overwhelming the people want computers… Why did the Big Companies miss this market? They were busy selling overpriced machines to each other (and the government and military). They don’t want to sell directly to the public.[26] In the first collected volume of Dr. Dobb’s Journal, editor Jim Warren sounded the same theme of a transition from exclusivity to democracy in more eloquent language: …I slowly come to believe that the massive information processing power which has traditionally been available only to the rich and powerful in government and large corporations will truly become available to the general public.
And, I see that as having a tremendous democratizing potential, for most assuredly, information–ability to organize and process it–is power. …This is a new and different kind of frontier. We are part of the small cadre of frontiersmen who are exploring this new frontier.[27] Personal Computing editor Dave Bunnell further emphasized the potential for the computer as a political weapon against entrenched bureaucracy: …personal computers have already proliferated beyond most government regulation. People already have them, just like (pardon the analogy) people already have hand guns. If you have a computer, use it. It is your equalizer. It is a way to organize and fight back against the impersonal institutions and the catch-22 regulations of modern society.[28] The journalists and social scientists who began to write the first studies of the personal computer in the mid-1980s lapped up this narrative, which provided a heroic framing for the protagonists of their stories. They gave it new life and a much broader audience in books like Silicon Valley Fever (“Until the mid-1970s when the microcomputer burst on the American scene, computers were owned and operated by the establishment–government, big corporations, and other large institutions”) and Fire in the Valley (“Programmers, technicians, and engineers who worked with large computers all had the feeling of being ‘locked out’ of the machine room… there also developed a ‘computer priesthood’… The Altair from MITS breached the machine room door…”)[29] This way of telling the history of the hobby computer gave deeper meaning to a pursuit that looked frivolous on the surface: paying thousands of dollars for a machine to play Star Trek. And, like most myths, it contained elements of truth. There was a large installed base of batch-processing systems, surrounded by a contingent of programmers denied direct access to the machine.
Between the two there did stand a group of technicians whose relation to the computer was not unlike the relation of the pre-Vatican II priest to the Eucharist. But in promoting this myth, the computer hobbyists denied their own parentage, obscuring the time-sharing and minicomputer cultures that had made the hobby computer possible and from which it had borrowed most of its ideas. The Altair was not an ex nihilo response to an oppressive IBM batch-processing culture that had made access to computers impossible. The announcement of Altair had called it the “world’s first minicomputer kit”: it was the fulfillment of the dream of owning your own minicomputer, a type of computer most of its buyers had already used. It could not have been successful if thousands of people hadn’t already gotten hooked on the experience of interacting directly with a time-sharing system or minicomputer. This self-confident hobby computer culture, however—with its clubs, its local shops, its magazines, and its myths—would soon be subsumed by a larger phenomenon. From this point forward, no longer will nearly every major character in the story of the personal computer have a background in hobby electronics or ham radio. No longer will nearly all the computer makers and buyers alike be computer lovers who found their passion on mainframe, minicomputer, or time-sharing systems. In 1977, the personal computer entered a new phase of growth, led by a new class of businessmen who targeted the mass market.

The Era of Fragmentation, Part 3: The Statists

In the spring of 1981, after several smaller trials, the French telecommunications administration (Direction générale des Télécommunications, or DGT) began a large-scale videotex experiment in a region of Brittany called Ille-et-Vilaine, named after its two main rivers. This was the prelude to the full launch of the system across l’Hexagone in the following year. The DGT called their new system Télétel, but before long everyone was calling it Minitel, a synecdoche derived from the name of the lovable little terminals that were distributed free of charge, by the hundreds of thousands, to French telephone subscribers. Among all the consumer-facing information service systems of this “era of fragmentation,” Minitel deserves our special attention, and thus its own chapter in this series, for three particular reasons. First, the motive for its creation. Other post, telephone, and telegraph authorities (PTTs) built videotex systems, but no other state invested as heavily in making it a success, nor gave so much strategic weight to that success. Entangled with hopes for a French economic and strategic renaissance, Minitel was meant not just to produce new telecom revenues or generate more network traffic, but to prime the pump for the entire French technology sector. Second, the extent of its reach. The DGT provided Minitel terminals to subscribers free of charge, and levied all charges at time of use rather than requiring an up-front subscription. This meant that, although many of them used the system infrequently, more people had access to Minitel than to even the largest American on-line services of the 1980s, despite France’s much smaller population. The comparison to its nearest direct equivalent, Britain’s Prestel, which never broke 100,000 subscribers, is even more stark. Finally, there is the architecture of its backend systems. Every other commercial purveyor of digital services was a monolith, with all services hosted on its own machines.
While they may have collectively formed a competitive market, each of their systems was structured internally as a command economy. Minitel, despite being the product of a state monopoly, was ironically the only system of the 1980s that created a free market for information services. The DGT, acting as an information broker rather than an information supplier, provided one possible model for exiting the era of fragmentation. Playing Catch Up It was not by happenstance that the Minitel experiments began in Brittany. In the decades after World War II, the French government had deliberately seeded the region, whose economy still relied heavily upon agriculture and fishing, with an electronics and telecommunications industry. This included two major telecom research labs: the Centre Commun d’Études de Télévision et Télécommunications (CCETT) in Rennes, the region’s capital, and a branch of the Centre National d’Études des Télécommunications (CNET) in Lannion, on the northern coast. The CCETT lab in Rennes Themselves the product of an effort to bring a lagging region into the modern era, these research departments found themselves by the late 1960s and early 1970s playing catch up with their peers in other countries. The French phone network of the late 1960s was an embarrassment for a country that, under de Gaulle, wished to see itself as a resurgent world power. It still relied heavily on switching infrastructure built in the first decades of the century, and only 75% of the network was automated by 1967. The rest still depended on manual operators, which had been all but eliminated in the U.S. and the rest of Western Europe. There were only thirteen phones for every 100 inhabitants of France, compared to twenty-one in neighboring Britain, and nearly fifty in the countries with the most advanced telecommunications systems, Sweden and the U.S. France therefore began a massive investment program of rattrapage, or “catch up,” in the 1970s.
Rattrapage ramped up steeply after the 1974 election of Valéry Giscard d’Estaing to the presidency of France, and his appointment of a new director for the DGT, Gérard Théry. Both were graduates of France’s top engineering school, l’École Polytechnique, and both believed in the power of technology to improve society. Théry set about making the DGT’s bureaucracy more flexible and responsive, and Giscard secured 100 billion francs in funding from Parliament for modernizing the telephone network, money that paid for the installation of millions more phones and the replacement of old hardware with computerized digital switches. Thus France dispelled its reputation as a sad laggard in telephony. But in the meantime new technologies had appeared in other nations that took telecommunications in new directions – videophone, fax, and the fusion of computer services with communication networks. The DGT wanted to ride the crest of this new wave, rather than having to play catch up again. In the early 1970s, Britain announced two separate teletext systems, which would deliver rotating screens of data to television sets in the blanking intervals of television broadcasts. CCETT, DGT’s joint venture with France’s television broadcaster, the Office de radiodiffusion-télévision française (ORTF), launched two projects in response. DIDON1 was modeled closely on the British television broadcasting model, but ANTIOPE2 took a more ambitious tack, to investigate the delivery of screens of text independently of the communications channel. Bernard Marti in 2007 Bernard Marti headed the ANTIOPE team in Rennes. He was yet another polytechnicien (class of 1963), and had joined CCETT from ORTF, where he specialized in computer animation and digital television. In 1977, Marti’s team merged the ANTIOPE display technology with ideas borrowed from CNET’s TIC-TAC3, a system for delivering interactive digital services over the telephone.
This fusion, dubbed TITAN4, was basically equivalent to the British Viewdata system that later evolved into Prestel. Like ANTIOPE it used a television to display screens of digital information, but it allowed users to interact with the computer rather than merely receiving data passively. Moreover, both the commands to the computer and the screen data it returned passed over a telephone line, not over the air. Unlike Viewdata, TITAN supported a full alphabetic keyboard, not just a telephone keypad. In order to demonstrate the system at a Berlin trade fair, the team used France’s Transpac packet-switching network to mediate between the terminals and the CCETT computer in Rennes. Théry’s lab had assembled an impressive tech demo, but as of yet none of it had left the lab, and it had no obvious path to public use. Télématique In the fall of 1977, DGT director Gérard Théry, satisfied with how the modernization of the phone network was progressing, turned his attention to the British challenge in videotex. To develop a strategic response, he first looked to CCETT and CNET, where he found TITAN and TIC-TAC prototypes ready to be put to use. He turned these experimental raw materials over to his development office (the DAII) to be molded into products with a clear path to market and a business strategy. The DAII recommended pursuing two projects: first, a videotex experiment to test out a variety of services in a town near Versailles, and second, investment in an electronic phone directory, intended to replace the paper phone book. Both would use Transpac as the networking backbone, and TITAN technology for the frontend, with color imagery, character-based graphics, and a full keyboard for input. An early experimental Télétel setup, before the idea of using the TV as the display was abandoned. The strategy the DAII devised for videotex differed from Britain’s in three important ways.
First, whereas Prestel hosted all of its videotex content itself, the DGT planned to serve only as a switchboard from which users could reach any number of different privately-hosted service providers, running any type of computer that could connect to Transpac and serve valid ANTIOPE data. Second, they decided to abandon the television as the display unit and go with custom, all-in-one terminals. People bought TVs to watch TV, the DGT leadership reasoned, and would not want to tie up their screens with new services like the electronic phone book. Moreover, cutting the TV set out of the picture meant that the DGT would not have to negotiate over the launch with their counterparts at Télédiffusion de France (TDF), the successor to the ORTF5. Finally, and most audaciously, France cracked the chicken-and-egg problem (that a network without users was unattractive to service providers, and vice versa) by planning to lease those all-in-one videotex terminals free of charge.

Despite these bold plans, however, videotex remained a second-tier priority for Théry. When it came to ensuring the DGT’s place at the forefront of communications technology, his focus was on developing the fax into a nationwide consumer service. He believed that fax messaging could take over a huge portion of the market for written communication from the post office, whose bureaucrats the DGT looked upon as hidebound fuddy-duddies.

Théry’s priorities changed within months, however, with the completion of a government report in early 1978 entitled The Computerization of Society. Released to bookstores in a paperback edition in May, it sold 13,500 copies in its first month, and a total of 125,000 copies over the following decade – quite a blockbuster for a government report.6 How did such a seemingly recondite topic engender such excitement?
The authors, Simon Nora and Alain Minc, officers in the General Inspectorate of Finance, had been asked to write the report by the Giscard government in order to consider the threat and the opportunity presented by the growing economic and cultural significance of the computer. By the mid-1970s, it was becoming clear to most technically-minded intellectuals that computing power could, and likely would, be democratized, brought to the masses in the form of new computer-mediated services. Yet for decades the United States had led the way in all forms of digital technology, and American firms held a seemingly unassailable grip on the market for computer hardware. The leaders of France considered the democratization of computers a huge opportunity for French society, yet they did not want to see France become a dependent satellite of a dominating foreign power.

Nora and Minc’s report presented a synthesis that resolved this tension, proposing a project that would catapult France into the post-modern age of information. The nation would go directly from trailing the pack in computing to leading it, by building the first national infrastructure for digital services – computing centers, databases, standardized networks – all of which would serve as the substrate for an open, democratic marketplace in digital services. This would, in turn, stimulate native French expertise and industrial capacity in computer hardware, software, and networking. Nora and Minc called this confluence of computers and communications télématique, a fusion of telecommunications and informatique (the French word for computing or computer science). “Until recently,” they wrote, “computing… remained the privilege of the large and the powerful. It is mass computing that will come to the fore from now on, irrigating society, as electricity did. La télématique, however, in contrast to electricity, will not transmit an inert current, but information, that is to say, power.”
The Nora-Minc report, and the resonance it had within the Giscard government, put the effort to commercialize TITAN in a whole new light. Before the report, the DGT’s videotex strategy had been a response to their British rivals, intended to avoid being caught unprepared and forced to operate under a British technical standard for videotex. Had it remained only that, France’s videotex efforts might well have languished, ending up much like Prestel: a niche service for a few curious early adopters and a handful of business sectors that found it useful. After Nora-Minc, however, videotex could only be construed as a central component of télématique, the basis for building a new future for the whole French nation, and it would receive more attention and investment than it might otherwise ever have hoped for.

The effort to launch Minitel on a grand scale thus gained backing from the French state that might otherwise never have materialized (as indeed it never did for Théry’s plans for a national fax service, which dwindled to a mere Minitel printer accessory). This support included the funding to provide millions of terminals to the populace, free of charge. The DGT argued that the cost of the terminals would be offset by the savings from no longer printing and distributing the phone book, and by the new network traffic stimulated by the Minitel service. Whether they sincerely believed this or not, it provided at least a fig leaf of commercial rationale for a massive industrial stimulus program, starting with Alcatel (paid billions of francs to manufacture terminals) and running downstream to the Transpac network, Minitel service providers, the computers purchased by those providers, and the software services required to run an on-line business.

Man in the Middle

In purely commercial terms, Minitel did not in fact contribute much to the DGT’s bottom line.
It first achieved profitability on an annual basis in 1989, and if it ever achieved overall net profitability, it was not until well into its slow but terminal decline in the later 1990s. Nor did it achieve Nora and Minc’s aspiration to create an information-driven renaissance of French industry and society. Alcatel and other makers of telecom equipment did benefit from the contracts to build terminals, and the French Transpac network benefited from a large increase in traffic – though, unfortunately, with the X.25 protocol they turned out to have bet on the wrong packet-switching technology in the long term. The thousands of Minitel service providers, however, mostly got their hardware and systems software from American providers. The techies who set up their own online services eschewed both the French national champion, Bull, and the dreaded giant of enterprise sales, IBM, in favor of scrappy Unix boxes from the likes of Texas Instruments and Hewlett-Packard.

So much for Minitel as industrial policy; what about its role in invigorating French society with new information services, which would reach democratically into both the most elite arrondissements of Paris and the plus petit village of Picardy? Here it achieved rather more, though still mixed, success. The Minitel system grew rapidly, from about 120,000 terminals at its initial large-scale deployment in 1983, to over 3 million in 1987 and 5.6 million in 1990.7 However, with the exception of the first few minutes of the electronic phone book, actually using those terminals cost money on a minute-by-minute basis, and there’s no doubt that usage was distributed much more unequally than the equipment. The most heavily used services, the online chat rooms, could easily burn hours of call time in an evening, at a base rate of 60 francs per hour (equivalent to about $8, more than double the U.S. minimum wage at the time).
Nonetheless, nearly 30 percent of French citizens had access to a Minitel terminal at home or work in 1990. France was undoubtedly the most online country (if I may use that awkward adjective) in the world at that time. In that same year, the two largest online services in the United States, that colossus of computer technology, totaled just over a million subscribers, in a population of 250 million.8 And the catalog of services that one could dial into grew as rapidly as the number of terminals – from 142 in 1983 to 7,000 in 1987 and nearly 15,000 in 1990. Ironically, a paper directory was needed to index all of the services available on this terminal that was intended to supplant the phone book. By the late 1980s that directory, Listel, ran to 650 pages.9

A man using a Minitel terminal

Beyond the DGT-provided phone directory, services ran the gamut from commercial to social, and covered many of the major categories we still associate today with being online – shopping and banking, travel booking, chat rooms, message boards, games. To connect to a service, a Minitel user would dial an access number, most often 3615, which connected their phone line to a special computer in their local telephone switching office called a point d’accès vidéotexte, or PAVI. Once connected to the PAVI, the user could then enter a further code to indicate which Minitel service they wished to reach. Companies plastered their access codes in mnemonic alphabetic form onto posters and billboards, much as they would do with website URLs in later decades: 3615 TMK, 3615 SM, 3615 ULLA. The 3615 code connected users to the PAVI’s “kiosk” billing system, introduced in 1984, which allowed Minitel to operate much like a news kiosk, offering a variety of wares for sale from different vendors, all from a single convenient location.
Of the sixty francs charged per hour for basic kiosk services, forty went to the service itself, and twenty to the DGT to pay for the use of the PAVI and the Transpac network. All of this was entirely transparent to the user; the charges appeared automatically on their next telephone bill, and they never needed to provide payment information or establish a financial relationship with the service provider.

As access to the open internet began to spread in the 1990s, it became popular for the cognoscenti to retrospectively deprecate the online services of the era of fragmentation – the CompuServes, the AOLs – as “walled gardens”10. The implied contrast in the metaphor is to the freedom of the open wilderness: if CompuServe is a carefully cultivated plot of land, the internet, from this point of view, is Nature itself. Of course the internet is no more natural than CompuServe, nor than Minitel. There is more than one way to architect an online service, and all of them are based on human choices. But if we stick to this metaphor of the natural versus the cultivated, Minitel sits somewhere in between. We might compare it to a national park: its boundaries are controlled, regulated, and tolled, but within them one can wander freely and visit whichever wonders strike one’s interest.

The DGT’s position in the middle of the market between user and service, with a monopoly on the user’s entry point and the entire communications pathway between the two parties, offered advantages over both the monolithic, all-inclusive service providers like CompuServe and the more open architecture of the later Internet. Unlike the former, once past the initial choke point, the system opened out into a free market of services unlike anything else available at the time. Unlike the latter, there was no monetization problem: the user paid automatically for computer time used, avoiding the need for the bloated and intrusive edifice of ad-tech that supports the bulk of the modern Internet.
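The arithmetic of the kiosk model can be sketched in a few lines of Python. This is purely illustrative (the function and constant names are my own invention, not anything from a real Minitel system); only the figures themselves, sixty francs per hour split forty/twenty between service provider and DGT, come from the historical record described above:

```python
# Illustrative sketch of Minitel's "kiosk" billing split.
# Names are hypothetical; the 60/40/20-franc figures are historical.

BASE_RATE_PER_HOUR = 60.0    # francs per hour, basic kiosk tier
SERVICE_SHARE = 40.0 / 60.0  # portion passed on to the service provider
DGT_SHARE = 20.0 / 60.0      # portion kept by the DGT for PAVI/Transpac

def kiosk_bill(minutes_connected: float) -> dict:
    """Return the total charge for one session and its split, in francs.

    Charges were metered by the minute and added to the subscriber's
    next telephone bill, so no separate payment was ever needed.
    """
    total = BASE_RATE_PER_HOUR * minutes_connected / 60.0
    return {
        "total": round(total, 2),
        "service_provider": round(total * SERVICE_SHARE, 2),
        "dgt": round(total * DGT_SHARE, 2),
    }

# A two-hour evening in a chat room at the base rate:
print(kiosk_bill(120))  # {'total': 120.0, 'service_provider': 80.0, 'dgt': 40.0}
```

Note how the split makes the DGT a toll collector on every minute of traffic, which is precisely why it could afford to give the terminals away.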
Minitel also offered a secure end-to-end connection. Every bit traveled only over DGT hardware, so as long as you trusted both the DGT and the service to which you were connected, your communications were safe from attackers.

This system also had some obvious disadvantages compared to the Internet that succeeded it, however. For all its relative openness, one could not just turn on a server, connect it to the net, and be open for business: it required government pre-approval to make a server accessible via a PAVI. More fatally, Minitel’s technical structure was terribly rigid, tied to a videotex protocol that, while advanced for the mid-1980s, appeared dated and extremely restrictive within a decade.11 It supported pages of text, in twenty-four rows of forty characters each (with primitive character-based graphics), and nothing more. None of the characteristic features of the mid-1990s World Wide Web – free-scrolling text, GIFs and JPEGs, streaming audio, etc. – were possible on Minitel.

Minitel offered a potential road out of the era of fragmentation, but, outside of France, it was a road not taken. The DGT, privatized as France Télécom in 1988, made a number of efforts to export the Minitel technology – to Belgium, Ireland, and even the U.S. (via a system in San Francisco called 101 Online). But without the state-funded stimulus of free terminals, none of them had anything like the success of the original. And with France Télécom, and most other PTTs around the world, now expected to fend for themselves as lean businesses in a competitive international market, the era when such a stimulus was politically viable had passed. Though the Minitel system did not finally cease operation until 2012, usage went into decline from the mid-1990s onward.
In its twilight years it still remained relatively popular for banking and financial services, due to the security of the network and the availability of terminals with an accessory that could securely read and transmit data from banking and credit cards. Otherwise, French online enthusiasts increasingly turned to the Internet. But before we return to that system’s story, we have one last stop on our tour of the era of fragmentation.

Further Reading

Julien Mailland and Kevin Driscoll, Minitel: Welcome to the Internet (2017)

Marie Marchand, The Minitel Saga (1988)
