High Pressure, Part 2: The First Steam Railway

Railways long predate the steam locomotive. Trackways with grooves to keep a wheeled cart on a fixed path date back to antiquity (such as the Diolkos, which could carry a naval vessel across the Isthmus of Corinth on a wheeled truck). The earliest evidence for carts running atop wooden rails, though, comes from the mining districts of sixteenth-century Europe. Agricola describes a kind of primitive railway used by German miners in his 1556 treatise De Re Metallica. He reports that the miners ran trucks called Hunds (“dogs,” supposedly because of the barking noise they made while in motion) over two parallel wooden planks. A metal pin protruding down from the truck into the gap between the planks kept it from rolling off the track.[1] This system allowed a laborer to carry far more material out of the mine in a single trip than they could by carrying it themselves.

British Railways

Wooden railways called “waggon ways” are first attested in the coal-mining areas of Britain around 1600. These differed in two important ways from earlier mining carts: first, they ran outside the mine, carrying coal a short distance (perhaps a mile or two) to the nearest high-quality road or navigable waterway from which it could be brought to market. Second, they were drawn by horses, at least on the uphill courses—on some eighteenth-century waggon ways, the horse actually caught a ride downhill, standing on a flat carriage behind the cart. Flanged wheels to keep the wagon on the track were also probably introduced around this time. Both wheels and rails were still constructed of wood, however, which limited the load the wagons could carry.[2]

By the middle of the eighteenth century, waggon ways crisscrossed the mining districts of northern England, especially around the coalfields, creating a substantial trade in birch wheels and rails of beech or ash from the South. They were called by many different names, such as “gangways,” “plateways,” “tramways,” or “tramroads.” Colliers invested sophisticated engineering into their design, using bridges, causeways, and tunnels to create a smooth grade from the pithead to the point of embarkation (such as the Tyne or the Severn rivers).[3] Most were no more than a mile or two long, but some ran as far as ten miles. They were smooth enough that a single horse could haul several times on rails what it could on an ordinary eighteenth-century road: the figures given by various sources for the load of a horse-drawn rail carriage range from two to ten tons, likely depending on the grade of the railway and the material composition of the rails and wheels.[4]
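
For a rough sense of scale, the sketch below compares what one horse could move on each surface. The pull and resistance figures are illustrative modern estimates of my choosing, not values from the sources cited in this post, but they show why a smoother running surface multiplied a horse's load so dramatically.

```python
# Back-of-the-envelope comparison of horse haulage on different surfaces.
# The pull and resistance figures are illustrative estimates only, not
# drawn from the historical sources cited in this post.

PULL_LBS = 120        # sustained pull of one draft horse, roughly
LBS_PER_TON = 2240    # long (British) tons

# resistance to motion, as a fraction of the load's weight
resistance = {
    "ordinary road": 1 / 15,    # soft, rutted eighteenth-century road
    "iron railway":  1 / 100,   # smooth wheels on smooth rails
    "canal barge":   1 / 400,   # hull drag at walking pace
}

for surface, r in resistance.items():
    load_tons = PULL_LBS / r / LBS_PER_TON
    print(f"{surface:14s}: about {load_tons:4.1f} tons per horse")
# ordinary road : about  0.8 tons per horse
# iron railway  : about  5.4 tons per horse
# canal barge   : about 21.4 tons per horse
```

On these rough assumptions, a single horse moves several tons on rails and several times more again on water, consistent with the two-to-ten-ton range reported in the sources.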

The Little Eaton Gangway, a railway built in the 1790s that, incredibly, continued to operate until 1908, when this photo was taken. It carried coal five miles down to the Derby Canal.

This close-up of the Little Eaton Gangway shows clearly the design of the railbed, with L-shaped rails to hold the wagon on the track, and stone blocks underneath to which they were nailed. The Penydarren railway, discussed below, had the same design.

This may seem prologue enough, but two further milestones in the development of railways still intervened before the steam locomotive came into the picture. Around the late 1760s, the Darbys of Coalbrookdale step into our history once more. They are reputed to have been the first to introduce durable cast iron plates to strengthen the rails that they used to carry materials among their various Shropshire properties.[5] Later the Darbys and others introduced fully cast-iron rails, doing away with wood altogether. With this change in material the railways of England (already intimately linked with coal mining) now became fully enmeshed in the cycle of the triumvirate—coal, iron, and steam—well before they became steam-powered.

Then came the first public horse-drawn railway: the Surrey Iron Railway, first proposed in 1799 and opened in 1803. Up to this time, all railways served the needs of a single owner (though some required an easement across neighboring properties), typically a mining concern. The Surrey Iron Railway, by contrast, which ran from Croydon (south of London) up to the Thames at Wandsworth, was open to any paying cargo, much like a turnpike road or a canal. Among its backers was a Midlands colliery owner, William James, who will have an important part to play later in our story.[6]

So, although we think of them now as two components of a single technological system, the locomotive and the railway did not start out that way. Instead, the locomotive appeared on the scene as an alternative way of hauling freight over an already familiar and well-established transportation medium.

Trevithick

Richard Trevithick was the first Englishman to attempt this substitution. He was born in 1771, in the heart of the copper-mining region of Cornwall. His birthplace, the village of Illogan, sat beneath the weathered hill of Carn Brea, said to be the ancient dwelling place of a giant.[7] But the only giants still found upon the landscape of eighteenth-century Cornwall breathed steam. They sheltered in the stone engine houses that still dot the countryside today, and raised water from the bottom of the mine, allowing the proprietors to delve ever deeper into the earth. Trevithick’s father was a mine “captain,” a high-status position with the responsibilities of a general manager and some of the same cachet among the mining community as a sea captain would have in a nautical community. This included the privilege of an honorific title: he was “Captain Trevithick” to his neighbors. The elder Trevithick’s work included serving as mine engineer and assayer, and he would have been familiar with all the technical workings of the mine, from the digging equipment to the pumping engine.

The younger Trevithick must have learned well from his father. At fifteen, he was employed by his father at Dolcoath, the most lucrative copper mine of the region. By age 21 he had grown into something of a giant himself—standing a burly six feet two, his pastimes were said to include hurling sledgehammers over buildings—and the miners of Cornwall were already consulting him for his expertise on steam engines.[8]

A portrait of Trevithick by John Linnell, painted in 1816, when he was 45. He gestures to the Andes of Peru in the background, where Trevithick intended, at the time, to make his fortune in silver mining. (Science Museum, London)

By the 1790s, Boulton and Watt were about as popular in Cornwall as Fulton and Livingston were in the American West, and for the same reason: they were seen as grasping monopolists who kept the miners of Cornwall, who depended on effective pumps for their livelihood, in thrall to the Watt patent. Fifteen years earlier, Watt’s efficient engines had appeared as a lifeline to copper mines suffering under competition from the prodigious Parys Mountain in Anglesey, whose ample ores could be cheaply mined directly from the surface.[9] But as the mines continued to struggle, Boulton and Watt began to take shares in mines in lieu of payment, and set up a headquarters at Cusgarne, right in the copper district, to oversee their investments. One of their most skilled mechanics, William Murdoch, moved to Cornwall and acted as their local agent. To the copper miners, Boulton and Watt began to look like meddlers as well as leeches. During the 1790s, however, Anglesey ran out of easy-to-reach ore, and the fortunes of the Cornwall copper mines began to look up. With their mutual enemy gone, the grudging partnership between the Cornish miners and Boulton and Watt soured rapidly.

An 1831 engraving of Dolcoath copper mine, Camborne, Cornwall. (Hulton Archive/Getty Images)

Trevithick, a hot-headed young man, took up the banner of revolution against the Boulton and Watt regime in 1792, fighting a series of legal battles on behalf of the competing engine design of Edward Bull. By 1796 every battle had been lost—Bull and Trevithick’s attempt to defy the Watt patent had failed, and there seemed to be nothing for the Cornwall interests to do but wait for the expiration of its term, in 1800.[10]

But Trevithick found another way forward: strong steam. More than any other element, the separate condenser distinguished Watt’s patent engine from its predecessors. By shedding the condenser and operating well above atmospheric pressure instead, Trevithick could avoid claims of infringement. Concerned that releasing uncondensed steam would waste all the power of the engine, he consulted Cornwall’s resident mathematician, Davies Giddy. Giddy reassured him that he would waste only a fixed amount of power, equal to the weight of the atmosphere against which the exhaust had to push, and would gain some compensation in return by saving the power required to work an air pump and lift water into the condenser.[11] As in the U.S., then, the socioeconomic environment pushed steam engine users on the periphery toward high pressure, though in this case it was the presence of a rival patent rather than an absence of capital resources.
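
Giddy’s reassurance amounts to a simple piece of arithmetic: the back-pressure a non-condensing engine gives up is fixed at roughly one atmosphere, so the higher the boiler pressure, the smaller that loss looms. A rough illustration follows; the boiler pressures are round numbers chosen for the example, not figures from Trevithick or Giddy.

```python
# Why "strong steam" made the condenser dispensable: exhausting to the
# atmosphere costs a fixed back-pressure of ~14.7 psi, so its share of
# the total shrinks as boiler pressure rises. Illustrative numbers only.

ATMOSPHERE_PSI = 14.7

for boiler_psi in (20, 50, 100, 145):
    lost_share = ATMOSPHERE_PSI / boiler_psi
    print(f"boiler at {boiler_psi:3d} psi: "
          f"{boiler_psi - ATMOSPHERE_PSI:6.1f} psi left to do work, "
          f"{lost_share:4.0%} lost to the atmosphere")
# At 20 psi the atmosphere eats ~74% of the push; at 145 psi, only ~10%.
```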

Trevithick saw an immediate application for high-pressure steam as a replacement for the horse whim, an animal-powered lift which worked alongside the pumping engine in many Cornish mines, usually in the same vertical shaft, to raise ore and dross from below. A few whims had been installed with Watt engines, but Trevithick’s “puffers” (so called for the visible puff of exhaust steam they released) cost less to build and transport. The compact high-pressure engine also fit much more comfortably in the engine house alongside the pumping engine than a second Watt behemoth would. 

An 1806 Trevithick stationary steam engine, minus the flywheel it would have had at the time to maintain a steady motion. Note how the exhaust flue comes out of the middle of the cylindrical boiler, the same return-flue design used by Evans to extract additional heat from the hot gases of the furnace.

Trevithick’s engines thus began replacing horse whims in engine houses across Cornwall in the early 1800s.[12] The Watt interests were not happy: much later in life Trevithick claimed that Watt (probably referring in this case to the belligerent James Watt, Jr., the inventor’s son), “said to an eminent scientific character still living that I deserved hanging for bringing into use the high pressure,” presumably because of the danger of explosion.[13] One of Trevithick’s boilers, installed to drain the foundation for a corn mill in Greenwich, did in fact explode in 1803 when left unattended, and the Watts did not miss the opportunity to get in their “I told you sos” in the press.[14] In future engines Trevithick would include two safety valves, plus a plug soldered with lead as a final safety measure: if the water level fell too low, the heat would melt the solder and blow out the plug, relieving excess pressure.

But Trevithick’s interest had by this time already wandered from staid industrial applications to the more romantic dream of a steam carriage.

Steam Carriage

As we have seen already several times in this story, many inventors and philosophers had dreamed the same dream, dating back well over a century. To realize how readily available the idea of a steam carriage was, we must remember that steam power’s job, in a sense, had always been to replace either horse- or water-power, and that carriages were the most ubiquitous piece of horse-powered machinery in early modern Europe.

The first person we know of to successfully build a steam carriage (if we construe success loosely) was a French army officer named Nicolas-Joseph Cugnot. More specifically, he built a steam fardier, a cart for pulling cannon. It was a curious-looking tricycle with the boiler hanging off the front like an elephantine proboscis. Cugnot carried out some trial runs of his vehicle in 1769, but with no way to refill the boiler while in use, it had to stop every fifteen minutes to let the boiler cool, refill it, and work up steam once more. This was a curiosity without real practical value.[15]

Cugnot’s Fardier à Vapeur, preserved at the Musée des Arts et Métiers in Paris.

Trevithick probably never heard of Cugnot, but he certainly knew William Murdoch, Watt’s representative in Cornwall. Murdoch began experimenting with high-pressure steam carriages in the 1780s, and built a three-wheeled carriage that (like Cugnot’s cart) survives today in a museum. Unlike Cugnot’s vehicle, however, Murdoch’s surviving machine is a model, no more than a foot tall. Lacking the backing of his employers, who disliked strong steam and found the carriage concept unpromising if not ridiculous, Murdoch never got even as far as Cugnot did: there is no evidence that he ever built a full-sized carriage.[16]

Murdoch’s model steam carriage.

It’s unclear why Trevithick decided to build a steam-powered vehicle—he may have been trying to develop a portable engine that could be moved between work sites under its own power. It is possible that Trevithick got the idea for a steam carriage from Murdoch, but, as we have seen, the idea was commonplace. In the execution of that idea, Trevithick went far beyond his predecessor.

He began work on his steam carriage in late 1800, with the help of his cousin Andrew Vivian and several other local craftsmen. He already had in hand his high-pressure engine design, with a very favorable power-to-weight ratio compared to a Watt engine. A small and light engine was advantageous in a steamboat, but it was crucial in a land vehicle that had to rest on wheels and fit on narrow roads. He used the same return-flue boiler design as Oliver Evans had; given the distance and timing, they almost certainly arrived at this idea independently.

Many wise men of the time doubted that a self-driving wheel was even possible, arguing that it would simply spin in place without an animal with traction to pull it. Trevithick therefore felt it necessary first to disprove this theory (in an experiment probably devised by Giddy) by sitting in a chaise with his compatriots and moving the vehicle by turning the wheels with their hands.[17]

In December 1801 they went for their first steam-powered ride. What exactly the first carriage looked like is unknown, but it was likely a simple wheeled platform with engine and boiler mounted atop it and a crude lever for steering. Years later one “old Stephen Williams” (not so old at the time) would recall:

I was a cooper by trade, and when Captain Dick [Trevithick] was making his first-steam carriage I used to go every day into John Tyack’s blacksmiths’ shop at the Weith, close by here, where they were putting it together. …In the year of 1801, upon Christmas-eve, coming on evening, Captain Dick got up steam, out in the high road… we jumped up as many as could; may be seven or eight of us. ‘Twas a stiffish hill going from the Weith up to Cambourne Beacon, but she went off like a little bird.[18]

Within days, this first carriage quite literally crashed and burned (though the burning was apparently caused by leaving the carriage unattended with the firebox lit, not by the crash itself).[19] Nonetheless, Trevithick formed a partnership with his cousin Vivian to develop both the high-pressure engine and its use in carriages, and they went to London to seek a patent and additional backers and advisers, including such scientific luminaries as Humphry Davy and Count Rumford.

They had a second carriage built, this one designed as a true passenger vehicle with a compartment to accommodate eight. Giddy nicknamed it “Trevithick’s Dragon.” It worked better than the first attempt, running a good eight miles per hour on level ground, but the ride was rough. For some decades, steel spring suspensions had been standard on carriages, but the direct geared linkage between the drive wheels and the engine on Trevithick’s carriage did not allow them to move independently.[20] The steering mechanism also worked poorly. In one early trial Trevithick tore the railing from a garden wall, and Vivian’s relative Captain Joseph Vivian (actually a sea captain) reported after a drive that he “thought he was more likely to suffer shipwreck on the steam-carriage than on board his vessel…”[21] It offered no obvious advantages over a horse carriage to offset the loss of comfort and control, not to mention the risk of fire and explosion. The Dragon attracted some curious onlookers, but no investors.

Steam Railway

If steam-powered vehicles on water found success first in the U.S. because alternative modes of inland transportation were lacking, steam-powered vehicles on land found success first in Britain because the transportation medium to support them already existed. The railways offered the perfect solution for the problems of Trevithick’s steam carriage: a road without cobbles or ruts to jounce on, a road that steered the carriage for you, and a road with no passengers to annoy or endanger. But Trevithick was not positioned to see it, because Cornwall did not have railways of any kind (its first, the Portreath Tramroad, was not constructed until 1812). It would take a new connection to link the engine born out of the struggle with Watt over the mines of Cornwall to the rails created to solve the problems of northern coalfields.

On business in Bristol in 1803, Trevithick made that connection when he met a Welsh ironmaster named Samuel Homfray, who provided him with fresh capital in exchange for a share in his patent, and solicited his aid in building steam engines for his ironworks, called Penydarren. It happened that Homfray also had part ownership of a railway, and the opportunity thus arose to marry high-pressure steam to rails.

For Homfray this was also an opportunity to show up a rival. He and several other ironmasters had invested in a canal to carry their wares down to the port at Cardiff, but the controlling partner, Richard Crawshay, demanded exclusive privileges over the waterway. Homfray and several of the other partners exploited a loophole to bypass Crawshay. At the time, any public thoroughfare (on land or water) required an act of Parliament to approve its construction. The act approving the Cardiff canal also allowed for the construction of railways within four miles of the canal.

The intent of this was to allow for feeder lines. Rails, at the time, were a strictly secondary transportation system. They provided “last-mile” service from mining centers to a navigable waterway. A boom in canal building that began in the later eighteenth century extended and interconnected those waterways, which offered far lower transportation costs than any form of land transportation. If a horse could pull several times the weight on a railway that it could on an ordinary road, it could pull several times more again when hitched to a canal barge.[22] (The plummeting transportation costs brought about by the ability to float cargo to the coast from nearly any town in England by horse-drawn barge account for the lack of British interest in riverine steamboats.) So the goal was almost always to get goods to water as quickly as possible.

The trick that Homfray and his allies pulled was to build a railway as a primary transportation link in its own right, paralleling the canal for over nine miles rather than connecting directly to it, and thereby neutering Crawshay’s privileges.[23] It was on this railway that Homfray (or perhaps Trevithick; which partner initiated the idea is unknown) proposed to replace horse power with steam power. Crawshay found the concept laughable. Like many of his contemporaries, he believed that the smooth wheels would find no purchase on smooth rails, and would simply spin in place. The ironmasters placed a not-so-friendly wager of 500 guineas over whether Trevithick could build a locomotive to haul ten tons of iron the length of the railway. On February 21st, 1804, Crawshay lost. As Trevithick reported to Giddy:

Yesterday we proceeded on our journey with the engine; we carry’d ten tons of Iron, five waggons, and 70 Men riding on them the whole of the journey. Its above 9 miles which we perform’d in 4 hours & 5 Mints, but we had to cut down som trees and remove some Large rocks out of road. The engine, while working, went nearly 5 miles pr hour; …We shall continue to work on the road, and shall take forty tons the next journey. The publick untill now call’d mee a schemeing fellow but now their tone is much alter’d.[24]
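
The figures in the letter are worth a quick sanity check (taking “above 9 miles” as nine even):

```python
# Unpacking the numbers in Trevithick's letter: 9 miles in 4 hours
# 5 minutes overall, against "nearly 5 miles pr hour" while moving.

distance_miles = 9.0          # "above 9 miles", taken as 9 even
elapsed_hours = 4 + 5 / 60    # total journey time, stoppages included

print(f"overall average: {distance_miles / elapsed_hours:.1f} mph")  # ~2.2

working_mph = 5.0             # Trevithick's speed "while working"
moving_hours = distance_miles / working_mph                          # 1.8
print(f"implied time lost to trees and rocks: "
      f"{elapsed_hours - moving_hours:.1f} hours")                   # ~2.3
```

In other words, the engine itself kept a respectable pace; more than half the journey time went to clearing the obstructed road.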

We should not picture the Penydarren engine in the mind’s eye as the iconic, fully-developed steam locomotive of the mid-19th century. The railbed itself looked very different from what we might imagine: the cast-iron rails were outward-facing Ls, whose vertical stroke kept the wheels from leaving the track. Nails driven into two parallel rows of stone blocks held the rails in place. This arrangement avoided having perpendicular rail ties (or sleepers, as the British call them) that could trip up the horses, who walked between the rails as they pulled their cargo. Trevithick’s locomotive resembled a stationary engine jury-rigged to a wheeled platform. A crosshead and large gears carried power from the cylinder down to the left-hand wheels only (the right side received no power), and a flywheel kept the vehicle from lurching each time the piston reached the dead-center position. Trevithick’s goal was to show off the versatility of high-pressure steam, not to launch a railroad revolution.

A replica showing what the Penydarren locomotive may have looked like. Note the fixed gearing system for delivering power to the two wheels in the foreground, the flywheel in the background, and the L-shaped rails. Notice also how much it resembles Trevithick’s stationary steam engine, with additional mechanisms to transmit power to the wheels.

The Penydarren locomotive performed several more trial runs; on at least one, the rails cracked under the engine’s weight: a portent of a major technical obstacle yet to be overcome before steam railways could find lasting success. Trevithick then seems to have removed the engine and put it to work running a hammer in the ironworks; what became of the rest of the vehicle is unknown.[25]

Many other endeavors captured Trevithick’s attention in the following years, among them stationary engines at Penydarren and elsewhere, steam dredging experiments, and a scheme to use a steam tug to drag a fireship into the midst of Napoleon’s putative invasion fleet at Boulogne (as we have seen, Robert Fulton was at this time trying to sell the British government on his “torpedoes” to serve the same purpose). In 1808, he made one last stab at steam locomotion, a demonstration vehicle called the Catch-me-who-can that ran over a temporary circular track in London. Again, rail breakage proved a problem. Trevithick hoped to earn some money from paying riders and to attract the interest of investors, but he failed on both counts.[26]

The reasons for the lack of interest are clear. Trevithick’s locomotives were neither much faster nor obviously cheaper than a team of horses, and they came with a host of new, unsolved technical problems. Twenty more years would elapse before rails would begin to seriously challenge canals as major transport arteries for Britain, not mere peripheral capillaries. To make that happen would require improvements in locomotives, better rails, and a new way of thinking about the comparative economics of transportation.

Trevithick himself had twenty-five more years of restless, peripatetic life ahead of him, much of it spent on fruitless mining ventures in South and Central America. In an irresistible historical coincidence, in 1827, at the end of a financially ruinous trip to Costa Rica, he crossed paths with another English engineer named Robert Stephenson. Stephenson gave the downtrodden older man fifty pounds to help him get home. After a spate of mostly failed or abortive projects, Trevithick died in 1833. The one item of real wealth remaining to him, a gold watch brought back from South America, went to defray his funeral expenses.[27] Young Stephenson, however, returned to much brighter prospects in England. He and his father would soon redeem the promise hinted at by the trials at Penydarren.


Read more
From ACS to Altair: The Rise of the Hobby Computer

[This post is part of “A Bicycle for the Mind.” The complete series can be found here.] The Early Electronics Hobby A certain pattern of technological development recurred many times in the decades around the turn of the twentieth century: a scattered hobby community, tinkering with a new idea, develops it to the point where those hobbyists can sell it as a product. This sets off a frenzy of small entrepreneurial firms, competing to sell to other hobbyists and early adopters. Finally, a handful of firms grow to the point where they can drive down costs through economies of scale and put their smaller competitors out of business. Bicycles, automobiles, airplanes, and radio broadcasting all developed more or less in this way. The personal computer followed this same pattern; indeed, it marks the very last time that a “high-tech” piece of hardware emerged from this kind of hobby-led development. Since that time, new hardware technology has typically depended on new microchips. That is a capital barrier far too high for hobbyists to surmount; but as we have seen, the computer hobbyists lucked into ready-made microchips created for other reasons, but already suited to their purposes. The hobby culture that created the personal computer was historically continuous with the American radio hobby culture of the early twentieth-century, and, to a surprising degree, the foundations of that culture can be traced back to the efforts of one man: Hugo Gernsback. Gernsback (born Gernsbacher, to well-off German Jewish parents) came to the United States from Luxembourg in 1904 at the age of nineteen, shortly after his father’s death. Already fascinated by electrical equipment, American culture, and the fiction of Jules Verne and H.G. Wells, he started a business, the Electro Importing Company, in Manhattan, that offered both retail and mail-order sales of radios and related equipment. His company catalog evolved into a magazine, Modern Electrics, and Gernsback evolved into a publisher and community builder (he founded the Wireless Association of America in 1909 and the Radio League of America in 1915), a role he relished for the rest of his working life.[1] Gernsback (foreground) giving an over-the-air lecture on the future of radio. From his 1922 book, Radio For All, p. 229. The culture that Gernsback nurtured valued hands-on tinkering and forward-looking futurism, and in fact viewed them as two sides of the same coin. Science fiction (“scientifiction,” as Gernsback called it) writing and practical invention went hand in hand, for both were processes for pulling the future into the present. In a May 1909 article in Modern Electrics, for example, Gernsback opined on the prospects for radio communication with Mars: “If we base transmission between the earth and Mars at the same figure as transmission over the earth, a simple calculation will reveal that we must have the enormous power of 70,000 K. W. to our disposition in order to reach Mars,” and went on to propose a plan for building such a transmitter within the next fifteen or twenty years. As science fiction emerged as its own genre with its own publications in the 1920s (many of them also edited by Gernsback), this kind of speculative article mostly disappeared from the pages of electronic hobby magazines. Gernsback himself occasionally dropped in with an editorial, such as a 1962 piece in Radio-Electronics on computer intelligence, but the median electronic magazine article had a much more practical focus. 
Readers were typically hobbyists looking for new projects to build or service technicians wanting to keep up with the latest hardware and industry trends.[2] Nonetheless, the electronic hobbyists were always on the lookout for the new, for the expanding edge of the possible: from vacuum tubes, to televisions, to transistors, and beyond. It’s no surprise that this same group would develop an early interest in building computers. Nearly everyone we find building (or trying to build) a personal or home computer prior to 1977 had close ties to the electronic hobby community.

The Gernsback story also highlights a common feature of hobby communities of all sorts. A subset of radio enthusiasts, seeing the possibility of making money by fulfilling the needs of their fellow hobbyists, started manufacturing businesses to make new equipment for hobby projects, retail businesses to sell that equipment, or publishing businesses to keep the community informed on new equipment and other hobby news. Many of these enterprises made little or no money (at least at first), and were fueled as much by personal passion as by the profit motive; they were the work of hobby-entrepreneurs. It was this kind of hobby-entrepreneur who would first make personal computers available to the public.

The First Personal Computer Hobbyists

The first electronic hobbyist we know of to take an interest in building computers was Stephen Gray. In 1966, he founded the Amateur Computer Society (ACS), an organization that existed mainly to produce a series of quarterly newsletters typed and mimeographed by Gray himself. Gray has little to say about his own biography in the newsletter or in later reflections on the ACS. He reveals that he worked as an editor of the trade magazine Electronics, that he lived in Manhattan and then Darien, Connecticut, that he had been trying to build a computer of his own for several years, and little else. But he clearly knew the radio hobby world. In the fourth number of his newsletter, in February 1967, he floated the idea of a “Standard Amateur Computer Kit” (SACK) that would provide an economical starting point for new hobbyists, writing that:[3]

Amateur computer builders are now much like the early radio amateurs. There’s a lot of home-brew equipment, much patchwork, and most commercial stuff is just too expensive. The ACS can help advance the state of the amateur computer art by designing a standard amateur computer, or at least setting up the specs for one. Although the mere idea of a standard computer makes the true blue home-brew types shudder, the fact is that amateur radio would not be where it is today without the kits and the off-the-shelf equipment available.[4]

By the spring of 1967, Gray had found seventy like-minded members through advertisements in trade and hobby publications, most of them in the United States, but a handful in Canada, Europe, and Japan. We know little about the backgrounds or motivations of these men (and they were exclusively men), but when their employment is mentioned, they are found at major computer, electronics, or aerospace firms; at national labs; or at large universities. We can surmise that most worked with or on computers as part of their day job. A few letter writers disclose prior involvement in hobby electronics and radio, and from the many references to attempts to imitate the PDP-8 architecture, we can also guess that many members had some association with DEC minicomputer culture.
It is speculative but plausible to guess that the 1965 release of the PDP-8 might have instigated Gray’s own home computer project and the later creation of the ACS. Its relatively low price, compact size, and simple design may have catalyzed the notion that home computers lay just out of reach, at least for Gray and his band of like-minded enthusiasts.

Whatever their backgrounds and motivations, the efforts of these amateurs to actually build a computer proved mostly fruitless in these early years. The January 1968 newsletter reported a grand total of two survey respondents who possessed an actual working computer, though respondents as a whole had sunk an average of two years and $650 into their projects ($6,000 in 2024 dollars). The problem of assembling one’s own computer would daunt even the most skilled electronic hobbyist: no microprocessors existed, nor any integrated circuit memory chips, and indeed virtually no chips of any kind, at least at prices a “homebrewer” could afford. Both of the complete computers reported in the survey were built from hand-wired transistor logic. One was constructed from the parts of an old nuclear power system control computer, PRODAC IV. Jim Sutherland took the PRODAC’s remains home from his work at Westinghouse after its retirement, and re-dubbed it the ECHO IV (for Electronic Computing Home Operator). Though the ECHO IV was technically a “home” computer, borrowing an existing computer from work was not a path that most would-be home-brewers could follow. This hardly had the makings of a technological revolution. The other complete “computer,” the EL-65 by Hans Ellenberger of Switzerland, was really an electronic desktop calculator: it could perform arithmetic ably enough, but could not be programmed.[5]

The Emergence of the Hobby-Entrepreneur

As integrated circuit technology got better and cheaper, the situation for would-be computer builders gradually improved. By 1971, the first, very feeble, home computer kits appeared on the market, the first signs of Gray’s “SACK.” Though neither used a microprocessor, both took advantage of the falling prices of integrated circuits: the CPU of each consisted of dozens of small chips wired together. The first was the National Radio Institute (NRI) 832, the hardware accompaniment to a computer technician course offered by the NRI, and priced at about $500. Unsurprisingly, its designer, Lou Frenzel, was a radio hobby enthusiast and a subscriber to Stephen Gray’s ACS Newsletter. But the NRI 832 is barely recognizable as a functional computer: it had a measly sixteen 8-bit words of read-only memory, configured by mechanical switches (with an additional sixteen bytes of random-access memory available for purchase).[6]

The NRI 832. The switches on the left were used to set the values of the bits in the tiny memory. The banks of lights at the top left and right, showing the binary values of the program counter and accumulator, were the only form of output [vintagecomputer.net].
The $750 Kenbak-1 that appeared the same year was nominally more capable, with 256 bytes of memory, though implemented with shift-register chips (accessible one bit at a time), not random-access memory. Indeed, the entire machine had a serial-processing architecture, processing only one bit at a time through the CPU, and ran at only about 1,000 instructions per second—very slow for an electronic computer. Like the NRI 832, it offered only switches as input and only a small panel of display lights for showing register contents as output. Its creator, John Blankenbaker, was a radio lover from boyhood before training as an electronics technician in the Navy. He had worked on computers since the 1950s, starting with the Bureau of Standards’ SEAC. Intrigued by the possibility of bringing a computer home, he spent years tinkering with spare parts toward a computer of his own, becoming, in effect, a one-man ACS. By 1971 he thought he had a saleable device that could be used for teaching programming, and he formed the eponymous “Kenbak” company to sell it.[7]

Blankenbaker was the first of the amateur computerists to try to bring his passion to market; the first hobby-entrepreneur of the personal computer. He was not the most successful. I found no records of the sales of the NRI 832, but by Blankenbaker’s own testimony, only forty-four Kenbak-1s were sold. Here were home computer kits readily available at a reasonable price, four years before Altair. Why did they fall flat? As we have seen, most members of the Amateur Computer Society had aimed to make a PDP-8 or something like it; this was the most familiar computer of the 1960s and early 1970s, and provided the mental model for what a home computer could and should be. The NRI 832 and Kenbak-1 came nowhere close to the capabilities of a PDP-8, nor were they designed to be extensible or expandable in any way that might allow them to transcend their basic beginnings. These were not machines to stir the imaginative loins of the would-be home computer owner.

Hobby-Entrepreneurship in the Open

These early, halting steps towards a home computer, from Stephen Gray to the Kenbak-1, took place in the shadows, unknown to all but a few, the hidden passion of a handful of enthusiasts exchanging mimeographed newsletters. But several years later, the dream of a home computer burst into the open in a series of stories and advertisements in major hobby magazines. Microprocessors had become widely available. For those hooked on the excitement of interacting one-on-one with a computer, the possibility of owning their own machine felt tantalizingly close. A new group of hobby-entrepreneurs now tried to make their mark by providing computer kits to their fellow enthusiasts, with rather more success than NRI and Kenbak.

The overture came in the fall of 1973, with Don Lancaster’s “TV Typewriter,” featured on the cover of the September issue of Radio-Electronics (a Gernsback publication, though Gernsback himself was, by then, several years dead). Lancaster, like most of the people we have met in this chapter, was an amateur “ham” radio operator and electronics tinkerer. Though he had a day job at Goodyear Aerospace in Phoenix, Arizona, he figured out how to make a few extra bucks from his hobby by publishing projects in magazines and selling pre-built circuit boards for those projects via a Texas hobby firm called Southwest Technical Products (SWTPC).

The 1973 Radio-Electronics TV Typewriter cover.
His TV Typewriter was, of course, not a computer at all, but the excitement it generated certainly derived from its association with computers. One of many obstacles to a useful home computer was the lack of a practical output device: something more useful than the handful of glowing lights that the Kenbak-1 sported, but cheaper and more compact than the then-standard computer input/output device, a bulky teletype terminal. Lancaster’s electronic keyboard, which required about $120 in parts, could hook up to an ordinary television and turn it into a video text terminal, displaying up to sixteen lines of thirty-two characters each. Shift registers continued to be the only cheap form of semiconductor memory, and so that was what Lancaster used for storing the characters to be displayed on screen (a toy model of this kind of memory appears below). Lancaster gave the parts list and schematic for the TV Typewriter away for free, but made money by selling pre-built subassemblies via SWTPC that saved buyers time and effort, and by publishing guidebooks like the TV Typewriter Cookbook.[8]

The next major landmark appeared six months later in a ham radio magazine, QST, named after the three-letter ham code for “calling all stations.” A small ad touted the availability of “THE TOTALLY NEW AND THE VERY FIRST MINI-COMPUTER DESIGNED FOR THE ELECTRONIC/COMPUTER HOBBYIST” with kit prices as low as $440. This was the SCELBI-8H, the first computer kit based around a microprocessor, in this case the Intel 8008. Its creator, Nat Wadsworth, lived in Connecticut, and became enthusiastic about the microprocessor after attending a seminar given by Intel in 1972, as part of his job as an electrical engineer at an electronics firm. Wadsworth was another ham radio enthusiast, and already enough of a personal computing obsessive to have purchased a surplus DEC PDP-8 at a discount for home use (he paid “only” $2,000, about $15,000 in 2024 dollars). Since his employer did not share his belief in the 8008, he looked for another outlet for his enthusiasm, and teamed up with two other engineers to develop what became the SCELBI-8H (for SCientific ELectronic BIological). Their ads drew thousands of responses and hundreds of orders over the following months, though they ended up losing money on every machine sold.[9]

A similar machine appeared several months later, this time as a hobby magazine story, on the cover of the July 1974 issue of Radio-Electronics: “Build the Mark-8 Minicomputer,” ran the headline (notice again the “minicomputer” terminology: a PDP-8 of one’s own remained the dream). The Mark-8 came from Jonathan Titus, a grad student from Virginia, who had built his own 8008-based computer and wanted to share the design with the rest of the hobby. Unlike SCELBI, he did not sell it as a complete machine or even a kit: he expected the Radio-Electronics reader to buy and assemble everything themselves. That is not to say that Titus made no money: he followed a hobby-entrepreneur business model similar to Don Lancaster’s, offering an instructional guidebook for $5, and making some pre-made boards available for sale through a retailer in New Jersey, Techniques, Inc.

The 1974 Mark-8 Radio-Electronics cover.

The SCELBI-8H and Mark-8 looked much more like a “real” minicomputer than the NRI 832 or Kenbak-1. A hobbyist hungry for a PDP-8-like machine of their own could recognize in this generation of machines something edible, at least.
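Part of what kept the earlier machines unappetizing was that shift-register memory, which worked nothing like random-access memory: bits circulated endlessly in a loop, and a given bit could be read or rewritten only at the instant it passed the register’s output tap. The toy Python model below—a rough sketch with hypothetical names, making no attempt at cycle accuracy—illustrates why access to such memory is inherently serial:

```python
from collections import deque

class ShiftRegisterMemory:
    """Toy model of a recirculating shift-register store.

    Bits circulate in a fixed loop; only the bit currently at the
    output "tap" can be read, so access time depends on where the
    wanted bit happens to be in the loop -- unlike RAM, where any
    address is equally close.
    """
    def __init__(self, size_bits):
        self.loop = deque([0] * size_bits)
        self.at_tap = 0                      # index of the bit now at the tap

    def clock(self):
        """Advance the loop by one bit, as the hardware clock would."""
        self.loop.rotate(-1)
        self.at_tap = (self.at_tap + 1) % len(self.loop)

    def read(self, index):
        """Clock until the wanted bit reaches the tap, then read it."""
        waited = 0
        while self.at_tap != index:
            self.clock()
            waited += 1
        return self.loop[0], waited          # value, plus cycles spent waiting

mem = ShiftRegisterMemory(2048)              # 256 bytes, as in the Kenbak-1
value, waited = mem.read(1500)
print(f"got bit {value} after waiting {waited} cycles")
```

For a video display like Lancaster’s, this constraint was less painful than it sounds: the characters only needed to come around at the steady rate at which the television’s scan repainted the screen.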
Both the SCELBI-8H and the Mark-8, by contrast, used an eight-bit parallel processor, not an antiquated bit-serial architecture; came with one kilobyte of random-access memory; and were designed to support textual input/output devices. Most importantly, both could be extended with additional memory or I/O cards. These were computers you could tinker with, machines that could become an ongoing hobby project in and of themselves. A ham radio operator and engineering student in Austin, Texas named Terry Ritter spent over a year getting his Mark-8 fully operational with all of the accessories that he wanted, including an oscilloscope display and cassette tape storage.[10]

In the second half of 1974, a community of hundreds of hobbyists like Ritter began to form around 8008-based computers, significantly larger than the tiny cadre of Amateur Computer Society members. In September 1974, Hal Singer began publishing the Mark-8 User Group Newsletter (later renamed the Micro-8 Newsletter) for 8008 enthusiasts out of his office at the Cabrillo High School Computer Center in Lompoc, California. He attracted readers from all across the country: California and New York, yes, but also Iowa, Missouri, and Indiana. Hal Chamberlain started the Computer Hobbyist newsletter two months later. Hobby entrepreneurship expanded around the new machines as well: Robert Suding formed a company in Denver called the Digital Group to sell a packet of upgrade plans for the Mark-8.[11] The first tender blossoms of a hobby computer community had begun to emerge. Then another computer arrived like a spring thunderstorm, drawing whole gardens of hobbyists up across the country and casting the efforts of the likes of Jonathan Titus and Hal Singer in the shade. It, too, was a response to the arrival of the Mark-8, from a rival publication in search of a blockbuster cover story of its own.

Altair Arrives

Art Salsberg and Les Solomon, editors at Popular Electronics, were not oblivious to the trends in the hobby, and had been on the lookout for a home computer kit they could put on their cover since the appearance of the TV Typewriter in the fall of 1973. But the July 1974 Mark-8 cover story at rival Radio-Electronics threw a wrench in their plans: they had an 8008-based design of their own lined up, but couldn’t publish something that looked like a copy-cat machine. They needed something better, something to one-up the Mark-8. So, they turned to Ed Roberts. He had nothing concrete to show, but he had pitched Solomon a promise that he could build a computer around the new, more powerful Intel 8080 processor. This pitch became Altair—named, according to legend, by Solomon’s daughter, after the destination of the Enterprise in the Star Trek episode “Amok Time”—and it set the hobby electronics world on fire when it appeared as the January 1975 Popular Electronics cover story.

The famous Popular Electronics Altair cover story.

Altair, it should be clear by now, was continuous with what came before: people had been dreaming of and hacking together home computers for years, and each year the process became easier and more accessible, until by 1974 any electronics hobbyist could order a kit or parts for a basic home computer for around $500. What set the Altair apart, what made it special, was the sheer amount of power it offered for the price, compared to the SCELBI-8H and Mark-8. The Altair’s value proposition poured gasoline onto smoldering embers: it was an accelerant that transformed a slowly expanding hobby community into a rapidly expanding industry.
The Altair’s surprising power derived ultimately from the nerve of MITS founder Ed Roberts. Roberts, like so many of his fellow electronics hobbyists, had developed an early passion for radio technology that was honed into a professional skill by technical training in the U.S. armed forces—the Air Force, in Roberts’ case. He founded Micro Instrumentation and Telemetry Systems (MITS) in Albuquerque with fellow Air Force officer Forrest Mims to sell electronic telemetry modules for model rockets. A crossover hobby-entrepreneur business, MITS straddled two of its founders’ hobby interests, but did not prove very profitable. A pivot in 1971 to selling low-cost kits to satisfy the booming demand for pocket calculators, on the other hand, proved very successful—until it wasn’t. By 1974 the big semiconductor firms had vertically integrated and driven most of the small calculator makers out of business.

For Roberts, the growing hobby interest in home computers offered a chance to save a dying MITS, and he was willing to bet the company on that chance. Though MITS was already $300,000 in debt, in September 1974 he secured a loan of $65,000 from a trusting local banker in Albuquerque. With that money, he negotiated a steep volume discount from Intel by offering to buy a large quantity of “ding-and-dent” 8080 processors with cosmetic damage. Though the 8080 listed for $360, MITS got them for $75 each. So, while Wadsworth at SCELBI (and builders assembling their own Mark-8s) were paying $120 for 8008 processors, MITS was paying little more than half that for a far better processor.[12]

It is hard to overstate what a substantial leap forward in capabilities the 8080 represented: it ran much faster than the 8008, integrated more capabilities into a single chip (for which the 8008 required several auxiliary chips), could support four times as much memory, and had a much more flexible 40-pin interface (versus the 18 pins on the 8008). The 8080 also kept its program stack in external memory, while the 8008 had a strictly size-limited on-CPU stack, which limited the software that could be written for it (see the sketch below). The 8080 represented such a large leap forward that, until 1981, essentially the entire personal and home computer industry ran on the 8080 and two similar designs: the Zilog Z80 (a processor that was software-compatible with the 8080 but ran at higher speeds), and the MOS Technology 6502 (a budget chip with roughly the same capabilities as the 8080).[13]

The release of the Altair kit at a total price of $395 instantly made the 8008-based computers irrelevant. Nat Wadsworth of SCELBI reported that he was “devastated by appearance of Altair,” and “couldn’t understand how it could sell at that price.” Not only was the price right, the Altair also looked more like a minicomputer than anything before it. To be sure, it came standard with a measly 256 bytes of memory and the same “switches and lights” interface as the ancient kits from 1971. It would take quite a lot of additional money and effort to turn it into a fully functional computer system. But it came full of promise, in a real case with an extensible card slot system for adding additional memory and input/output controllers.
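To make concrete why that stack difference mattered: the 8008 kept its return addresses in a small fixed stack on the chip itself, so subroutine calls could nest only seven levels deep before the oldest return address was silently destroyed. Here is a deliberately simplified Python illustration—hypothetical names, no pretense of modeling the real chip’s every behavior:

```python
class OnChipStack:
    """Toy model of a CPU whose return-address stack lives on-chip,
    with room for only seven nested calls (as on the Intel 8008)."""

    DEPTH = 7

    def __init__(self):
        self.slots = []

    def call(self, return_addr):
        if len(self.slots) == self.DEPTH:
            # The real chip wraps its 3-bit stack pointer, silently
            # overwriting the oldest return address.
            self.slots.pop(0)
        self.slots.append(return_addr)

    def ret(self):
        # None stands in for the garbage real hardware would return.
        return self.slots.pop() if self.slots else None

cpu = OnChipStack()
for addr in range(10):          # ten nested subroutine calls...
    cpu.call(addr)
unwound = [cpu.ret() for _ in range(10)]
print(unwound)  # [9, 8, 7, 6, 5, 4, 3, None, None, None]:
                # the three outermost calls can never return
```

The 8080, whose stack pointer addresses ordinary RAM, imposed no such ceiling: nesting depth was bounded only by installed memory, which mattered for any software much more ambitious than a short utility routine.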
The Altair was by far the closest thing to a PDP-8 that had ever existed at a hobbyist price point—just as the Popular Electronics cover claimed: “World’s First Minicomputer Kit to Rival Commercial Models.” It made the dream of the home computer, long cherished by thousands of computer lovers, seem not merely imminent, but immanent: the digital divine made manifest. And this is why the arrival of the MITS Altair, not of the Kenbak-1 or the SCELBI-8H, is remembered as the founding event of the personal computer industry.[14]

All that said, even a tricked-out Altair was hardly useful, in an economic sense. If pocket calculators began as a tool for business people, and then became so cheap that people bought them as a toy, the personal computer began as something so expensive and incapable that only people who enjoyed them as a toy would buy them. Next time, we will look at the first years of the personal computer industry: a time when the hobby computer producers briefly flourished and then wilted, mostly replaced and outcompeted by larger, more “serious” firms—but a time when the culture of the typical computer user remained very much a culture of play.

Appendix: Micral N, The First Useful Microcomputer

There is another machine sometimes cited as the first personal computer: the Micral N. Much like Nat Wadsworth, French engineer François Gernelle was smitten with the possibilities opened up by the Intel 8008 microprocessor, but could not convince his employer, Intertechnique, to use it in their products. So, he joined other Intertechnique defectors to form Réalisation d’Études Électroniques (R2E), and began pursuing some of their erstwhile company’s clients. In December 1972, R2E signed an agreement with one of those clients, the Institut National de la Recherche Agronomique (INRA, a government agronomical research center), to deliver a process control computer for their labs at a fraction of the price of a PDP-8. Gernelle and his coworkers toiled through the winter in a basement in the Paris suburb of Châtenay-Malabry to deliver a finished system in April 1973, based on the 8008 chip and offered at a base price of 8,500 francs, about $2,000 in 1973 dollars (one fifth the going rate for a PDP-8).[15]

The Micral N was a useful computer, not a toy or a plaything. It was not marketed and sold to hobbyists, but to organizations in need of a real-time controller. That is to say, it served the same role in the lab or on the factory floor that minicomputers had served for the previous decade. It can certainly be called a microcomputer by dint of its hardware. But the Altair lineage stands out because it changed how computers were used and by whom; the microprocessor happened to make that economically possible, but it did not automatically make every machine into which it was placed a personal computer.

The Micral N looks very much like the Altair on the outside, but was marketed entirely differently [Photo by Rama / CC BY-SA 2.0 FR].

Useful personal computers would come, in time. But the demand that existed for a computer in one’s own home or office in the mid-1970s came from enthusiasts with a desire to tinker and play on a computer, not to get serious business done on one. No one had yet written and published the productivity software that would even make a serious home or office computer conceivable.
Moreover, it was still far too expensive and difficult to assemble a comprehensive office computer system (with a display, ample memory, and external mass storage for saving files) to attract people who didn’t already love working on computers for their own sake. Until these circumstances changed, which would take several years, play reigned unchallenged among home computer users. The Micral N is an interesting piece of history, but it is an instructive contrast with the story of the personal computer, not a part of it.

Britain’s Steam Empire

The British empire of the nineteenth century dominated the world’s oceans and much of its landmass: Canada, southern and northeastern Africa, the Indian subcontinent, and Australia. At its world-straddling Victorian peak, this political and economic machine ran on the power of coal and steam; the same can be said of all the other major powers of the time, from also-ran empires such as France and the Netherlands, to the rising states of Germany and the United States. Two technologies bound the far-flung British empire together: steamships and the telegraph. And the latter, which might seem to represent a new, independent technical paradigm based on electricity, depended on the former. Only steamships, which could adjust course and speed at will regardless of prevailing winds, could effectively lay underwater cable.[1]

A 1901 map of the cable network of the Eastern Telegraph Company (which later became Cable & Wireless) shows the pervasive commercial and imperial power of Victorian London.

Not just an instrument of imperial power, the steamer also created new imperial appetites: the British empire and others would seize new territories just for the sake of provisioning their steamships and protecting the routes they plied. Within this world system under British hegemony, access to coal became a central economic and strategic factor. As the economist Stanley Jevons wrote in his 1865 treatise on The Coal Question:

Day by day it becomes more obvious that the Coal we happily possess in excellent quality and abundance is the Mainspring of Modern Material Civilization. …Coal, in truth, stands not beside but entirely above all other commodities. It is the material energy of the country — the universal aid — the factor in everything we do. With coal almost any feat is possible or easy; without it we are thrown back into the laborious poverty of early times.[2]

Steamboats and the Projection of Power

As the states of Atlantic Europe—Portugal and Spain, then later the Netherlands, England, and France—began to explore and conquer along the coasts of Africa and Asia in the sixteenth and seventeenth centuries, their cannon-armed ships proved one of their major advantages. Though the states of India and Indonesia had access to their own gunpowder weaponry, they did not have the ship-building technology to build stable firing platforms for large cannon broadsides. The mobile fortresses that the Europeans brought with them allowed them to dominate the sea lanes and coasts, wresting control of the Indian Ocean trade from the local powers.[3]

What they could not do, however, was project power inland from the sea. The galleons and later heavily armed ships of the Europeans could not sail upriver. In this era, Europeans could rarely dominate inland states. When it did happen, as in India, it typically required years or decades of warfare and politicking, with the aid of local alliances. The steamboat, however, opened the rivers of Africa and Asia to lightning attacks or shows of force: directly by armed gunboats themselves, or indirectly through armies moving upriver supplied by steam-powered craft. We already know, of course, how Laird used steamboats in his expedition up the Niger in 1832. Although his intent was purely commercial, not belligerent, he had demonstrated that the interior of Africa could be navigated by steam. When combined with quinine to protect European settlers from malaria, the steamboat would help open a new wave of imperial claims on African territory.
But even before Laird’s expedition, the British empire had begun to experiment with the capabilities of riverine steamboats. British imperial policy in Asia still operated under the corporate auspices of the East India Company (EIC), not under the British government, and in 1824 the EIC went to war with Burma over control of territories between the Burmese Empire and British India, in what is now Bangladesh. It so happened that the company had several steamers on hand, built in the dockyards of Calcutta (now Kolkata), and the local commanders put them to work in war service (much as Andrew Jackson had done with Shreve’s Enterprise in 1814).[4] Most impressive was Diana, which penetrated 400 miles up the Irrawaddy to the Burmese imperial capital at Amarapura: “she towed sailing ships into position, transported troops, reconnoitered advance positions, and bombarded Burmese fortifications with her swivel guns and Congreve rockets.”[5] She also captured Burmese warships, which could not outrun her and whose small cannons on fixed mounts could not effectively put fire on her either.

A depiction of an attack on Burmese fortifications by the British fleet. The steamship Diana is at right.

In the Burmese war, however, steamships had served as the supporting cast. In the First Opium War, the steamship Nemesis took a star turn. The East India Company traditionally made its money by bringing the goods of the East—mainly tea, spices, and cotton cloth—back west to Europe. In the nineteenth century, however, the directors had found an even more profitable way to extract money from their holdings in the subcontinent: by growing poppies and trading the extracted drug even further east, to the opium dens of China. The Qing state, understandably, grew to resent this trade that immiserated its citizens, and so in 1839 the emperor promulgated a ban on the drug. The iron-hulled Nemesis was built and dispatched to China by the EIC with the express purpose of carrying war up China’s rivers. She mounted a powerful main battery of twin swivel-mount 32-pounders and numerous smaller weapons, and with a shallow draft she was able to navigate not just up the Pearl River, but into the shallow waterways around Canton (Guangzhou), destroying fortifications and ships and wreaking general havoc. Later, Nemesis and several other steamers, towing sailing warships, brought British naval power 150 miles up the Yangtze to its junction with the Grand Canal. The threat to this vital economic lifeline brought the Chinese government to terms.[6]

Nemesis and several British boats destroying a fleet of Chinese junks in 1841.

Steamboats continued to serve in imperial wars throughout the nineteenth century. A steam-powered naval force dispatched from Hong Kong helped to break the Indian Rebellion of 1857. Steamers supplied Herbert Kitchener’s 1898 expedition up the Nile to the Sudan, with the dual purpose of avenging the death of Charles “Chinese” Gordon fourteen years earlier and of preventing the French from securing a foothold on the Nile. His steamboat force consisted of a mix of naval gunboats and a civilian ship requisitioned from the ubiquitous Cook & Son tourism and logistics firm.[7] Kitchener could only dispatch such an expedition because of the British power base in Cairo (from which Britain ruled Egypt through a puppet khedive), and that power base existed for one primary reason: to protect the Suez Canal.
The Geography of Steam: Suez

In 1798, Napoleon’s army of conquest, revolution, and Enlightenment arrived in Egypt with the aim of controlling the eastern half of the Mediterranean and cutting off Britain’s overland link to India. There they uncovered the remnants of a canal linking the Nile Delta to the Red Sea. Constructed in antiquity and restored several times after, it had fallen into disuse sometime in the medieval period. It’s impossible to know for certain, but when operable, this canal had probably served as a regional waterway connecting the Egyptian heartland around the Nile with the lands around the head of the Red Sea. By the eighteenth century, in an age of global commerce and global empires, however, a nautical connection between the Mediterranean and Red Sea had more far-reaching implications.[8]

A reconstruction of the possible location of the ancient Nile-Suez canal. [Picture by Annie Brocolie / CC BY-SA 2.5]

Napoleon intended to restore the canal, but before any work could commence, France’s forces in Egypt withdrew in the face of a sustained Anglo-Ottoman assault. Though British commercial and imperial interests presented a far stronger case for a canal than any benefits France might have hoped to get from it, the British government fretted about upsetting the balance of power in the Middle East and disrupting their textile industry’s access to Egyptian cotton. They contented themselves instead with a cumbrous overland route to link the Red Sea and the Mediterranean. Meanwhile, a series of French engineers and diplomats, culminating in Ferdinand de Lesseps, pressed for the concession required to build a sea-to-sea Suez Canal, and construction under French engineers finally began in 1861. The route formally opened in November 1869 in a grand celebration that attracted most of the crowned heads of continental Europe.[9]

It was just as well that the project was delayed: it allowed for the substitution, in 1865, of steam dredges for conscripted labor at the work site. Of the hundred million cubic yards of earth excavated for the canal, four-fifths were dug out by iron and steam rather than muscle—machines generating 10,000 horsepower at the cost of £20,000 in coal per month.[10] Without mechanical aid, the project would have dragged on well into the 1870s, if it were completed at all. Moreover, Napoleon’s precocious belief in the project notwithstanding, the canal’s ultimate fiscal health depended on the existence of ocean-going steamships as well. For a sailing ship, depending on the direction of travel and the season, the powerful trade winds could make the southern route the faster option, or at least the more efficient one given the tolls on the canal.[11] But for a steamship, the benefits of cutting thousands of miles off the journey were three-fold: it didn’t just save time, it also saved fuel, which in turn freed more space for cargo. Given the tradeoffs, as historian Max Fletcher wrote, “[a]lmost without exception, the Suez Canal was an all-steamer route.”[12]

The modern Suez Canal, with the Mediterranean Sea on the left and the Red Sea on the right. [Picture by Pierre Markuse / CC BY 2.0]

Ironically, the British, too conservative in their instincts to back the canal project, would nonetheless derive far more obvious benefit from it than the French government or investors, who struggled to make their money back in the early years of the canal. The new canal became the lifeline to the empire in India and beyond.
This new channel for the transit of people and goods was soon complemented by an even more rapid channel for the transmission of intelligence. The first great achievement of the global telegraph age was the transatlantic cable laid in 1866 by Brunel’s Great Eastern, whose cavernous bulk allowed it to lay the entire line from Ireland to Newfoundland in a single piece.[13] This particular connection served mainly commercial interests, but the Great Eastern went on to participate in the laying of a cable from Suez to Aden and on to Bombay in 1870, providing relatively instantaneous electric communication (modulo a few intermediate hops) from London to its most precious imperial possession.[14] The importance of the Suez for quick communications with India in turn led to further aggressive British expansion in 1882: the bombardment of Alexandria and the de facto conquest of an Egypt still nominally loyal to the Sultan in Istanbul.

This was not the only such instance. Steam power opened up new ways for empires to exert their might, but also pulled them to new places, sought out only because steam power itself had made them important.

The Geography of Steam: Coaling Stations

In that vein, coaling stations—coastal and island stations for restocking ships with fuel—became an essential component of global empire. In 1839, the British seized the port of Aden (on the gulf of the same name) from the Sultan of Lahej for exactly that purpose: to serve as a coaling station for the steamers operating between the Red Sea and India.[15] Other, pre-existing waystations waxed or waned in importance along with the shift from the geography of sail to that of steam. St. Helena in the Atlantic, governed by the East India Company since the 1650s, could only be of use to ships returning from Asia in the age of sail, due to the prevailing trade winds that pushed outbound ships towards South America. The advent of steam made an expansion of St. Helena’s role possible, but then the opening of Suez diverted traffic away from the South Atlantic altogether. The opening of the Panama Canal similarly eclipsed the Falkland Islands’ position as the gateway to the Pacific.[16]

In the case of shore-bound stations such as Aden, the need to protect the station itself sometimes led to new imperial commitments in its hinterlands, pulling empire onward in the service of steam. Aden’s importance only multiplied with the opening of the Suez Canal, which now made it part of the seven-thousand-mile relay system between Great Britain and India. Aggressive moves by the Ottoman Empire seemed to imperil this lifeline, and so the existence of the station became the justification for Britain to create a protectorate (a collection of vassal states, in effect) over 100,000 square miles of the Arabian Peninsula.[17]

Britain created the 100,000-square-mile Aden protectorate to safeguard its steamship route to India.

Coaling stations acquired local coal where it was available—from North America, South Africa, Bengal, Borneo, or Australia; where it was not, coal had to be brought in, ironically, by sailing ships. But although one lump of coal may seem as good as another, coal was not, in fact, a single fungible commodity. Each seam varied in the ratio and types of chemical impurities it contained, which affected how the coal burned. Above all, the Royal Navy was hungry for the highest quality coal.
By the 1850s, the British Admiralty had determined that a hard coal from the deeper layers of certain coal measures in South Wales exceeded all others in the qualities required for naval operations: a maximum of energy and a minimum of the residues that would dirty engines and the black smoke that would give away a ship’s position over the horizon. In 1871 the Navy launched its first all-steam oceangoing warship, HMS Devastation, which needed, at full bore, 150 tons of this top-notch coal per day, without which it would become “the veriest hulk in the navy.” The coal mines lining a series of north-south valleys along the Bristol Channel, which had previously supplied the local iron industry, thus became part of a global supply chain. The Admiralty demanded access to imported Welsh coal across the globe, in every port where the Navy refueled, even where local supplies could be found.[18]

The dark green area indicates the coal seams of South Wales, where the best steam coal in the world could be found.

The British supply network far exceeded that of any other nation in its breadth and reliability, which gave their navy a global operational capacity that no other fleet could match. When the Russians sent their Baltic fleet to attack Japan in 1905, the British refused it coaling service and pressured the French to do likewise, leaving the ships reliant on sub-par German supplies. The fleet suffered repeated delays and quality shortfalls in its coal before meeting its grim fate in the Tsushima Strait. Aleksey Novikov-Priboi, a sailor on one of the Russian ships, later wrote that “coal had developed into an idol, to which we sacrificed strength, health, and comfort. We thought only in terms of coal, which had become a sort of black veil hiding all else, as if the business of the squadron had not been to fight, but simply to get to Japan.”[19] Even the rising naval power of the United States, stoked by the dreams of Alfred Mahan, could scarcely operate outside its home waters without British sufferance. The proud Great White Fleet of the United States that circumnavigated the globe to show the flag found itself repeatedly humbled by the failures of its supply network, reliant on British colliers or left begging for low-quality local supplies.[20]

But if British steam power on the oceans still outshone that of the U.S. even beyond the turn of the twentieth century, on land it was another matter, as we shall see next time.

The Hobby Computer Culture

[This post is part of “A Bicycle for the Mind.” The complete series can be found here.]

From 1975 through early 1977, the use of personal computers remained almost exclusively the province of hobbyists who loved to play with computers and found them inherently fascinating. When BYTE magazine came out with its premier issue in 1975, the cover called computers “the world’s greatest toy.” When Bill Gates wrote about the value of good software in the spring of 1976, he framed his argument in terms of making the computer interesting, not useful: “…software makes the difference between a computer being a fascinating educational tool for years and being an exciting enigma for a few months and then gathering dust in the closet.”[1] Even as late as 1978, an informed observer could still consider interest in personal computers to be exclusive to a self-limiting community of hobbyists. Jim Warren, editor of Dr. Dobb’s Journal of Computer Calisthenics and Orthodontia, predicted a maximum market of one million home computers, expecting them to be somewhat more popular than ham radio, which attracted about 300,000 practitioners.[2]

A survey conducted by BYTE magazine in late 1976 shows that these hobbyists were well-educated (72% had at least a bachelor’s degree), well-off (with a median annual income of $20,000, or $123,000 in 2025 dollars), and overwhelmingly (99%) male. Based on the letters and articles appearing in BYTE in that same bicentennial year of 1976, it is clear that what interested these hobbyists above all was the computers themselves: which one to buy, how to build it, how to program it, how to expand and accessorize it.[3] Discussion of practical software applications appeared infrequently. One intrepid soul went so far as to hypothesize a microcomputer-based accounting program, but he doesn’t seem to have actually written it. When mention of software did appear, it came most often in the form of games. The few with more serious scientific and statistical work in mind for their home computer complained of the excessive discussion of “super space electronic hangman life-war pong.” Star Trek games were especially popular: in July, D.E. Hipps of Miami advertised a Star Trek BASIC game for sale for $10; in August, Glen Brickley of Florissant, Missouri wrote about demoing his “favorite version of Star Trek” for friends and neighbors; and in that same month, BYTE published, with pride, “the first version of Star Trek to be printed in full in BYTE” (though the author consistently misspelled “phasers” as “phasors”).

Most computer hobbyists were electronic hobbyists first, and the electronics hobby grew up side-by-side with modern science fiction and shared its fascination with the possibilities of future technology. We can guess that this is what drew them to this rare piece of popular culture that took the future and the “what-ifs” it poses seriously, rather than treating it as a mere backdrop for adventure stories.[4]

The June 1976 issue of Interface is one of many examples of the hobbyists’ ongoing fascination with Star Trek.

Other than a shared interest in computers—and, apparently, Star Trek—three kinds of organizations brought these men together: local clubs, where they could share expertise in software and hardware and build a sense of belonging and community; magazines like BYTE, where they could learn about new products and get project ideas; and retail stores, where they could try out the latest models and shoot the shit with fellow enthusiasts.
The computer hobbyists were also bound by a force more diffuse than any of these concrete social forms: a shared mythology of the origins of hobby computing that gave broader social and cultural meaning to their community.

The Clubs

The most famous computer club of all, of course, is the Homebrew Computer Club, headquartered in Silicon Valley, whose story is well documented in several excellent sources, especially Steven Levy’s book, Hackers. Its fame is well-deserved, for its role as the incubator of Apple Computer, if nothing else. But the focus of the historical literature on Homebrew as the computer club has tended to distort the image of American personal computing as a whole. The Homebrew Computer Club had a distinctive political bent, due to the radical left leanings of many of its leading members, including co-founder Fred Moore. In 1959, Moore had gone on hunger strike against the Reserve Officers’ Training Corps (ROTC) program at Berkeley, which had been compulsory for all students since the nineteenth century. He later became a draft resister and published a tract against institutionalized learning, Skool Resistance. Yet the bulk of Homebrew’s membership stubbornly stuck to technical hobbyist concerns, despite Moore’s efforts to turn their attention to social causes such as aiding the disabled or protesting nuclear weapons. To the extent that personal computing had a politics, it was a politics of independence, not social justice.[5]

Cover of the second Homebrew Computer Club newsletter, with sketches of members. Only Fred Moore is labeled, but the man with glasses on the far right is likely Lee Felsenstein.

Moreover, excitement about personal computing was not at all a phenomenon confined to the Bay Area. By the summer of 1975, Altair shipments had begun in earnest, and clubs formed across the United States and beyond where enthusiasts could share information and ask for help with their new (or prospective) machines. The movement continued to grow as new companies sprang up and shipped more hobby machines. Over the course of 1976, dozens of clubs advertised their existence or attempted to find a membership through classifieds in BYTE, from the Oregon Computer Club headquartered in Portland (with a membership of forty-nine), to a proposed club in Saint Petersburg, Florida, mooted by one Allen Swan. But, as one might expect, the largest and most successful clubs were concentrated in and around major metropolitan areas with a large pool of existing computer professionals, such as Los Angeles, Chicago, and New York City.[6]

The Amateur Computer Group of New Jersey convened for the first time in June 1975, under the presidency of Sol Libes. Libes, a professor at Union County College, was another of those computer lovers who had worked on their own home computers for years before the arrival of the Altair, and who suddenly found themselves joined by hundreds of like-minded hobbyists once computing became somewhat more accessible. Libes’s club grew to 1,600 members by the early 1980s, had a newsletter and software library, sponsored the annual Trenton Computer Festival, and is likely the only organization from the hobby computer years other than Apple and Microsoft to still survive today.[7] The Chicago Area Computer Hobbyist Exchange attracted several hundred members to its first meeting at Northwestern University in the summer of 1975.
Like many of the larger clubs, it organized information exchange around “special interest groups” for each brand of computer (Digital Group, IMSAI, Altair, etc.). The club also gave birth to one of the most significant novel software applications to emerge from the personal computer hobby, the bulletin board system—we will have more to say on that later in this series.[8]

The most ambitious—one might say hubristic—of the clubs was the Southern California Computer Society (SCCS) of Los Angeles, founded in Don Tarbell’s apartment in June of 1975. Within the year the club could boast a glossy magazine (in contrast to the cheap newsletters of most clubs) called Interface, plans to develop a public computer center, and—in answer to the challenge of Micro-Soft BASIC—ideas about distributing their own royalty-free program library, including “’branch’ repositories that would reproduce and distribute on a local basis.”[9] Not content with a regional purview, the leadership also encouraged the incorporation of far-flung club chapters into their organization; in that spirit, they changed their name in early 1977 to the International Computer Society. Several chapters opened in California, and more across the U.S., from Minnesota to Virginia, but interest in SCCS/ICS chapters could be found as far away as Mexico City, Japan, and New Zealand. Across all of these chapters, the group accumulated about 8,000 members.[10]

The whole project, however, ran atop a rickety foundation of amateur volunteer work, and fell apart under its own weight. First came the breakdown in the relationship between the club and the publisher of Interface, Bob Jones. Whether frustrated with the club’s failure to deliver articles to fill the magazine (his version), or greedy to make more money as a for-profit enterprise (the club’s version), Jones broke away to create Interface Age, leaving SCCS scrambling to start up its own replacement magazine. Expensive lawsuits flew in both directions. Then came the mismanagement of the club’s group buy program: intended to save members money by pooling their purchases into a large-scale order with volume discounts, it instead lost thousands of members’ dollars to a scammer: “a vendor,” as one wry commenter put it, “who never vended” (the malefactor traded under the moniker of “Colonel Winthrop”).[11]

The December 1976 issues of SCCS Interface and Interface Age. Which is authentic, and which the impostor?

More lawsuits ensued. Squeezed by money troubles, the club leadership raised dues to $15 annually, and sent out a plea for early renewal and prepayment of multiple years’ dues. The club magazine missed several issues in 1977, then ceased publication in September. The ICS sputtered on into 1978 (Gordon French of Processor Technology announced his candidacy for the club presidency in March), then disappeared from the historical record.[12]

Whatever the specific historical accidents that brought down SCCS, the general project—a grand non-profit network that would provide software, group buying programs, and other forms of support to its members—was doomed by larger historical forces. Though many clubs survived into the 1980s or beyond, they waned in significance with the maturing of commercial software and the turn of personal computer sellers away from hobbyists and towards the larger and more lucrative consumer and business markets.
Newer computer products no longer required access to secret lore to figure out what to do with them, and most buyers expected to get any support they did need from a retailer or vendor, not to rely on mutual support networks of other buyers. One-to-one commercial relations between buyer and seller became more common than the many-to-many communal webs of the hobby era.

The Retailers

The first buyers of the Altair could not find it in any shop. Every transaction occurred via a check sent to MITS, sight unseen, in the hopes of receiving a computer in exchange. This way of doing business suited the hardcore enthusiast just fine, but anyone with uncertainty about the product—whether they wanted a computer at all, which model was best, how much memory or other accessories they needed—was unlikely to bite. It had disadvantages for the manufacturer, too. Every transaction incurred overhead for payment processing and shipping, and demand was uncertain and unpredictable week to week and month to month. Without any certainty about how many buyers would send in checks next month, the manufacturer had to scale up carefully or risk overcommitting and going bust.

Retail computer shops would alleviate the problems of both sides of the market. For buyers, they provided the opportunity to see, touch, and try out various computer models, and to get advice from knowledgeable salespeople. For sellers, they offered larger, more predictable orders, improving their cash flow and reducing the overhead of managing direct sales. The very first computer shop appeared around the same time that the clubs began spreading, in the summer of 1975. But shops did not open in large numbers until 1976, after the hardcore enthusiasts had primed the pump for further sales to those who had seen or heard about the computers being purchased by their friends or co-workers.

The earliest documented computer shop, Dick Heiser’s Computer Store, opened in July 1975 in a 1,000-square-foot storefront on Pico Boulevard in West Los Angeles. Heiser had attended the very first SCCS meeting in Don Tarbell’s apartment, and, seeing the level of excitement about Altair, signed up to become the first licensed Altair dealer. Paul Terrell’s Byte Shop followed later in the year in Mountain View, California. In March of 1976, Stan Veit’s Computer Mart opened on Madison Avenue in New York City and Roy Borrill’s Data Domain in Bloomington, Indiana (home to Indiana University). Within a year, stores had sprouted across the United States like spring weeds: five hundred nationwide by July 1977.[13]

Paul Terrell’s Byte Shop at 1063 El Camino Real in Mountain View.

Ed Roberts tried to enforce an exclusive license on Altair dealers, based on the car dealership franchise model. But the industry was too fast-moving and MITS too cash- and capital-strapped to make this workable. Hungry new competitors, from IMSAI to Processor Technology, entered the market constantly with new-and-improved models. Many buyers weren’t satisfied with only Altair offerings, MITS couldn’t supply dealers with enough stock to satisfy those who were, and it undercut even its few loyal dealers by continuing to offer direct sales in order to keep as much cash as possible flowing in. Even Dick Heiser, founder of the original Los Angeles Computer Store, broke ties with MITS in late 1977, unable to sustain an Altair-only partnership.[14]

Dick Heiser with a customer at The Computer Store in Los Angeles in 1977.
Not only is the teen here playing a Star Trek game, but a picture of the ubiquitous starship Enterprise can be seen hanging in the background. [Photo by George Birch, from Benj Edwards, “Inside Computer Stores of the 1970s and 1980s,” July 13, 2022]

Given the number of competing computer makers, retailers ultimately had the stronger position in the relationship. Manufacturers who could satisfy the stores’ desire for reliable delivery of stock and robust service and customer support thrived, while the others withered.[15] But independent dealers faced competition of their own. Chain stores could extract larger volume discounts from manufacturers and build up regional or even national brand recognition. Byte Shop, for example, expanded to fifty locations by March 1978. The most successful chain was ComputerLand, run by the same Bill Millard who had founded IMSAI. Though he later claimed everything was “clean and appropriate,” Millard clearly extracted money and employee time from the declining IMSAI in order to get his new enterprise off the ground. As the company’s chronicler put it, “There was magic in ComputerLand. Started on just Millard’s $10,000 personal investment, losing $169,000 in its maiden year, the fledgling company required no venture capital or bank loans to get off the ground.” Some small dealers, such as Veit’s Computer Mart, responded by forming a confederacy of independent dealers under a shared front called “XYZ Corporation” that they could use to buy computers with volume discounts.[16]

A ComputerLand ad from the February 1978 issue of BYTE. Note that the store offers many of the services that most people could have found only in a club in 1975 or 1976: assistance with assembly, repair, and programming.

The Publishers

Just like manufacturers, retailers faced their own cash flow risks: outside the holiday season they might suffer long dry spells without many sales. The early retailers typically avoided inventory risk by simply not carrying any: they took customer orders until they accumulated a batch of ten or so computers from the same manufacturer, then filled all of the orders at once. But a big boon for their cash flow woes came in the form of publications, which sold for much less than a computer but at a much higher and steadier volume—especially the rapidly growing array of computer magazines.[17]

BYTE was both the first of the national computer magazines and the most successful. Launched in New Hampshire in the late summer of 1975, it had built up a circulation of 140,000 issues per month by 1978. It got a head start by cribbing thousands of addresses from the mailing lists of manufacturers such as Nat Wadsworth’s Connecticut-based SCELBI, one of the proto-companies of the pre-Altair era. But, like so much of the hobby computer culture, BYTE also had direct ancestry in the radio electronics hobby.[18] Conflict among the three principal actors has muddled the story of its origins. Wayne Green, publisher of a radio hobby magazine called 73 in Peterborough, New Hampshire, started printing articles about computers in 1974, and found that they were wildly popular. Virginia Londner Green, his ex-wife, worked at the magazine as a business manager. Carl Helmers, a computer enthusiast in Cambridge, Massachusetts, authored and self-published a newsletter about home computers.
One of the Greens learned of Helmers’ newsletter, and one or more of the three came up with the idea of combining Helmers’ computer expertise with the infrastructure and know-how of 73 to launch a professional-quality computer hobby magazine.[19]

The cover of BYTE‘s September 1976 0.01-centennial issue (i.e., one-year anniversary). The phrase “cyber-crud” and the image of a fist on the shirt of the man at center both come from Ted Nelson’s Computer Lib/Dream Machines. Also, these people really liked Star Trek.

Within months, for reasons that remain murky, Wayne Green found himself ousted by his ex-wife, who took over publishing of BYTE, with Helmers as editor. Embittered, Green launched a competing magazine, which he wanted to call Kilobyte, but was forced to change to Kilobaud. Thus began a brief period in which Peterborough, with a population of about 4,000, served as a global hub of computer magazine publishing.[20]

Another magazine, Personal Computing, spun off from MITS in Albuquerque. Dave Bunnell, hired as a technical writer, had become so fond of running the company newsletter Computer Notes that he decided to go into publishing on his own. On the West Coast, in addition to the aforementioned Interface Age, there was also Dr. Dobb’s Journal of Computer Calisthenics and Orthodontia—conceived by Stanford lecturer Dennis Allison and computer evangelist Bob Albrecht (Dennis and Bob making “Dobb”), and edited by the hippie-ish Jim Warren, who drifted into computers after being fired from a position teaching math at a Catholic school for holding (widely-publicized) nude parties.

Bunnell (right) with Bill Gates. This photo probably dates to sometime in the early 1980s.

Computer books also went through a publishing boom. Adam Osborne, born to British parents in Thailand and trained as a chemical engineer, began writing texts for computer companies after losing his job at Shell Oil in California. When the Altair arrived, it shook him with the same sense of revelation that so many other computer lovers had experienced. He whipped out a new book, Introduction to Microcomputers, and published it himself when his previous publishers declined to print it. A highly technical text, full of details on Boolean logic and shift registers, it nonetheless sold 20,000 copies within a year to buyers eager for any information that would help them understand and use their new machines.[21]

The magazines served several roles. They offered up a cornucopia of content to inform and entertain their readers: industry news, software listings, project ideas, product announcements and reviews, and more. One issue of Interface Age even came with a BASIC implementation inscribed onto a vinyl record, ready to be loaded directly into a computer as if from a cassette tape. The magazines also provided manufacturers with a direct advertising and sales channel to thousands of potential buyers—especially important for smaller makers of computers or computer parts and accessories, whose wares were unlikely to be found in your local store. Finally, they became the primary texts through which the culture of the computer hobbyist was established and promulgated.[22]

Each of the magazines had its own distinctive character and personality. BYTE was the magazine for the established hobbyist and tried to cover it all: hardware, software, community news, book reviews, and more. But the hardcore libertarian streak of founding editor Carl Helmers (an avid fan of Ayn Rand) also shone through in the slant of some of its articles.
Wayne Green’s Kilobaud, with its spartan cover (title and table of contents only), appealed especially to those with an interest in starting a business to make money off their interest in computers. The short-lived ROM spoke to the humanist hobbyist, offering longer reports and think-pieces. Dr. Dobb’s had an amateur, free-wheeling aesthetic and tone not far removed from an underground newsletter. In keeping with its origins as a vehicle to publish Tiny BASIC (a free alternative to Microsoft’s BASIC), it focused on software listings. Creative Computing also had a software bent, but as a pre-Altair magazine designed to target users of BASIC in schools and universities, it took a more lighthearted and less technical tone, while Bunnell’s Personal Computing opened its arms to the beginner, with the message that computing was for everyone.[23]

The Mythology of the Microcomputer

Running through many of these early publications can be found a common narrative, a mythology of the microcomputer. To dramatize it: Until recently, darkness lay over the world of computing. Computers, a font of intellectual power, had served the interests only of the elite few. They lay solely in the hands of large corporate and government bureaucracies. Worse yet, even within those organizations, an inner circle of priests mediated access to the machine: the ordinary layperson could not be allowed to approach it. Then came the computer hobbyist. A Prometheus, a Martin Luther, and a Thomas Jefferson all wrapped into one, he ripped the computer and the knowledge of how to use it from the hands of the priests, sharing freedom and power with the masses.

The “priesthood” metaphor came from Ted Nelson’s 1974 book, Computer Lib/Dream Machines, but became a powerful means for the post-Altair hobbyist to define himself against what came before. The imagery came to BYTE magazine in an October 1976 article by Mike Wilbur and David Fylstra:

The movement towards personalized and individualized computing is an important threat to the aura of mystery that has surrounded the computer for its entire history. Until now, computers were understood by only a select few who were revered almost as befitted the status of priesthood.[24]

In this cartoon from Wilbur and Fylstra’s article on the “computer priesthood,” the sinister “HAL” (aka IBM) finds himself chagrined by the spread of hobby computerists.

BYTE editor Carl Helmers made the historical connection with the Enlightenment explicit:

Personal computing as practiced by large numbers of people will help end the concentration of apparent power in the “in” group of programmers and technicians, just as the enlightenment and renaissance in Europe brought about a much wider understanding beginning in the 14th century.[25]

The notion that computing had been jealously guarded by the powerful and kept away from the people can be found as early as June 1975, in the pages of the Homebrew Computer Club newsletter. In the words of club co-founder Fred Moore:

The evidence is overwhelming the people want computers… Why did the Big Companies miss this market? They were busy selling overpriced machines to each other (and the government and military). They don’t want to sell directly to the public.[26]
In the first collected volume of Dr. Dobb’s Journal, editor Jim Warren sounded the same theme of a transition from exclusivity to democracy in more eloquent language:

…I slowly come to believe that the massive information processing power which has traditionally been available only to the rich and powerful in government and large corporations will truly become available to the general public. And, I see that as having a tremendous democratizing potential, for most assuredly, information–ability to organize and process it–is power. …This is a new and different kind of frontier. We are part of the small cadre of frontiersmen who are exploring this new frontier.[27]

Personal Computing editor Dave Bunnell further emphasized the potential of the computer as a political weapon against entrenched bureaucracy:

…personal computers have already proliferated beyond most government regulation. People already have them, just like (pardon the analogy) people already have hand guns. If you have a computer, use it. It is your equalizer. It is a way to organize and fight back against the impersonal institutions and the catch-22 regulations of modern society.[28]

The journalists and social scientists who began to write the first studies of the personal computer in the mid-1980s lapped up this narrative, which provided a heroic framing for the protagonists of their stories. They gave it new life and a much broader audience in books like Silicon Valley Fever (“Until the mid-1970s when the microcomputer burst on the American scene, computers were owned and operated by the establishment–government, big corporations, and other large institutions”) and Fire in the Valley (“Programmers, technicians, and engineers who worked with large computers all had the feeling of being ‘locked out’ of the machine room… there also developed a ‘computer priesthood’… The Altair from MITS breached the machine room door…”)[29]

This way of telling the history of the hobby computer gave deeper meaning to a pursuit that looked frivolous on the surface: paying thousands of dollars for a machine to play Star Trek. And, like most myths, it contained elements of truth. There was a large installed base of batch-processing systems, surrounded by a contingent of programmers denied direct access to the machine. Between the two there did stand a group of technicians whose relation to the computer was not unlike the relation of the pre-Vatican II priest to the Eucharist. But in promoting this myth, the computer hobbyists denied their own parentage, obscuring the time-sharing and minicomputer cultures that had made the hobby computer possible and from which it had borrowed most of its ideas. The Altair was not an ex nihilo response to an oppressive IBM batch-processing culture that had made access to computers impossible. The announcement of Altair had called it the “world’s first minicomputer kit”: it was the fulfillment of the dream of owning your own minicomputer, a type of computer most of its buyers had already used. It could not have been successful if thousands of people hadn’t already gotten hooked on the experience of interacting directly with a time-sharing system or minicomputer.
This self-confident hobby computer culture, however—with its clubs, its local shops, its magazines, and its myths—would soon be subsumed by a larger phenomenon. From this point forward, no longer will nearly every major character in the story of the personal computer have a background in hobby electronics or ham radio. No longer will nearly all the computer makers and buyers alike be computer lovers who found their passion on mainframe, minicomputer, or time-sharing systems. In 1977, the personal computer entered a new phase of growth, led by a new class of businessmen who targeted the mass market.

Microcomputers – The First Wave: Responding to Altair

[This post is part of “A Bicycle for the Mind.” The complete series can be found here.]

Don Tarbell: A Life in Personal Computing

In August 1968, Stephen Gray, sole proprietor of the Amateur Computer Society (ACS), published a letter in the society newsletter from an enthusiast in Huntsville, Alabama named Don Tarbell. To help other would-be owners of home-built computers, Tarbell offered a mounting board for integrated circuits for sale for $8 from his own hobby-entrepreneur company, Advanced Digital Design. Tarbell worked for Sperry Rand on projects for NASA’s Marshall Space Flight Center, but had gotten hooked on computers through coursework at the University of Alabama at Huntsville, and found the ACS through a contact at IBM.[1]

Over the ensuing years, integrated circuits became far cheaper and easier to come by, and building a real home computer on one’s own thus became far more feasible (though it remained a daunting challenge, demanding a wide range of hardware and software skills). In June 1972, Tarbell had mastered enough of those skills to report to the ACS Newsletter that he (at last) had a working computer system, with an 8-bit processor built from integrated circuits, four thousand bytes of memory, a text editor and a calculator program, a Teletype for input and output, and an eight-track tape interface for long-term storage. Not long after this report to the ACS, Tarbell decamped from Alabama and moved to the Los Angeles area to work for Hughes Aircraft.[2]

Don Tarbell with his home-built computer system [Kilobaud: The Small Computer Magazine (May 1977), 132].

Three years after that, in 1975, the arrival of the Altair 8800 kit announced that anyone with the skills to assemble electronics could have the power of a minicomputer in their own home, and thousands heeded the call. A group of 150 of these personal computer hobbyists met in the commons of the apartment complex where Tarbell lived. They had come on Father’s Day for the inaugural meeting of the Southern California Computer Society (SCCS). Half of the participants already owned Altairs. Tarbell took on the position of secretary for the new society, and served on the board of directors. Within a few months, SCCS began producing its own magazine with a full editorial staff, a far more sophisticated operation than the old hand-typed ACS Newsletter; Tarbell eventually became one of its associate editors.[3]

But an Altair kit by itself was far from a complete computer system like the one Tarbell had built back in 1972. It had a piddling 256 bytes of memory, and no devices for reading or writing data other than lights and switches. Dozens of hobbyists founded their own companies to sell other computer buffs the additional equipment that would answer the deficiencies of their newly-purchased Altairs. Don Tarbell was one of them.

Among the major problems was the inability to permanently store or load programs and data. Once you shut off the computer, everything you had entered into it was lost. A standard Teletype terminal came equipped with a paper tape punch and reader, but even a heavily used Teletype could cost $1,000. In February 1976, Tarbell offered a much simpler and cheaper solution, the Tarbell cassette interface, a board that would slot into the Altair case and connect the computer to an ordinary cassette recorder, writing or reading data to or from the magnetic tape.
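The general technique behind such interfaces was to render each bit of a byte as a burst of audio tone, with one frequency standing for a 0 and another for a 1, so that digital data could survive a trip through ordinary audio equipment. Here is a minimal sketch of that idea in Python; the frequencies, baud rate, and framing are illustrative stand-ins, not the actual Tarbell encoding (which used its own, faster scheme).

```python
# Sketch of frequency-shift-keyed (FSK) cassette encoding.
# All parameters are illustrative, not Tarbell's real format.
import math
import struct
import wave

SAMPLE_RATE = 44100   # audio samples per second
BAUD = 300            # bits per second (illustrative)
FREQ_ZERO = 1200.0    # tone representing a 0 bit, in Hz (illustrative)
FREQ_ONE = 2400.0     # tone representing a 1 bit, in Hz (illustrative)

def byte_to_bits(b):
    """Yield the eight bits of a byte, least significant first."""
    for i in range(8):
        yield (b >> i) & 1

def encode(data):
    """Turn a byte string into a list of 16-bit audio samples."""
    samples = []
    samples_per_bit = SAMPLE_RATE // BAUD
    phase = 0.0
    for b in data:
        for bit in byte_to_bits(b):
            freq = FREQ_ONE if bit else FREQ_ZERO
            for _ in range(samples_per_bit):
                samples.append(int(20000 * math.sin(phase)))
                phase += 2 * math.pi * freq / SAMPLE_RATE
    return samples

def write_wav(path, samples):
    """Write samples as a mono 16-bit WAV file, playable into a recorder."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))

write_wav("program.wav", encode(b"10 PRINT \"HELLO\"\n"))
```

Played out of a computer and recorded onto tape, the tones could later be fed back in and decoded into the original bytes by the reverse process.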
Not only was a cassette machine much cheaper than a Teletype; cassettes were also more durable than paper, could store more data (up to 2200 bits per inch with Tarbell’s controller), and could be rewritten many times. Tarbell’s board sold for $150 assembled, or $100 as a kit. He later branched out into floppy disk controllers and an interpreter for the BASIC computer language, and became a minor celebrity of the growing microcomputer scene.[4]

Tarbell’s story offers a microcosm of the transition of personal computers, over the course of the 1970s, from an obscure niche hobby to a national industry. Like Hugo Gernsback in radio half a century before, home-computer tinkerers found themselves new roles in a growing hobby business as community-builders, publishers, and small-scale manufacturers. Like Tarbell, the first wave of these entrepreneurs responded directly to the Altair, offering supplemental hardware to offset its weaknesses or offering a more reliable or more capable hobby computer.

The First Wave: Responding to Altair

The Micro Instrumentation and Telemetry Systems (MITS) Altair came with a lot of potential, but most of it lay unrealized in the basic kit MITS shipped out. This was partly intentional: the Altair sold on the basis of its exceptionally low price (less than $500), and it simply couldn’t remain so cheap if it had all the features of a full-fledged minicomputer system. Other deficiencies arose by accident, out of the amateurish nature of MITS. The good timing and negotiating skills of Ed Roberts, the company’s owner, had put him at the spearhead of the hobby computer revolution, but no one at his company had exceptional talent in electronics or product design. The Altair took hours to assemble, and the assembled machines often didn’t work. Follow-up accessories came out slowly as MITS technicians struggled to get them working. Tarbell’s cassette interface succeeded because it performed faster and more reliably than MITS’ equivalent. The most urgent need of the hobbyist, other than easier input and output, was additional memory beyond the scanty 256 bytes included with the base kit: far from enough to run a meaningful program, like a BASIC interpreter. In the spring of 1975, MITS started shipping a 4096-byte (4K) board designed by Roberts, but these boards simply didn’t work.[5]

Unsurprisingly, other hobby-entrepreneurs quickly stepped up to fill the gaps. Several of them came from the most famous of the Altair-inspired hobby communities, the Homebrew Computer Club, which met in Silicon Valley and attracted attendees from around the Bay Area. Processor Technology was founded in Berkeley by Homebrew regular and electronics enthusiast Bob Marsh and his reclusive partner, Gary Ingram. In the spring of 1975, they began offering a 4K memory board for the Altair that actually worked. Later, the company came out with its own tape controller and a display board they called the VDM-1, which would turn an Altair into a TV Typewriter.[6]

MITS’ 4K memory board compared to Processor Technology’s. Even without knowing anything about hardware design, it’s easy to see how sloppy the former is compared to the latter. [s100computers.com]

Only one “authorized” Altair board maker existed, Cromemco, also located in the Bay Area.
Cromemco founders Harry Garland and Roger Melen had met as Ph.D. students in electrical engineering at Stanford (and named their company after their dormitory, Crothers Memorial). They contributed articles to Popular Electronics regularly, and found out about the Altair while visiting the magazine’s offices in New York. They originally intended to build an interface board for the Altair that could read data from their “Cyclops” digital camera design. Despite the early partnership, no Cromemco board saw the light of day until 1976. Their slow start notwithstanding, Garland and Melen created two products of significance to MITS’ business and to the future of personal computing: the “Dazzler” graphics board and the “Bytesaver” read-only memory (ROM) board. Unlike the TV Typewriter or the VDM-1, which could display only text, the Dazzler could paint arbitrary pixels onto the screen from an eight-color palette (though only at a resolution of 64 x 64, or up to 128 x 128 in monochrome mode). Less sexy but equally significant, the Bytesaver stored a program that would be immediately loaded into the Altair’s memory on power-up; prior to that, an Altair could do nothing until basic control instructions were keyed in manually to bootstrap it (instructing it, for example, to load another program from paper tape).[7]

A 1976 ad for the Cromemco Dazzler [Byte (April 1976), 7]

Roberts bristled at the competition from rival card makers. But more aggravating still were the rival computer makers cranking out Altair knock-offs. In 1974, Robert Suding and Dick Bemis had launched the Digital Group out of Denver to support the Micro-8. After the Altair came out, they decided to make their own, superior computer; Suding happily quit his steady but dull job at IBM to serve as the Woz to Bemis’ Jobs, avant la lettre. Digital Group computers came complete with an eight-kilobyte memory board, a cassette tape controller, and a ROM chip that could boot a program directly from tape. They also had a processor board independent of the backplane into which expansion cards slotted, which meant you could upgrade your processor without replacing any of your other boards. In short, they offered a computer hobbyist’s dream. The catch came in the form of poor quality control and very long waits for delivery, after paying cash up front.[8]

Other would-be Altair-killers entered the market from around the country in 1975. Mike Wise, of Bountiful, Utah, created the Sphere, the first hobby computer with an integrated keyboard and display—although production was so limited that, decades later, vintage computer collectors would doubt whether any were actually built. The SWTPC 6800 came out of San Antonio, built by the same Southwest Technical Products Corporation that had sold parts for Don Lancaster’s TV Typewriter. A pair of Purdue graduate students in West Lafayette, Indiana wrote software for the SWTPC under the moniker of Technical Systems Consultants. A few hundred miles to the east, Ohio Scientific of Hudson, Ohio released a Microcomputer Trainer Board that put it, too, on the hobbyist map.[9]

The SWTPC 6800. The bluntly rectangular cabinet design with the computer’s name prominent on the faceplate is typical of this era of microcomputers. [Michael Holley]

But the real onslaught came in 1976. By that time hobbyists with entrepreneurial ambition had had time to fully absorb the lessons of the Altair, to hone their own skills at computer building, and to adopt new chips like the MOS Technology 6502 or Zilog Z80. The most significant releases of the year were the Apple Computer, the MOS Technology KIM-1, the IMSAI 8080, the Processor Technology Sol-20, and, in the unkindest cut for Roberts, the Z-1 from former ally Cromemco.
Most of these computer makers solved the upgrade problem in a more blunt fashion than the Digital Group’s sophisticated swappable boards: they simply copied the card interface protocol (known as the “bus”) of the Altair. Already own an Altair? Buy a Z-1 or Sol-20 and you could put all of the expansion cards from your old computer into the new one. Cromemco founder Roger Melen encouraged the community to disassociate this interface from MITS by calling it the S-100 bus, not the Altair bus—another twist of the knife.[10]

Almost all of these businesses (excepting IMSAI, of whom more shortly) continued to target electronics hobbyists exclusively as their customers. The Z-1 looked just like an upmarket Altair, with a front panel now adorned with slightly nicer switches and lights. The Apple Computer and KIM-1 offered no frills at all, just a bare green printed circuit board festooned with chips and other components. Processor Technology’s Sol-20, inflected with Lee Felsenstein’s vision of a “Tom Swift” terminal for the masses, sported a handsome blue case with integrated keyboard and walnut side panels. This represented substantial progress in usability compared to the company’s first memory boards (which came only as a kit the buyer had to assemble), but the Sol-20 was still marketed via Popular Electronics as a piece of hobby equipment.[11]

Software Entrepreneurs

In early 1975, a computer hobbyist who wanted a minicomputer-like system of their own had only one low-price option: buy an Altair; then build, or wait for, or scrounge, the additional components that would make it into a functional system. Eighteen months later, abundance had replaced scarcity in the computer hobby hardware market, with many makes, models, and accessories to choose from. But what about software? A working computer consisted of metal, semiconductor, and plastic, but also a certain quantity of “thought-stuff,” program text that would tell the computer what, exactly, to compute.

A large proportion of the hobby community had a minicomputer background. They were accustomed to writing some software themselves and getting the rest (compilers, debuggers, math libraries, games, and more) from fellow users, often through organized community exchanges like the DEC user group program library. So they expected to get microcomputer programs in the same way, through free exchange with fellow hobbyists. Even in the mainframe world, software was rarely sold independently of a hardware system prior to the 1970s.[12]

It came as a shock, then, when, immediately on the heels of Altair, the first software entrepreneurs appeared. Paul Allen and Bill Gates—especially Gates—were roughly a decade younger than most of the early hardware entrepreneurs, at just 22 and 19, respectively. Compare Ed Roberts of MITS at 33; Lee Felsenstein of Processor Technology, 29; Harry Garland of Cromemco, 28; Chuck Peddle of MOS Technology and Robert Suding of the Digital Group, both 37. These two young men from Seattle had caught the computer bug at the keyboard of their private school’s time-sharing terminal; they had finagled some computer time at a Seattle time-sharing company in exchange for finding bugs, but had no serious work experience that would have immersed them in the practices of the minicomputer world. For all their youth, though, Gates and Allen brimmed with ambition, and when they saw the Altair on the cover of Popular Electronics, they saw a business opportunity.
Of course, everyone knew that a computer would need software to be useful, but it was not obvious that anyone would pay for that software. Gates and Allen, having not yet grown accustomed to getting software for free, had an easier time imagining that people would. They also knew that the first program any self-respecting hobbyist would want to get their hands on was a BASIC interpreter, so that they could run the huge existing library of BASIC software (especially games) and begin writing programs of their own.

Gates and Allen in 1981. [MOHAI, King County News Photograph Collection, 2007.45.001.30.02, photo by Chuck Hallas]

Like Cromemco, Gates and Allen started out as partners with MITS—within days of seeing the Altair cover, they contacted Ed Roberts promising a BASIC interpreter. They delivered in March, despite having no Altair, nor even an 8080 processor—they developed the program on a simulator written by Allen for the DEC PDP-10 at Harvard, where Gates was enrolled as a sophomore. In another debt to DEC, Gates based the syntax on Digital’s popular BASIC-PLUS. Allen moved to Albuquerque soon after, to head a new software division at MITS. Gates eventually followed to nurture their independent software venture, Micro-Soft, though he did not completely abandon Harvard until 1977.[13]

Many hobbyists balked at the culture shock of paying for software, and freely exchanged paper tapes of Altair BASIC in defiance of Micro-Soft and MITS, prompting Gates’ famous “Open Letter to Hobbyists” in February 1976. There he made the case that software writers deserved compensation for their work just as much as hardware builders did, prompting a flurry of amici curiae from various corners of the hobby (with far more weighing in for the defendants than the plaintiff). But, though this controversy is famous for its retrospective echoes of later debates over free software, Gates and Allen rendered the issue irrelevant almost immediately by switching to a different business model. They began licensing BASIC to computer manufacturers for a flat fee, instead of a royalty on each copy sold. MITS paid $31,200, for example, for the BASIC for a new Altair model using the Motorola 6800 processor. The licensee could choose to charge for the software or not (Micro-Soft didn’t care), but they typically didn’t. This approach bypassed the cultural conflict altogether; BASIC interpreters and other systems software became a bullet point in a list of advertised features for a given piece of hardware rather than a separate item in the catalog.[14]

Having a BASIC would let you run programs on your computer; but the other crucial linchpin for an easy-to-use microcomputer system was a program to manage your other programs and data. As faster and denser magnetic storage supplanted paper tape, computer users needed a way to quickly and easily move files between memory and their cassettes or floppy disks. By far the most popular tool for this purpose was CP/M, the Control Program for Microcomputers.

CP/M was the creation of Gary Kildall, who got his hands on his first microcomputer directly from the source: Intel. Kildall grew up in Seattle and studied computer science at the University of Washington, where he had a brief run-in with Gates and Allen, at the time teenagers who worked at a company part-owned by one of his professors, the Computer Center Corporation, in exchange for free computer time.
Drafted into the army, Kildall used his connections at the University and his father’s position as a merchant marine instructor to get posted instead to naval officer training, and then to a position as a math and computer science teacher at the Naval Postgraduate School in Monterey. After completing his obligations to the Navy in 1972, he stayed on as a civilian instructor.[15]

Gary Kildall with his wife Dorothy, in 1978. [Computer History Museum]

That same year, Kildall learned about the Intel 4004, and, like so many other computer enthusiasts, became enchanted with the idea of a computer of his own. The most obvious route was to get his hands on Intel’s development kit for the 4004, the SIM4-01, intended to be used by customers to write software for the new chip. So Kildall began talking to people at Intel, and then consulting at Intel, and in exchange for software written for Intel, managed to acquire microprocessor development kits for the 4004, and then later for the 8008 and 8080 processors.[16]

The most significant piece of software Kildall provided to Intel was PL/M, the Programming Language for Microprocessors, which allowed developers to express code in a higher-level syntax that would then be compiled down to the 4004 (or 8008, or 8080) machine language. But you could not write PL/M programs on a microcomputer, which lacked the necessary mass storage interface and software tools; clients were expected to write programs on a minicomputer and then flash the final result onto a ROM chip that would power whatever microprocessor application they had in mind (a traffic light controller, for example, or a cash register). What Kildall dreamed of was to “self-host” PL/M: that is, to author PL/M programs on the same computer on which they would run. By 1974 he had assembled everything he needed—an Intellec 8/80 development kit (for the 8080), a used hard drive and Teletype, a disk controller board built by a friend—except for a program that could load and store the PL/M compiler, the code to be compiled, and the output of the compilation. It was for this reason, to complete his own personal quest, that he wrote CP/M.[17]

Only after the fact did he think about selling it, just in time to catch the rising wave of hobby computers. Though Kildall later offered direct sales to users, he began with the same flat-fee license model that Micro-Soft had adopted: he sold the software to Omron, a smart terminal maker, and then to IMSAI for their 8080 computer, each at a fee of $25,000. He incorporated his software business as Intergalactic Digital Research (later just Digital Research) in Pacific Grove, just west of Monterey. Gates visited in 1977 to float the idea of a California merger of the two (relative) giants of microcomputer software, but he and Allen decided to relocate to Seattle instead, leaving behind an intriguing what-if.[18]

A CP/M command line interaction via a Tarbell disk controller, showing all the files on disk “A”. [Computer History Museum]

CP/M soon became the de facto standard operating system for personal computers. Having an operating system made writing application software far easier, because basic routines like reading data from disk could be delegated to system calls instead of being re-written from scratch every time.
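To make that division of labor concrete, here is a minimal sketch in Python. All of the names and interfaces are invented for illustration (CP/M’s real interface was a set of assembly-language entry points, not Python classes); the point is simply that the file bookkeeping gets written once, on top of a thin hardware-specific layer.

```python
# Sketch of OS-mediated disk access: applications ask the OS for files,
# the OS asks a small hardware layer for raw sectors. Names are invented.

class TarbellFloppyBIOS:
    """Hardware-specific layer: knows how to talk to one disk controller.
    A real one would issue I/O port commands; we fake the disk in memory."""
    SECTOR_SIZE = 128  # CP/M used 128-byte sectors

    def __init__(self):
        self.sectors = {}  # (track, sector) -> bytes

    def read_sector(self, track, sector):
        return self.sectors.get((track, sector), bytes(self.SECTOR_SIZE))

    def write_sector(self, track, sector, data):
        self.sectors[(track, sector)] = data[: self.SECTOR_SIZE]

class OperatingSystem:
    """Hardware-independent layer: written once, reused on every machine."""

    def __init__(self, bios):
        self.bios = bios        # swap in a different hardware layer to port the OS
        self.directory = {}     # filename -> list of (track, sector)
        self.next_free = (0, 0)

    def _alloc(self):
        # Hand out sectors in order, 26 per track (as on an 8-inch floppy).
        track, sector = self.next_free
        self.next_free = (track + (sector + 1) // 26, (sector + 1) % 26)
        return (track, sector)

    def write_file(self, name, data):
        locations = []
        for i in range(0, len(data), self.bios.SECTOR_SIZE):
            loc = self._alloc()
            self.bios.write_sector(loc[0], loc[1], data[i : i + self.bios.SECTOR_SIZE])
            locations.append(loc)
        self.directory[name] = locations

    def read_file(self, name):
        return b"".join(self.bios.read_sector(t, s) for (t, s) in self.directory[name])

os_ = OperatingSystem(TarbellFloppyBIOS())
os_.write_file("LEDGER.BAS", b"10 REM general ledger...")
print(os_.read_file("LEDGER.BAS"))
```

Note that OperatingSystem never touches the hardware directly: only the bottom layer would change from one disk controller to the next, and everything above it carries over unchanged.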
CP/M in particular stood out for its quality in an often-slapdash hobby industry, and could easily be adapted to new platforms because of Kildall’s innovation of the Basic Input/Output System (BIOS), which acted as just such a translation layer between the operating system and the hardware. But what bootstrapped its initial popularity was the IMSAI deal, which attached Digital Research to the rising star in what up to that point had been Altair’s market to lose.[19]

Getting Serious?

There was one company thinking different about the microcomputer market in 1975: IMSAI, headquartered in San Leandro, California, intended to sell business machines. It had the right name for it, an acronym stuffed wall-to-wall with managerial blather: Information Management Sciences Associates, Inc. William (Bill) Millard had been an IBM sales rep, then worked for the city of San Francisco setting up computer systems, and founded IMS Associates to sell his services to companies who needed similar IT help.

Bill Millard circa 1983. Provenance unknown.

Despite the anodyne name he gave to his company, Millard, too, felt the influence of the ideologies of personal liberation that seemed to rise from San Francisco Bay like a fog. But unlike a Lee Felsenstein or a Bob Albrecht, he thought mainly of liberating himself, not others: he was a devotee of Erhard Seminars Training, or est, a self-help seminar which promised paying customers access to an understanding of the world-changing power of their will in just two weekends; according to Erhard, “If you keep saying it / the way it really is / eventually your word / is law in the universe.”[20]

Neither Millard nor either of his technical employees (part-time programmer Bruce Van Natta and physicist-cum-electrical engineer Joseph Killian) had any prior interest or experience in home computers; they stumbled into the business almost by accident. Their primary contract, to build a computer networking hub for car dealerships based on a DEC computer, had begun spiraling towards failure. Casting about for some solution, they latched onto the news of the Altair’s success: here was an inexpensive alternative to the DEC. When MITS refused to deliver on their timetable, they decided, in the late summer of 1975, to clone the Altair instead. And, to get cash flowing to pay their expenses and loans, they would sell their clone directly to consumers as well, while working to complete the big contract. When orders from hobbyists began to pour in, they abandoned the automotive scheme altogether to go all-in on their Altair clone.[21]

The IMSAI 8080. It closely resembles the Altair, but with cleaner design and higher-quality front-panel components. [Morn]

The IMSAI 8080 began shipping in December 1975, at a kit price of $439. Millard cultivated an est culture at the company: employees with the “training” were favored, and total commitment to the work was expected. Some employees considered Millard a “genius or a prophet,” and spouses and children of employees showed up after school to help assemble computers. By April, they were doing hundreds of thousands of dollars per month in sales.
IMSAI was board-compatible with MITS but made improvements that stood out to the connoisseur: a more efficient internal layout, a cleaner and more professional exterior, and a seriously beefed-up power supply that could support a case fully loaded with expansion boards. These advantages appealed enough to buyers to make it Altair’s top competitor in 1976.[22]

But what most set IMSAI apart in 1976 was the fact that it was led not by hobby entrepreneurs, but by a businessman who wanted to build business machines. An advertisement in the May 1976 issue of BYTE magazine described the IMSAI as a “rugged, reliable, industrial computer with high commercial-type performance,” as opposed to “Altair’s hobbyist kit” (the IMSAI was, of course, also sold as a kit), along with obscure allusions to expensive IMSAI business products (Hypercube and Intelligent Disk) that never materialized. This was an odd pretense to put on while advertising in BYTE—a publication featuring articles such as “More to Blinking Lights than Meets the Eye” and “Save Money Using Mini Wire Wrap.” This is not to say that IMSAI (or its contemporaries) had no commercial customers or applications. Alan Cooper, known later for creating Visual Basic, wrote a basic accounting program for the IMSAI in 1976 called General Ledger. But these applications remained a small minority among the mass of buyers who were computer-curious.[23]

In 1977, IMSAI began advertising a “megabyte micro,” another fantasy. Such a powerful and expensive machine could sell in the higher end of the minicomputer market, but not to IMSAI’s actual buyers, hobbyists who were buying kits for less than a thousand dollars out of retail storefronts. IMSAI tried again to attract serious business customers with its second major product, the all-in-one VDP-80, which began shipping in late 1977 with an integrated keyboard, display, and dual disk drives, but it was plagued with quality defects, and lacked any application software for its would-be business customers to use.[24] Those customers did arrive in large numbers in good time, but only after a second wave of all-in-one computers appeared, aimed at the mass market, and after the emergence of useful application software to run on them.

Internet Ascendant, Part 2: Going Private and Going Public

In the summer of 1986, Senator Al Gore, Jr., of Tennessee introduced an amendment to the Congressional act that authorized the budget of the National Science Foundation (NSF). He called for the federal government to study the possibilities for “communications networks for supercomputers at universities and Federal research facilities.” To explain the purpose of this legislation, Gore called on a striking analogy:

One promising technology is the development of fiber optic systems for voice and data transmission. Eventually we will see a system of fiber optic systems being installed nationwide. America’s highways transport people and materials across the country. Federal freeways connect with state highways which connect in turn with county roads and city streets. To transport data and ideas, we will need a telecommunications highway connecting users coast to coast, state to state, city to city. The study required in this amendment will identify the problems and opportunities the nation will face in establishing that highway.[1]

In the following years, Gore and his allies would call for the creation of an “information superhighway”, or, more formally, a national information infrastructure (NII). As he intended, Gore’s analogy to the federal highway system summons to mind a central exchange that would bind together various local and regional networks, letting all American citizens communicate with one another. However, the analogy also misleads – Gore did not propose the creation of a federally-funded and maintained data network. He envisioned that the information superhighway, unlike its concrete and asphalt namesake, would come into being through the action of market forces, within a regulatory framework that would ensure competition, guarantee open, equal access to any service provider (what would later be known as “net neutrality”), and provide subsidies or other mechanisms to ensure universal service to the least fortunate members of society, preventing the emergence of a gap between the information rich and the information poor.[2]

Over the following decade, Congress slowly developed a policy response to the growing importance of computer networks to the American research community, to education, and eventually to society as a whole. Congress’ slow march towards an NII policy, however, could not keep up with the rapidly growing NSFNET, overseen by the neighboring bureaucracy of the executive branch. Despite its reputation for sclerosis, bureaucracy was created exactly because of its capacity, unlike a legislature, to respond to events immediately, without deliberation. And so it happened that, between 1988 and 1993, the NSF crafted the policies that would determine how the Internet became private, and thus went public. It had to deal every year with novel demands and expectations from NSFNET’s users and peer networks. In response, it made decisions on the fly, decisions which rapidly outpaced Congressional plans for guiding the development of an information superhighway. These decisions rested largely in the hands of a single man – Stephen Wolff.

Acceptable Use
Wolff earned a Ph.D. in electrical engineering at Princeton in 1961 (where he would have been a rough contemporary of Bob Kahn), and began what might have been a comfortable academic career, with a post-doctoral stint at Imperial College followed by several years teaching at Johns Hopkins. But then he shifted gears, and took a position at the Ballistic Research Laboratory in Aberdeen, Maryland. He stayed there for most of the 1970s and early 1980s, researching communications and computing systems for the U.S. Army. He introduced Unix into the lab’s offices, and managed Aberdeen’s connection to the ARPANET.[3]

In 1986, the NSF recruited him to manage the NSF’s supercomputing backbone – he was a natural fit, given his experience connecting Army supercomputers to the ARPANET. He became the principal architect of NSFNET’s evolution from that point until his departure in 1994, when he entered the private sector as a manager for Cisco Systems. The original intended function of the net that Wolff was hired to manage had been to connect researchers across the U.S. to NSF-funded supercomputing centers. As we saw last time, however, once Wolff and the other network managers saw how much demand the initial backbone had engendered, they quickly developed a new vision of NSFNET as a communications grid for the entire American research and post-secondary education community.

However, Wolff did not want the government to be in the business of supplying network services on a permanent basis. In his view, the NSF’s role was to prime the pump, creating the initial demand needed to get a commercial networking services sector off the ground. Once that happened, Wolff felt it would be improper for a government entity to be in competition with viable for-profit businesses. So he intended to get the NSF out of the way by privatizing the network: handing over control of the backbone to unsubsidized private entities and letting the market take over.

This was very much in the spirit of the times. Across the Western world, and across most of the political spectrum, government leaders of the 1980s touted privatization and deregulation as the best means to unleash economic growth and innovation after the relative stagnation of the 1970s. As one example among many, around the same time that NSFNET was getting off the ground, the FCC knocked down several decades-old constraints on corporations involved in broadcasting. In 1985, it removed the restriction on owning print and broadcast media in the same locality, and two years later it nullified the fairness doctrine, which had required broadcasters to present multiple views on public-policy debates.

From his post at the NSF, Wolff had several levers at hand for accomplishing his goals. The first lay in the interpretation and enforcement of the network’s acceptable use policy (AUP). In accordance with NSF’s mission, the initial policy for the NSFNET backbone, in effect until June 1990, required all uses of the network to be in support of “scientific research and other scholarly activities.” This is quite restrictive indeed, and would seem to eliminate any possibility of commercial use of the network. But Wolff chose to interpret the policy liberally. Regular mailing list postings about new product releases from a corporation that sold data processing software – was that not in support of scientific research? What about the decision to allow MCI’s email system to connect to the backbone, at the urging of Vint Cerf, who had left government employ to oversee the development of MCI Mail? Wolff rationalized this – and other later interconnections to commercial email systems such as CompuServe’s – as in support of research, by making it possible for researchers to communicate digitally with a wider range of people that they might need to contact in the pursuit of their work.
A stretch, perhaps. But Wolff saw that allowing some commercial traffic on the same infrastructure that was used for public NSF traffic would encourage the private investment needed to support academic and educational use on a permanent basis. Wolff’s strategy of opening the door of NSFNET as far as possible to commercial entities got an assist from Congress in 1992, when Congressman Rick Boucher, who helped oversee the NSF as chair of the Science Subcommittee, sponsored an amendment to the NSF charter which authorized any additional uses of NSFNET that would “tend to increase the overall capabilities of the networks to support such research and education activities.” This was an ex post facto validation of Wolff’s approach to commercial traffic, allowing virtually any activity as long as it produced profits that encouraged more private investment into NSFNET and its peer networks.

Dual-Use Networks

Wolff also fostered the commercial development of networking by supporting the regional networks’ reuse of their networking hardware for commercial traffic. As you may recall, the NSF backbone linked together a variety of not-for-profit regional nets, from NYSERNet in New York to Sesquinet in Texas to BARRNet in northern California. NSF did not directly fund the regional networks, but it did subsidize them indirectly, via the money it provided to labs and universities to offset the costs of their connection to their neighborhood regional net. Several of the regional nets then used this same subsidized infrastructure to spin off a for-profit commercial enterprise, selling network access to the public over the very same wires used for the research and education purposes sponsored by the NSF. Wolff encouraged them to do so, seeing this as yet another way to accelerate the transition of the nation’s research and education infrastructure to private control.

This, too, accorded neatly with the political spirit of the 1980s, which encouraged private enterprise to profit from public largesse, in the expectation that the public would benefit indirectly through economic growth. One can see parallels to the dual-use regional networks in the 1980 Bayh-Dole Act, which defaulted ownership of patents derived from government-funded research to the organization performing the work, not to the government that paid for it.

The most prominent example of dual-use in action was PSINet, a for-profit company initially founded as Performance Systems International in 1988. William Schrader and Martin Schoffstall, respectively a co-founder of NYSERNet and one of its vice presidents, created the company. Schoffstall, a former BBN engineer and co-author of the Simple Network Management Protocol (SNMP) for managing the devices on an IP network, was the key technical leader. Schrader, an ambitious Cornell biology major and MBA who had helped his alma mater set up its supercomputing center and get it connected to NSFNET, provided the business drive. He firmly believed that NYSERNet should be selling service to businesses, not just educational institutions. When the rest of the board disagreed, he quit to found his own company, first contracting with NYSERNet for service, and later raising enough money to acquire its assets.
PSINet thus became one of the earliest commercial internet service providers, while continuing to provide non-profit service to colleges and universities seeking access to the NSFNET backbone.[4]

Wolff’s final source of leverage for encouraging a commercial Internet lay in his role as manager of the contracts with the Merit-IBM-MCI consortium that operated the backbone. The initial impetus for change in this dimension came not from Wolff, however, but from the backbone operators themselves.

A For-Profit Backbone

MCI and its peers in the telecommunications industry had a strong incentive to find or create more demand for computer data communications. They had spent the 1980s upgrading their long-line networks from coaxial cable and microwave – already much higher capacity than the old copper lines – to fiber optic cables. These cables, which transmitted laser light through glass, had tremendous capacity, limited mainly by the technology in the transmitters and receivers on either end, rather than by the cable itself. And that capacity was far from saturated. By the early 1990s, many companies had deployed OC-48 transmission equipment with 2.5 Gbps of capacity, an almost unimaginable figure a decade earlier. An explosion in data traffic would therefore bring in new revenue at very little marginal cost – almost pure profit.[5]

The desire to gain expertise in the coming market in data communications helps explain why MCI was willing to sign on to the NSFNET bid proposed by Merit, which massively undercut the competing bids (at $14 million for five years, versus the $40 million and $25 million proposed by their competitors[6]), and surely implied a short-term financial loss for MCI and IBM. But by 1989, they hoped to start turning a profit from their investment. The existing backbone was approaching the saturation point, carrying 500 million packets a month, a 500% year-over-year increase.[7] So, when the NSF asked Merit to upgrade the backbone from 1.5 Mbps T1 lines to 45 Mbps T3, they took the opportunity to propose to Wolff a new contractual arrangement.

T3 was a new frontier in networking – no prior experience or equipment existed for digital networks of this bandwidth – and so the companies argued that more private investment would be needed, requiring a restructuring that would allow IBM and Merit to share the new infrastructure with for-profit commercial traffic: a dual-use backbone. To achieve this, the consortium would form a new non-profit corporation, Advanced Network & Services, Inc. (ANS), which would supply T3 networking services to the NSF. A subsidiary called ANS CO+RE Systems would sell the same services at a profit to any clients willing to pay. Wolff agreed to this, seeing it as just another step in the transition of the network towards commercial control. Moreover, he feared that continuing to block commercial exploitation of the backbone would lead to a bifurcation of the network, with suppliers like ANS doing an end-run around NSFNET to create their own, separate, commercial Internet.

Up to that point, Wolff’s plan for gradually getting the NSF out of the way had no specific target date or planned milestones. A workshop on the topic held at Harvard in March 1990, in which Wolff and many other early Internet leaders participated, considered a variety of options without laying out any concrete plans.[8] It was ANS’ stratagem that triggered the cascade of events that led directly to the full privatization and commercialization of NSFNET. It began with a backlash.
Despite Wolff’s good intentions, IBM and MCI’s ANS maneuver created a great deal of disgruntlement in the networking community. It became a problem exactly because of the for-profit networks attached to the backbone that Wolff had promoted. So far they had gotten along reasonably with one another, because they all operated as peers on the same terms. But with ANS, a for-profit company held a de facto monopoly on the backbone at the center of the Internet.[9] Moreover, despite Wolff’s efforts to interpret the AUP loosely, ANS chose to interpret it strictly, and refused to interconnect the non-profit portion of the backbone (for NSF traffic) with any of the for-profit networks like PSINet, since that would require a direct mixing of commercial and non-commercial traffic. When this created an uproar, they backpedaled and came up with a new policy, allowing interconnection for a fee based on traffic volume.

PSINet would have none of this. In the summer of 1991, it banded together with two other for-profit Internet service providers – UUNET, which had begun by selling commercial access to Usenet before adding Internet service, and the California Education and Research Federation Network, or CERFNet, operated by General Atomics – to form their own exchange, bypassing the ANS backbone. The Commercial Internet Exchange (CIX) consisted at first of just a single routing center in Washington, D.C., which could transfer traffic among the three networks. They agreed to peer at no charge, regardless of the relative traffic volume, with each network paying the same fee to CIX to operate the router. New routers in Chicago and Silicon Valley soon followed, and other networks looking to avoid ANS’ fees also joined.

Divestiture

Rick Boucher, the Congressman whom we met above as a supporter of NSF commercialization, nonetheless requested an investigation by the Office of the Inspector General into the propriety of Wolff’s actions in the ANS affair. It found NSF’s actions precipitous, but not malicious or corrupt. Nevertheless, Wolff saw that the time had come to divest control of the backbone. With ANS CO+RE and CIX, privatization and commercialization had begun in earnest, but in a way that risked splitting the unitary Internet into multiple disconnected fragments, as CIX and ANS refused to connect with one another. The NSF therefore drafted a plan for a new, privatized network architecture in the summer of 1992, released it for public comment, and finalized it in May of 1993. NSFNET would shut down in the spring of 1995, and its assets would revert to IBM and MCI. The regional networks could continue to operate, with financial support from the NSF gradually phasing out over a four-year period, but they would have to contract with a private ISP for internet access.

But in a world of many competing internet access providers, what would replace the backbone? What mechanism would link these opposed private interests into a cohesive whole? Wolff’s answer was inspired by the exchanges already built by cooperatives like CIX – the NSF would contract out the creation of four Network Access Points (NAPs), routing sites where various vendors could exchange traffic. Having four separate contracts would avoid repeating the ANS controversy, by preventing a monopoly on the points of exchange.
One NAP would reside at the pre-existing, and cheekily named, Metropolitan Area Ethernet East (MAE-East) in Vienna, Virginia, operated by Metropolitan Fiber Systems (MFS). MAE-West, operated by Pacific Bell, was established in San Jose, California; Sprint operated another NAP in Pennsauken, New Jersey, and Ameritech one in Chicago. The transition went smoothly,[10] and the NSF decommissioned the backbone right on schedule, on April 30, 1995.[11]

The Break-up

Though Gore and others often invoked the “information superhighway” as a metaphor for digital networks, there was never serious consideration in Congress of using the federal highway system as a direct policy model. The federal government paid for the building and maintenance of interstate highways in order to provide a robust transportation network for the entire country. But in an era when both major parties took deregulation and privatization for granted as good policy, a state-backed system of networks and information services on the French model of Transpac and Minitel was not up for consideration.[12]

Instead, the most attractive policy model for Congress as it planned for the future of telecommunications was the long-distance market created by the break-up of the Bell System between 1982 and 1984. In 1974, the Justice Department filed suit against AT&T, its first major suit against the organization since the 1950s, alleging that it had engaged in anti-competitive behavior in violation of the Sherman Antitrust Act. Specifically, it accused the company of using its market power to exclude various innovative new businesses from the market – mobile radio operators, data networks, satellite carriers, makers of specialized terminal equipment, and more. The suit thus clearly drew much of its impetus from the disputes ongoing since the early 1960s (described in an earlier installment) between AT&T and the likes of MCI and Carterfone.

When it became clear that the Justice Department meant business, and intended to break the power of AT&T, the company at first sought redress from Congress. John de Butts, chairman and CEO since 1972, attempted to push a “Bell bill” – formally the Consumer Communications Reform Act – through Congress. It would have enshrined into law AT&T’s argument that the benefits of a single, universal telephone network far outweighed any risk of abusive monopoly, risks which in any case the FCC could already effectively check. But the proposal received stiff opposition in the House Subcommittee on Communications, and never reached a vote on the floor of either Congressional chamber. In a change of tactics, in 1979 the board replaced the combative de Butts – who had once openly declared to an audience of state telecommunications regulators the heresy that he opposed competition and espoused monopoly – with the more conciliatory Charles Brown. But it was too late by then to stop the momentum of the antitrust case, and it became increasingly clear to the company’s leadership that they would not prevail. In January 1982, therefore, Brown agreed to a consent decree that would have the presiding judge in the case, Harold Greene, oversee the break-up of the Bell System into its constituent parts.

The various Bell companies that brought copper to the customer’s premises, which generally operated by state (New Jersey Bell, Indiana Bell, and so forth), were carved up into seven blocks called Regional Bell Operating Companies (RBOCs).
All of them remained regulated entities with an effective monopoly over local traffic in their region, but were forbidden from entering other telecom markets. AT&T itself retained the “long lines” division for long-distance traffic. Unlike local phone service, however, the settlement opened this market to free competition from any entrant willing and able to pay the interconnection fees to transfer calls in and out of the RBOCs. A residential customer in Indiana would always have Ameritech as their local telephone company, but could sign up for long-distance service with anyone.

However, splitting apart the local and long-distance markets meant forgoing the subsidies that AT&T had long routed to rural telephone subscribers, under-charging them by over-charging wealthy long-distance users. A sudden spike in rural telephone prices across the nation was not politically tenable, so the deal preserved these transfers via a new organization, the non-profit National Exchange Carrier Association, which collected fees from the long-distance companies and distributed them to the RBOCs.

The new structure worked. Two major competitors entered the market in the 1980s, MCI and Sprint, and cut deeply into AT&T’s market share. Long-distance prices fell rapidly. Though it is arguable how much of this was due to competition per se, as opposed to the advent of ultra-high-bandwidth fiber optic networks, the arrangement was generally seen as a great success for deregulation and a clear argument for the power of market forces to modernize formerly hidebound industries. This market structure, created ad hoc by court fiat but evidently highly successful, provided the template from which Congress drew in the mid-1990s to finally resolve the question of what telecom policy for the Internet era would look like.

Second Time Isn’t The Charm

Prior to the main event, there was one brief preliminary. The High Performance Computing Act of 1991 was important tactically, but not strategically. It advanced no major new policy initiatives. Its primary significance lay in providing additional funding and Congressional backing for what Wolff and the NSF were already doing and intended to keep doing – providing networking services for the research community, subsidizing academic institutions’ connections to NSFNET, and continuing to upgrade the backbone infrastructure.

Then came the accession of the 104th Congress in January 1995. Republicans took control of both the Senate and the House for the first time in forty years, and they came with an agenda to fight crime, cut taxes, shrink and reform government, and uphold moral righteousness. Gore and his allies had long touted universal access as a key component of the National Information Infrastructure, but with this shift in power the prospects for a strong universal service component to telecommunications reform diminished from minimal to none. Instead, the main legislative course would consist of regulatory changes to foster competition in telecommunications and Internet access, with a serving of bowdlerization on the side.

The market conditions looked promising. Circa 1992, the major players in the telecommunications industry were numerous. In the traditional telephone industry there were the seven RBOCs, GTE, and three large long-distance companies – AT&T, MCI, and Sprint – along with many smaller ones.
The new up-and-comers included Internet service providers such as UUNET and PSINet, as well as the IBM/MCI backbone spin-off, ANS; and other companies trying to build out their local fiber networks, such as Metropolitan Fiber Systems (MFS). BBN, the contractor behind ARPANET, had begun to build its own small Internet empire, snapping up some of the regional networks that orbited around NSFNET – Nearnet in New England, BARRNet in the Bay Area, and SURAnet in the southeast of the U.S.

To preserve and expand this competitive landscape would be the primary goal of the 1996 Telecommunications Act, the only major rewrite of communications policy since the Communications Act of 1934. It intended to reshape telecommunications law for the digital age. The regulatory regime established by the original act siloed industries by their physical transmission medium – telephony, broadcast radio and television, cable TV; each in its own box, with its own rules, and generally forbidden to meddle in the others’ business. As we have seen, sometimes regulators even created silos within silos, segregating the long-distance and local telephone markets. This made less and less sense as media of all types were reduced to fungible digital bits, which could be commingled on the same optical fiber, satellite transmission, or Ethernet cable. The intent of the 1996 Act, shared by Democrats and Republicans alike, was to tear down these barriers, these “Berlin Walls of regulation”, as Gore’s own summary of the act put it.13 A complete itemization of the regulatory changes in this doorstopper of a bill is not possible here, but a few examples provide a taste of its character. Among other things it:

- allowed the RBOCs to compete in long-distance telephone markets,
- lifted restrictions forbidding the same entity from owning both broadcasting and cable services,
- axed the rules that prevented concentration of radio station ownership.

The risk, though, of simply removing all regulation, opening the floodgates and letting any entity participate in any market, was to recreate AT&T on an even larger scale: a monopolistic megacorp that would dominate all forms of communication and stifle all competitors. Most worrisome of all was control over the so-called last mile – from the local switching office to the customer’s home or office. Building an inter-urban network connecting the major cities of the U.S. was expensive but not prohibitive; several companies had done so in recent decades, from Sprint to UUNET. But to replicate all the copper or cable running to every home in even one urban area was another matter. Local competition in landline communications had scarcely existed since the early wildcat days of the telephone, when tangled skeins of iron wire criss-crossed urban streets. In the case of the Internet, the concern centered especially on high-speed, direct-to-the-premises data services, later known as broadband. For years, competition had flourished among dial-up Internet access providers, because all the end user required to reach the provider’s computer was access to a dial tone. But this would not be the case by default for newer services that did not use the dial telephone network. The legislative solution to this conundrum was to create the concept of the “CLEC” – competitive local exchange carrier.
The RBOCs, now referred to as “ILECs” (incumbent local exchange carriers), would be allowed full, unrestricted access to the long-distance market only once they had unbundled their networks by allowing the CLECs, which would provide their own telecommunications services to homes and businesses, to interconnect with and lease the incumbents’ infrastructure. This would enable competitive ISPs and other new service providers to continue to get access to the local loop even when dial-up service became obsolete – creating, in effect, a dial tone for broadband. The CLECs, in this model, filled the same role as the long-distance providers in the post-break-up telephone market. Able to freely interconnect at reasonable fees to the existing local phone networks, they would inject competition into a market previously dominated by the problem of natural monopoly.

Besides the creation of the CLECs, the other major part of the bill that affected the Internet addressed the Republicans’ moral agenda rather than their economic one. Title V, known as the Communications Decency Act, forbade the transmission of indecent or offensive material – depicting or describing “sexual or excretory activities or organs” – on any part of the Internet accessible to minors. This, in effect, was an extension of the obscenity and indecency rules that governed broadcasting into the world of interactive computing services.

How, then, did this sweeping act fare in achieving its goals? In most dimensions it proved a failure. Easiest to dispose of is the Communications Decency Act, which the Supreme Court struck down quickly (in 1997) as a violation of the First Amendment. Several parts of Title V did survive review, however, including Section 230, the most important piece of the entire bill for the Internet’s future. It allows websites that host user-created content to exist without the fear of constant lawsuits, and protects the continued existence of everything from giants like Facebook and Twitter to tiny hobby bulletin boards.

The fate of the efforts to promote competition within the local loop took longer to play out, but proved no more successful than the controls on obscenity. What about the CLECs, given access to the incumbent cable and telephone infrastructure so that they could compete on price and service offerings? The law required FCC rulemaking to hash out the details of exactly what kind of unbundling had to be offered. The incumbents pressed the courts hard to dispute any such ruling that would open up their lines to competition, repeatedly winning injunctions against the FCC, while threatening that introducing competitors would halt their imminent plans for bringing fiber to the home. Then, with the arrival of the Bush Administration and new chairman Michael Powell in 2001, the FCC became actively hostile to the original goals of the Telecommunications Act. Powell believed that the need for alternative broadband access would be satisfied by intermodal competition among cable, telephone, power-line, cellular, and other wireless networks. No more FCC rules in favor of CLECs would be forthcoming. For a brief time around the year 2000, it was possible to subscribe to third-party high-speed internet access using the infrastructure of your local telephone or cable provider. After that, the most central of the Telecom Act’s pro-competitive measures became, in effect, a dead letter.
The much-ballyhooed fiber-to-the-home only began to reach a significant number of homes after 2010, and then only with reluctance on the part of the incumbents.14 As author Fred Goldstein put it, the incumbents had “gained a fig leaf of competition without accepting serious market share losses.”15

During most of the twentieth century, networked industries in the U.S. had sprouted in a burst of entrepreneurial energy and then been fitted into the matrix of a regulatory framework as they grew large and important enough to affect the public interest. Broadcasting and cable television had followed this pattern. So had trucking and the airlines. But with the CLECs all but dead by the early 2000s, the Communications Decency Act struck down, and other attempts to control the Internet such as the Clipper chip16 stymied, the Internet would follow an opposite course. Having come to life under the guiding hand of the state, it would now be allowed to develop in an almost entirely laissez-faire fashion. The NAP framework established by the NSF at the hand-off of the backbone would be the last major government intervention in the structure of the Internet. This was true at both the transport layer (the networks, such as Verizon and AT&T, that transported raw data) and the applications layer (software services from portals like Yahoo! to search engines like Google to online stores like Amazon).

In our last chapter, we will look at the consequences of this fact, briefly sketching the evolution of the Internet in the U.S. from the mid-1990s onward.

1. Quoted in Richard Wiggins, “Al Gore and the Creation of the Internet,” 2000.
2. “Remarks by Vice President Al Gore at National Press Club,” December 21, 1993.
3. Biographical details on Wolff’s life prior to NSF are scarce – I have recorded all of them that I could find here. Notably, I have not been able to find even his date and place of birth.
4. Schrader and PSINet rode high on the Internet bubble in the late 1990s, acquiring other businesses aggressively, and, most extravagantly, purchasing the naming rights to the football stadium of the NFL’s newest expansion team, the Baltimore Ravens. Schrader tempted fate with a 1997 article entitled “Why the Internet Crash Will Never Happen.” Unfortunately for him, it did happen, bringing about his ouster from the company in 2001 and PSINet’s bankruptcy the following year.
5. To get a sense of how fast the cost of bandwidth was declining: in the mid-1980s, leasing a T1 line from New York to L.A. cost $60,000 per month. Twenty years later, an OC-3 circuit with 100 times the capacity cost only $5,000 – more than a thousand-fold reduction in price per unit of capacity. See Fred R. Goldstein, The Great Telecom Meltdown, 95-96. Goldstein states that the 1.544 Mbps T1/DS1 line has 1/84th the capacity of OC-3, rather than 1/100th, a discrepancy I can’t account for. But this has little effect on the overall math.
6. Office of Inspector General, “Review of NSFNET,” March 23, 1993.
7. Fraser, “NSFNET: A Partnership for High-Speed Networking, Final Report,” 27.
8. Brian Kahin, “RFC 1192: Commercialization of the Internet Summary Report,” November 1990.
9. John Markoff, “Data Network Raises Monopoly Fear,” New York Times, December 19, 1991.
10. Though many other technical details had to be sorted out; see Susan R. Harris and Elise Gerich, “Retiring the NSFNET Backbone Service: Chronicling the End of an Era,” ConneXions, April 1996.
11. The most problematic part of privatization proved to have nothing to do with the hardware infrastructure of the network, but instead with handing over control of the domain name system (DNS). For most of its history, its management had depended on the judgment of a single man – Jon Postel. But businesses investing millions in a commercial internet would not stand for such an ad hoc system. So the government handed control of the domain name system to a contractor, Network Solutions. The NSF had no real mechanism for regulatory oversight of DNS (though they might have done better by splitting the control of different top-level domains (TLDs) among different contractors), and Congress failed to step in to create any kind of regulatory regime. Control changed once again in 1998 to the non-profit ICANN (Internet Corporation for Assigned Names and Numbers), but the management of DNS still remains a thorny problem.
12. The only quasi-exception to this focus on fostering competition was a proposal by Senator Daniel Inouye to reserve 20% of Internet traffic for public use: Steve Behrens, “Inouye Bill Would Reserve Capacity on Infohighway,” Current, June 20, 1994. Unsurprisingly, it went nowhere.
13. Al Gore, “A Short Summary of the Telecommunications Reform Act of 1996.”
14. Jon Brodkin, “AT&T kills DSL, leaves tens of millions of homes without fiber Internet,” Ars Technica, October 5, 2020.
15. Goldstein, The Great Telecom Meltdown, 145.
16. The Clipper chip was a proposed hardware backdoor that would give the government the ability to bypass any U.S.-created encryption software.

Further Reading

Janet Abbate, Inventing the Internet (1999)
Karen D. Fraser, “NSFNET: A Partnership for High-Speed Networking, Final Report” (1996)
Shane Greenstein, How the Internet Became Commercial (2015)
Yasha Levine, Surveillance Valley: The Secret Military History of the Internet (2018)
Rajiv Shah and Jay P. Kesan, “The Privatization of the Internet’s Backbone Network,” Journal of Broadcasting & Electronic Media (2007)

Steamships, Part 2: The Further Adventures of Isambard Kingdom Brunel

Iron Empire

As far back as 1832, Macgregor Laird had taken the iron ship Alburkah to Africa and up the Niger, making it among the first ships of such construction to take to the open sea. But the use of iron hulls in British inland navigation can be traced decades earlier, beginning with river barges in the 1780s. An iron plate had far more tensile strength than even an oaken board of the same thickness. This made an iron-hulled ship stronger, lighter, and more spacious inside than an equivalent wooden vessel: a two-inch thickness of iron might replace two feet of timber.[1] The downsides included susceptibility to corrosion and barnacles, interference with compasses, and, at least at first, the expense of the material. As we have already seen, the larger the ship, the smaller the proportion of its cargo space that it would need for fuel; but the Great Western and British Queen pushed the limits of the practical size of a wooden ship (in fact, Brunel had bound Great Western’s hull with iron straps to bolster its longitudinal strength and prevent it from breaking in heavy seas).[2] The price of wood in Britain grew ever more dear as her ancient forests disappeared, but to build more massive ships economically also required iron prices to fall: and they did just that, starting in the 1830s, because of a surprisingly simple change in technique.

Ironmakers had noticed long ago that their furnaces produced more metal from the same amount of fuel in the winter months. They assumed that the cooler air produced this result, and so by the nineteenth century it had become a basic tenet of the iron-making business that one should blast cool air into the furnace with the bellows to maximize its efficiency.[3] This common wisdom was mistaken; entirely backwards, in fact. In 1825, a Glasgow colliery engineer named James Neilson found that a hotter blast made the furnaces more efficient (it was the dryness, not the coolness, of the winter air that had made the difference). Neilson had been asked to consult at an ironworks in the village of Muirkirk which was having difficulty with its furnace. He realized that heating the blast air would expand it, and thus increase the pressure of the air flowing into the furnace, strengthening the blast. In 1828 he patented the method of using a stove to heat the blast air. He convinced the Clyde Ironworks to adopt it, and together they perfected the method over the following few years. The results were astounding. A 600° F blast reduced the furnace’s coal consumption by two-thirds and increased output from about five-and-a-half tons of pig iron per day to over eight.[4] On top of all that, this simple innovation allowed the use of plain coal as fuel in lieu of (more expensive) refined coke. Ironmakers had adopted coke in the 1750s because when iron was smelted with raw coal the impurities (especially sulfur) in the fuel made the resulting metal too brittle. But the hot blast sent the temperature inside the furnace so high that it drove the sulfur out in the slag waste rather than baking it into the iron. During the 1830s and 40s, Neilson’s hot blast technique spread from Scotland across all of Great Britain, and drove a rapid increase in iron production, from 0.7 million tons in 1830 to over two million in 1850.
This cut the market price per ton of pig iron in half.[5] With its vast reserves of coal and iron, made accessible with the power of steam pumps (themselves made in Britain of British iron and fueled by British coal), Britain was perfectly placed to supply the demand induced by this decline in price. Much of the growth in iron output went to exports, strengthening the commercial sinews of the British empire while providing the raw material of industrialization to the rest of the world. The frenzies of railroad building in the United States and continental Europe in the middle of the nineteenth century relied heavily on British rails made from British iron: in 1849, for example, the Baltimore and Ohio railroad secured 22,000 tons of rails from a Welsh trading concern.[6] The hunger of the rapidly growing United States for iron proved insatiable; circa 1850 the young nation imported about 450,000 tons of British iron per year.[7]

Good Engineering Makes Bad Business

The virtues of iron were also soon on the brain of Isambard Kingdom Brunel. The Great Western Steam Ship Company’s plan for a successor to Great Western began sensibly enough; they would build a slightly improved sister ship of similar design. But Brunel and his partners were seduced, in the fall of 1838, by the appearance in Bristol harbor of an all-iron channel steamer called Rainbow, the largest such ship yet built. Brunel’s associates Claxton and Patterson took a reconnaissance voyage on her to Antwerp, and upon their return all three men became convinced that they should build in iron.[8]

As if that were not enough novelty to take on in one design, in May 1840 another innovative ship steamed into Bristol harbor, leaving Brunel and his associates swooning once more. The aptly named Archimedes, designed by Francis Petit Smith, swam through the water with unprecedented smoothness and efficiency, powered by a screw propeller rather than paddle wheels.[9] Any well-educated nineteenth-century engineer knew that paddles wasted a huge amount of energy pushing water down at the front of the wheel and lifting it up at the back. Nor was screw propulsion a surprising new idea in 1840. As we have seen, early steamboat inventors tried out just about every imaginable means of pushing or pulling a ship. In his very thorough Treatise on the Screw Propeller, the engineer John Bourne cites some fifty-odd proposals, patents, or practical attempts at screw propulsion prior to Smith’s.[10] After so many failures, most practical engineers assumed (reasonably enough) that the screw could never replace the proven (albeit wasteful) paddlewheel. The difficulties were numerous, including reducing vibration, transmitting power effectively to the screw, and choosing its shape, size, and angle among many potential alternatives. Most fundamental, though, was producing sufficient thrust: early steam engines operated at modest speed, cycling every three seconds or so. At twenty revolutions per minute, a screw would have to be of an impractical diameter to actually push a ship forward rapidly. Smith overcame this last problem with a gearing system to allow the propeller shaft to turn 140 times per minute. His propeller design at first consisted of a true helical screw of two turns (which created excessive friction), then later a single turn.
Then, in 1840 he refitted Archimedes with a more recognizably modern propeller with two blades (each of half a turn).[11] Even with these design improvements, Brunel found that noise and vibration made the Archimedes of 1840 “uninhabitable” for passengers.[12] But he had unshakeable faith in its potential. No doubt, advocates of the screw could tout many potential advantages over the paddlewheel: a lower center of gravity, a more spacious interior, more maneuverability in narrow channels, and more efficient use of fuel (especially in headwinds, which caught the paddles full on, and in rolling sidelong waves, which would lift one paddlewheel or the other out of the water).[13] So, the weary investors of the Great Western Steam Ship Company saw the timetable of the Great Britain’s construction set back once more, in order to incorporate a screw. As steamship historian Stephen Fox put it, “[i]n commercial terms, what the Great Western company needed in that fall of 1840 was a second ship, as soon as possible, to compete with the newly established Cunard line,” but that is not what they would get.[14]

The completed ship finally launched in 1843, but did not take to sea for a transatlantic voyage until July 1845, having already cost the company some £200,000 in total. With 322 feet of black iron hull driven by a 1,000-horsepower Maudslay engine and a massive 36-ton propeller shaft, she dwarfed Great Western. Her all-iron construction gave an impression of gossamer lightness that fascinated a public used to burly wood.[15]

The Launching of the Great Britain.

But if her appearance impressed, her performance at sea did not. Her propeller fell apart, her engine failed to achieve the expected speed, and she rolled badly in a swell. After major, expensive renovations in the winter of 1845, she ran aground at the end of the 1846 sailing season at Dundrum Bay off Ireland. Her iron hull proved sturdier than the organization that had constructed it: by the time she was at last floated free in August 1847, the Great Western Steam Ship Company had already sunk. Another concern bought Great Britain for £25,000, and she ended up plying the route to Australia, operating mostly by sail.[16]

In the long run, Brunel and his partners were right that iron hulls and screw propulsion would surpass wood and paddles, but Great Britain failed to prove it. The upstart Inman steamer line launched the iron-hulled, screw-powered City of Glasgow in 1850, which did prove that the ideas behind Great Britain could be turned to commercial success. But the more conservative Cunard line did not dispatch its first iron-hulled ship on its maiden voyage until 1856. Though even larger than Great Britain, at 376 feet and 3,600 tons, the Persia still sported paddlewheels. This did not prevent her from booking more passengers than any other steamship to date, nor from setting a transatlantic speed record.[17] Not until the end of the 1860s did oceanic paddle steamers become obsolete.

The Archimedes. Without any visible wheels, she looked deceptively like a typical sailing schooner, but for the telltale smokestack.

A Glorious Folly

For a time, Brunel walked away from shipbuilding. Then, late in 1851, he began crafting plans for a new liner to far surpass even Great Britain, one large enough to ply the routes to India and Australia without coaling stops on the African coast.
Stopping to refuel wasted time but also quite a lot of money: coal in Africa cost far more than in Europe, because another ship had to bring it there in the first place.[18] Because it would sail around Africa, not towards America, the new ship was christened Great Eastern. Monstrous in all its dimensions, the Great Eastern can only be regarded as a monster in truth, in the archaic sense of “a prodigy birthed outside the natural order of things”; it was without precedent and without issue.[19] Given the total failure of Brunel’s last steam liner company, not to mention other examples of excessive exuberance in his past, such as an atmospheric railway project that shut down within a year, it is hard to conceive of how he was able to convince new backers to finance this wild new idea. He did have the help of one new ally, an ambitious Scottish shipbuilder named John Scott Russell, who was also wracked by career disappointment and eager for a comeback. Together they built an astonishing vessel: at 690 feet long and over 22,000 tons, it exceeded in size every other ship built up to its time, and also every other ship built in the balance of the nineteenth century. It would carry (in theory) 4,000 passengers and 18,000 tons of coal or cargo, and mount both paddlewheels and a propeller, the latter powered by the largest steam engine ever built, of 1,600 horsepower.

Brunel died of a stroke in 1859, and never saw the ship take to sea. That is just as well, for it failed even more brutally than the Great Britain. It was slow, rolled badly, maneuvered poorly, and demanded prodigious quantities of labor and fuel.[20] Like Great Britain, after a brief service its owners auctioned it off to new buyers at a crushing loss. Great Eastern did, however, still have in its future a key role to play in the extension of British imperial and commercial power, as we shall see.

The Great Eastern in harbor in Wales in 1860. Note the ‘normal-size’ three-masted ship in the foreground for scale.

I have lingered on Brunel’s career for so long not because he was of unparalleled import to the history of the age of steam (he was not), but because his character and his ambition fascinate me. He innovated boldly, but rarely as effectively as his more circumspect peers, such as Samuel Cunard. Much—though certainly not all—of his career consists of glorious failure. Whether you, dear reader, emphasize the glory or the failure may depend on the width of the romantic streak that runs through your soul.

ARPANET, Part 2: The Packet

By the end of 1966, Robert Taylor had set in motion a project to interlink the many computers funded by ARPA, a project inspired by the “intergalactic network” vision of J.C.R. Licklider. Taylor put the responsibility for executing that project into the capable hands of Larry Roberts. Over the following year, Roberts made several crucial decisions which would reverberate through the technical architecture and culture of ARPANET and its successors, in some cases for decades to come. The first of these in importance, though not in chronology, was to determine the mechanism by which messages would be routed from one computer to another.

The Problem

If computer A wants to send a message to computer B, how does the message find its way from the one to the other? In theory, one could allow any node in a communications network to communicate with any other node by linking every such pair with its own dedicated cable. To communicate with B, A would simply send a message over the outgoing cable that connects to B. Such a network is termed fully-connected. At any significant size, however, this approach quickly becomes impractical, since the number of connections necessary increases with the square of the number of nodes: fully connecting ten computers takes forty-five separate links, while a thousand computers would need nearly half a million.1 Instead, some means is needed for routing a message, upon arrival at some intermediate node, on toward its final destination.

As of the early 1960s, two basic approaches to this problem were known. The first was store-and-forward message switching. This was the approach used by the telegraph system. When a message arrived at an intermediate location, it was temporarily stored there (typically in the form of paper tape) until it could be re-transmitted out to its destination, or to another switching center closer to that destination.

Then the telephone appeared, and a new approach was required. A multiple-minute delay for each utterance in a telephone call to be transcribed and routed to its destination would result in an experience rather like trying to converse with someone on Mars. Instead the telephone system used circuit switching. The caller began each telephone call by sending a special message indicating whom they were trying to reach. At first this was done by speaking to a human operator, later by dialing a number which was processed by automatic switching equipment. The operator or equipment established a dedicated electric circuit between caller and callee. In the case of a long-distance call, this might take several hops through intermediate switching centers. Once this circuit was completed, the actual telephone call could begin, and that circuit was held open until one party or the other terminated the call by hanging up.

The data links that would be used in ARPANET to connect time-shared computers partook of qualities of both the telegraph and the telephone. On the one hand, data messages came in discrete bursts, like the telegraph’s, unlike the continuous conversation of a telephone. But these messages could come in a variety of sizes for a variety of purposes, from console commands only a few characters long to large data files being transferred from one computer to another. If the latter suffered some delays in arriving at their destination, no one would particularly mind. But remote interactivity required very fast response times, rather like a telephone call. One important difference between computer data networks and both the telephone and the telegraph was the error-sensitivity of machine-processed data.
A single character in a telegram changed or lost in transmission, or a fragment of a word dropped in a telephone conversation, was unlikely to seriously impair human-to-human communication. But if noise on the line flipped a single bit from 0 to 1 in a command to a remote computer, that could entirely change the meaning of that command. Therefore every message would have to be checked for errors, and re-transmitted if any were found. Such repetition would be very costly for large messages, which would be all the more likely to be disrupted by errors, since they took longer to transmit. A solution to these problems was arrived at independently on two different occasions in the 1960s, but the later instance was the first to come to the attention of Larry Roberts and ARPA.

The Encounter

In the fall of 1967, Roberts arrived in Gatlinburg, Tennessee, hard by the forested peaks of the Great Smoky Mountains, to deliver a paper on ARPA’s networking plans. Almost a year into his stint at the Information Processing Techniques Office (IPTO), many areas of the network design were still hazy, among them the solution to the routing problem. Other than a vague mention of blocks and block size, the only reference to it in Roberts’ paper is in a brief and rather noncommittal passage at the very end: “It has proven necessary to hold a line which is being used intermittently to obtain the one-tenth to one second response time required for interactive work. This is very wasteful of the line and unless faster dial up times become available, message switching and concentration will be very important to network participants.”2 Evidently, Roberts had still not entirely decided whether to abandon the approach he had used in 1965 with Tom Marill, that is to say, connecting computers over the circuit-switched telephone network via an auto-dialer.

Coincidentally, however, someone else was attending the same symposium with a much better thought-out idea of how to solve the problem of routing in data networks. Roger Scantlebury had crossed the Atlantic from the British National Physical Laboratory (NPL) to present his own paper. Scantlebury took Roberts aside after hearing his talk, and told him all about something called packet-switching, a technique his supervisor at the NPL, Donald Davies, had developed. Davies’ story and achievements are not generally well-known in the U.S., although in the fall of 1967 Davies’ group at the NPL was at least a year ahead of ARPA in its thinking.

Davies, like many early pioneers of electronic computing, had trained as a physicist. He graduated from Imperial College, London in 1943, when he was only 19 years old, and was immediately drafted into the “Tube Alloys” program – Britain’s code name for its nuclear weapons project. There he was responsible for supervising a group of human computers, using mechanical and electric calculators to crank out numerical solutions to problems in nuclear fission.3 After the war, he learned from the mathematician John Womersley about a project he was supervising out at the NPL, to build an electronic computer that would perform the same kinds of calculations at vastly greater speed. The computer, designed by Alan Turing, was called ACE, for “automatic computing engine.” Davies was sold, and got himself hired at NPL as quickly as he could. After contributing to the detailed design and construction of the ACE machine, he remained heavily involved in computing as a research leader at NPL.
He happened in 1965 to be in the United States for a professional meeting in that capacity, and used the occasion to visit several major time-sharing sites to see what all the buzz was about. In the British computing community, time-sharing in the American sense – sharing a computer interactively among multiple users – was unknown. Instead, time-sharing meant splitting a computer’s workload across multiple batch-processing programs (to allow, for example, one program to proceed while another was blocked reading from a tape).4 Davies’ travels took him to Project MAC at MIT, RAND Corporation’s JOSS Project in California, and the Dartmouth Time-Sharing System in New Hampshire. On the way home, one of his colleagues suggested they hold a seminar on time-sharing to inform the British computing community about the new techniques that they had learned about in the U.S. Davies agreed, and played host to a number of major figures in American computing, among them Fernando Corbató (creator of the Compatible Time-Sharing System at MIT) and Larry Roberts himself.

During the seminar (or perhaps immediately after), Davies was struck with the notion that the time-sharing philosophy could be applied to the links between computers, as well as to the computers themselves. Time-sharing computers gave each user a small time slice of the processor before switching to the next, creating the illusion of an interactive computer at each user’s fingertips. Likewise, by slicing up each message into standard-sized pieces, which Davies called “packets,” a single communications channel could be shared by multiple computers or multiple users of a single computer. Moreover, this would address all the aspects of data communication that were poorly served by telephone- or telegraph-style switching. A user engaged interactively at a terminal, sending short commands and receiving short responses, would not have their single-packet messages blocked behind a large file transfer, since that transfer would be broken into many packets. And any corruption in such large messages would only affect a single packet, which could easily be re-transmitted to complete the message.

Davies wrote up his ideas in an unpublished 1966 paper, entitled “Proposal for a Digital Communication Network.” The most advanced telephone networks were then on the verge of computerizing their switching systems, and Davies proposed building packet-switching into that next-generation telephone network, thereby creating a single wide-band communications network that could serve a wide variety of uses, from ordinary telephone calls to remote computer access. By this time Davies had been promoted to superintendent of NPL’s computing division, and he formed a data communications group under Scantlebury to flesh out his design and build a working demonstration. Over the year leading up to the Gatlinburg conference, Scantlebury’s team had thus worked out the details of how to build a packet-switching network. The failure of a switching node could be dealt with by adaptive routing with multiple paths to the destination, and the failure of an individual packet by re-transmission. Simulation and analysis indicated an optimal packet size of around 1000 bytes – much smaller, and the loss of bandwidth to the header metadata required on each packet became too costly; much larger, and the response times for interactive users would be impaired too often by large messages.
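To make the packet discipline concrete, here is a minimal sketch in modern Python. It is an illustration only, not the NPL design: the header fields, the CRC-32 error check, and all of the names and example values here are invented for this sketch. A message is sliced into fixed-size packets, each carrying a sequence number and a checksum, so that a garbled packet can be detected and re-sent on its own, and a short message never waits behind a long one.

```python
import zlib
from dataclasses import dataclass

# The ~1000-byte payload echoes the figure from the NPL team's analysis;
# the header layout (seq, total, checksum) is invented for this sketch.
PACKET_SIZE = 1000

@dataclass
class Packet:
    seq: int        # position of this packet within its message
    total: int      # how many packets make up the whole message
    checksum: int   # CRC-32 of the payload, for error detection
    payload: bytes

def packetize(message: bytes) -> list[Packet]:
    """Slice a message into fixed-size packets."""
    chunks = [message[i:i + PACKET_SIZE]
              for i in range(0, len(message), PACKET_SIZE)]
    return [Packet(i, len(chunks), zlib.crc32(c), c)
            for i, c in enumerate(chunks)]

def reassemble(packets: list[Packet]) -> bytes:
    """Verify and reassemble packets at the destination.

    A corrupted packet triggers a resend request for that one
    packet only, not for the whole message.
    """
    for p in packets:
        if zlib.crc32(p.payload) != p.checksum:
            raise ValueError(f"packet {p.seq} corrupted; request resend")
    ordered = sorted(packets, key=lambda p: p.seq)
    return b"".join(p.payload for p in ordered)

# A one-packet console command need not wait behind a 50-packet file
# transfer, because the line can interleave packets from both senders.
transfer = packetize(b"x" * 50_000)   # 50 packets
command = packetize(b"LIST /usr")     # 1 packet
assert reassemble(transfer) == b"x" * 50_000
```

The size tradeoff the NPL simulations explored falls out directly from this structure: with a fixed-size header on every packet, a smaller payload wastes a larger fraction of the line on metadata, while a larger one makes interactive traffic wait longer for its turn on the channel.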
The paper delivered by Scantlebury contained details such as a packet layout format, and an analysis of the effect of packet size on network delay.

Meanwhile, Davies’ and Scantlebury’s literature search turned up a series of detailed research papers by an American who had come up with roughly the same idea several years earlier. Paul Baran, an electrical engineer at RAND Corporation, had not been thinking at all about the needs of time-sharing computer users, however. RAND was a Defense Department-sponsored think tank in Santa Monica, California, created in the aftermath of World War II to carry out long-range planning and analysis of strategic problems in advance of direct military needs. (System Development Corporation (SDC), the primary software contractor to the SAGE system and the site of one of the first networking experiments, as discussed in the last segment, had been spun off from RAND.) Baran’s goal was to ward off nuclear war by building a highly robust military communications net, which could survive even a major nuclear attack. Such a network would make a Soviet preemptive strike less attractive, since it would be very hard to knock out America’s ability to respond by hitting a few key nerve centers. To that end, Baran proposed a system that would break messages into what he called message blocks, which could be independently routed across a highly-redundant mesh of communications nodes, only to be reassembled at their final destination.

ARPA had access to Baran’s voluminous RAND reports, but disconnected as they were from the context of interactive computing, their relevance to ARPANET was not obvious. Roberts and Taylor seem never to have taken notice of them. Instead, in one chance encounter, Scantlebury had provided everything to Roberts on a platter: a well-considered switching mechanism, its applicability to the problem of interactive computer networks, the RAND reference material, and even the name “packet.” The NPL’s work also convinced Roberts that higher speeds than he had contemplated would be needed to get good throughput, and so he upgraded his plans to 50-kilobit-per-second lines. For ARPANET, the fundamentals of the routing problem had been solved.5

The Networks That Weren’t

As we have seen, not one, but two parties beat ARPA to the punch on figuring out packet-switching, a technique that has proved so effective that it is now the basis of virtually all communications. Why, then, was ARPANET the first significant network to actually make use of it? The answer is fundamentally institutional. ARPA had no official mandate to build a communications network, but it did have a large number of pre-existing research sites with computers, a “loose” culture with relatively little oversight of small departments like the IPTO, and piles and piles of money. Taylor’s initial 1966 request for ARPANET came to $1 million, and Roberts continued to spend that much or more every year from 1969 onward to build and operate the network.6 Yet for ARPA as a whole this amount of money was pocket change, and so none of his superiors worried too much about what Roberts was doing with it, so long as it could be vaguely justified as related to national defense.

By contrast, Baran at RAND had no means or authority to actually do anything. His work was pure research and analysis, which might be applied by the military services, if they desired to do so. In 1965, RAND did recommend his system to the Air Force, which agreed that Baran’s design was viable.
But the implementation fell within the purview of the Defense Communications Agency, which had no real understanding of digital communications. Baran convinced his superiors at RAND that it would be better to withdraw the proposal than allow a botched implementation to sully the reputation of distributed digital communication. Davies, as a division superintendent at the NPL, had rather more executive authority than Baran, but a more limited budget than ARPA, and no pre-existing social and technical network of research computer sites. He was able to build a prototype local packet-switching “network” (it had only one node, but many terminals) at NPL in the late 1960s, with a modest budget of £120,000 over three years.7 ARPANET spent roughly half that on annual operational and maintenance costs alone at each of its many network sites, excluding the initial investment in hardware and software.8 The organization that would have had the power to build a large-scale British packet-switching network was the Post Office, which operated the country’s telecommunications networks in addition to its traditional postal system. Davies managed to interest a few influential Post Office officials in his ideas for a unified, national digital network, but to change the momentum of such a large system was beyond his power. Licklider, through a combination of luck and planning, had found the perfect hothouse for his intergalactic network to blossom in.

That is not to say that everything except for the packet-switching concept was a mere matter of money. Execution matters, too. Moreover, several other important design decisions defined the character of ARPANET. The next one we will consider is how responsibilities would be divided between the host computers sending and receiving a message and the network over which they sent it.

Further Reading

Janet Abbate, Inventing the Internet (1999)
Katie Hafner and Matthew Lyon, Where Wizards Stay Up Late (1996)
Leonard Kleinrock, “An Early History of the Internet,” IEEE Communications Magazine (August 2010)
Arthur Norberg and Julie O’Neill, Transforming Computer Technology: Information Processing for the Pentagon, 1962-1986 (1996)
M. Mitchell Waldrop, The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal (2001)

Coda: Steam’s Last Stand

In the year 1900, automobile sales in the United States were divided almost evenly among three types of vehicles: automakers sold about 1,000 cars powered by internal combustion engines, but over 1,600 powered by steam engines, and almost as many by batteries and electric motors. Throughout all of living memory (at least until the very recent rise of electric vehicles), the car and the combustion engine have gone hand in hand, inseparable. Yet, in 1900, this type claimed the smallest share.

For historians of technology, this is the most tantalizing fact in the history of the automobile, perhaps the most tantalizing fact in the history of the industrial age. It suggests a multiverse of possibility, a garden of forking, ghostly might-have-beens. It suggests that, perhaps, had this unstable equilibrium tipped in a different direction, many of the negative externalities of the automobile age—smog, the acceleration of global warming, suburban sprawl—might have been averted. It invites the question, why did combustion win? Many books and articles, by both amateur and professional historians, have been written to attempt to answer this question.

However, since the electric car, interesting as its history certainly is, has little to tell us about the age of steam, we will consider here a narrower question—why did steam lose? The steam car was an inflection point where steam power, for so long an engine driving technological progress forward, instead yielded the right-of-way to a brash newcomer. Steam began to look like a relic of the past, reduced to watching from the shoulder as the future rushed by. For two centuries, steam strode confidently into one new domain after another: mines, factories, steamboats, railroads, steamships, electricity. Why did it falter at the steam car, after such a promising start?

The Emergence of the Steam Car

Though Germany had given birth to experimental automobiles in the 1880s, the motor car first took off as a successful industry in France. Even Benz, the one German maker to see any success in the early 1890s, sold the majority of its cars and motor-tricycles to French buyers. This was in large part due to the excellent quality of French cross-country roads – though mostly gravel rather than asphalt, they were financed by taxes and overseen by civil engineers, and well above the typical European or American standard of the time.

These roads…made it easier for businessmen [in France] to envisage a substantial market for cars… They inspired early producers to publicize their cars by intercity demonstrations and races. And they made cars more practical for residents of rural areas and small towns.[1]

The first successful motor car business arose in Paris, in the early 1890s. Émile Levassor and René Panhard (both graduates of the École centrale des arts et manufactures, an engineering institute in Paris) met as managers at a machine shop that made woodworking and metal-working tools. They became the leading partners of the firm and took it into auto making after becoming licensees of the Daimler engine.

The 1894 Panhard & Levassor Phaeton already shows the beginning of the shift from horseless carriages with an engine under the seats to the modern car layout with a forward engine compartment. [Jörgens.mi / CC BY-SA 3.0]

Before making cars themselves, they looked for other buyers for their licensed engines, which led them to a bicycle maker near the Swiss border, Peugeot Frères Aînés, headed by Armand Peugeot.
Though bicycles seem very far removed from cars today, they made many contributions to the early growth of the auto industry. The 1880s bicycle boom (stimulated by the invention of the chain-driven “safety” bicycle) seeded expertise in the construction of high-speed road vehicles with ball bearings and tubular metal frames. Many early cars resembled bicycles with an additional wheel or two, and chain drives for powering the rear wheels remained popular throughout the first few decades of automobile development. Cycling groups also became very effective lobbyists for the construction of smooth cross-country roads on which to ride their machines, literally paving the way for the cars to come.[2]

Armand Peugeot decided to purchase Daimler engines from Panhard et Levassor and make cars himself. So, already by 1890 there were two French firms making cars with combustion engines. But French designers had not altogether neglected the possibility of running steam vehicles on ordinary roads. In fact, before ever ordering a Daimler engine, Peugeot had worked on a steam tricycle with the man who would prove to be the most persistent partisan of steam cars in France, Léon Serpollet.

A steam-powered road vehicle was not, by 1890, a novel idea. It had been proposed countless times, even before the rise of steam locomotives: James Watt himself had first developed an interest in engines, all the way back in the 1750s, after his friend John Robison suggested building a steam carriage. But those who had tried to put the idea into practice had always found the result wanting. Among the problems were the bulk and weight of the engine and all its paraphernalia (boiler, furnace, coal), the difficulty of maintaining a stoked furnace and controlling steam levels (including preventing the risk of boiler explosion), and the complexity of operating the engine. The only kinds of steam road vehicles to find any success were those that inherently required a lot of weight, bulk, and specialized training to operate—fire engines and steamrollers—and even those only appeared in the second half of the nineteenth century.[3]

Consider Serpollet’s immediate predecessor in steam carriage building, the debauched playboy Comte Albert de Dion. He commissioned two toymakers, Georges Bouton and Charles Trépardoux, to make several small steam cars in the 1880s. These coal-fueled machines took thirty minutes or more to build up a head of steam. In 1894 a larger Dion steam tractor finished first in one of the many cross-country auto races that had begun to spring up to help carmakers promote their vehicles. But the judges disqualified Dion’s vehicle on account of its impracticality: requiring both a driver and a stoker for its furnace, it was in a very literal sense a road locomotive. A discouraged Comte de Dion gave up the steam business, but De Dion-Bouton went on to be a successful maker of combustion automobiles and automobile engines.[4]

This De Dion-Bouton steam tractor was disqualified from an auto race in 1894 as impractical.

Coincidentally enough, Léon Serpollet and his brother Henri were, like Panhard and Levassor, makers of woodworking machines, and like Peugeot, they came from the Swiss borderlands in east-central France. Also like Panhard and Levassor, Léon studied engineering in Paris, in his case at the Conservatoire national des arts et métiers.
But by the time he reached Paris, he and his brother had already concocted the invention that would lead them to the steam car: a “flash” boiler that instantly turned water to steam by passing it through a hot metal tube. This would allow the vehicle to start more quickly (though it still took time to heat the tube before the boiler could be used) and also alleviate safety concerns about a boiler explosion.

The most important step to the (relative) success of the Serpollets’ vehicles, however, was when they replaced the traditional coal furnace with a burner for liquid, petroleum-based fuel. This went a long way towards removing the most disqualifying objections to the practicality of steam cars. Kerosene or gasoline weighed less and took up less space than an energy-equivalent amount of coal, and an operator could more easily throttle a liquid-fuel burner (by supplying it with more or less fuel) to control the level of steam.

A 1902 Gardner-Serpollet steam car.

With early investments from Peugeot and a later infusion of cash from Frank Gardner, an American with a mining fortune, the Serpollets built a business, first selling steam buses in Paris, then turning to small cars. Their steam powerplants generated more power than the combustion vehicles of the time, and Léon promoted them by setting speed records. In 1902, he surpassed seventy-five miles per hour along the promenade in Nice. At that time, a Gardner-Serpollet factory in eastern Paris was turning out about 100 cars per year. Though an impressive number by the standards of the 1890s, this was already becoming small potatoes. In 1901, 7,600 cars were produced in France, and 14,000 in 1903; the growing market left Gardner-Serpollet behind as a niche producer. Léon Serpollet made one last pivot back to buses, then died of cancer in 1907 at age forty-eight. The French steam car did not survive him.[5]

Unlike in the U.S., steam car sales barely took off in France, and never reached parity with the total sales of combustion-engine cars from the likes of Panhard et Levassor, Peugeot, and many other makes. There was no moment of balance when it appeared that the future of automotive technology was up for grabs. Why this difference? We’ll have more to say about that later, after we consider the American side of the story.

The Acme of the Steam Car

Automobile production in the United States lagged roughly five years behind France’s; and so it was in 1896 that the first small manufacturers began to appear. Charles and Frank Duryea (bicycle makers, again) were first off the block. Inspired by an article about Benz’ car, they built their own combustion-engine machine in 1893, and, after winning several races, they began selling vehicles commercially out of Peoria, Illinois in 1896. Several other competitors quickly followed.[6]

Steam car manufacturing came slightly later, with the Whitney Motor Wagon Company and the Stanley brothers, both in the Boston area. The Stanleys, twins named Francis and Freelan (or F.E. and F.O.), were successful manufacturers of photographic dry plates, which used a dry emulsion that could be stored indefinitely before use, unlike earlier “wet” plates. They fell into the automobile business by accident, in a similar way to many others—by successfully demonstrating a car they had constructed as a hobby, drawing attention and orders. At an exhibition at the Charles River Park Velodrome in Cambridge, F.E.
zipped around the field and up an eighty-foot ramp, demonstrating greater speed and power than any other vehicle present, including an imported combustion-engine De Dion tricycle, which could only climb the ramp halfway.[7]

The Stanley brothers mounted in their 1897 steam car.

The rights to the Stanley design, through a complex series of business details, ended up in the possession of Amzi Barber, the “Asphalt King,” who used tar from Trinidad’s Pitch Lake to pave several square miles’ worth of roads across the U.S.[8] It was Barber’s automobiles, sold under the Locomobile brand, that formed the plurality of the 1,600 steam cars sold in the U.S. in 1900: the company sold 5,000 total between 1899 and 1902, at the quite-reasonable price of $600. Locomobiles were quiet and smooth in operation, produced little smoke or odor (though they did breathe great clouds of steam), had the torque required to accelerate rapidly and climb hills, and could change speed smoothly, simply by varying the speed of the pistons, without any shifting of gears. The rattling, smoky, single-cylinder engines of their combustion-powered competitors had none of these qualities.[9]

Why, then, did the steam car market begin to collapse after 1902? Twenty-seven makes of steam car first appeared in the U.S. in 1899 or 1900, mostly concentrated (like the Locomobile) in the Northeast—New York, Pennsylvania, and (especially) Massachusetts. Of those, only twelve continued making steam cars beyond 1902, and only one—the Lane Motor Vehicle Company of Poughkeepsie, New York—lasted beyond 1905. By that year, the Madison Square Garden car show had 219 combustion models on display, as compared to only twenty electric and nine steam.[10]

Barber, the Asphalt King, was interested in cars, regardless of what made them go. As the market shifted to combustion, so did he, abandoning steam at the height of his own sales in 1902. But the Stanleys loved their steamers. Their contractual obligations to Barber being discharged in 1901, they went back into business on their own. One of the longest-lasting holdouts, Stanley sold cars well into the 1920s (even after the death of Francis in a car accident in 1918), and the name became synonymous with steam. For that reason, one might be tempted to ascribe the death of the steam car to some individual failing of the Stanleys: “Yankee Tinkerers,” they remained committed to craft manufacturing and did not adopt the mass-production “Fordist” methods of Detroit. Already wealthy from their dry plate business, they did not commit themselves fully to the automobile, allowing themselves to be distracted by other hobbies, such as building a hotel in Colorado so that people could film scary movies there.[11]

Some of the internal machinery of a late-model Stanley steamer: the boiler at top left, burner at center left, engine at top right, and engine cutaway at bottom right. [Stanley W. Ellis, Smogless Days: Adventures in Ten Stanley Steamers (Berkeley: Howell-North Books, 1971), 22]

But, as we have seen, there were dozens of steam car makers, just as there were dozens of makers of combustion cars; no idiosyncrasies of the Stanley psychology or business model can explain the entire market’s shift from one form of power train to another—if anything it was the peculiar psychology of the Stanleys that kept them making steam cars at all, rather than doing the sensible thing and shifting to combustion.
Nor did the powers that be put their finger on the scale to favor combustion engines.[12] How, then, can we explain both the precipitous rise of steam in the U.S. (as opposed to its poor showing in France) and its sudden fall?

The steam car’s defects were as obvious as its advantages. Most annoying was the requirement to build up a head of steam before you could go anywhere: this took about ten minutes for the Locomobile. Whether starting or going, the controls were complex to manage. Scientific American described the “quite simple” steps required to get a Serpollet car going:

A small quantity of alcohol is used to heat the burner, which takes about five minutes; then by the small pump a pressure is made in the oil tank and the cock opened to the burner, which lights up with a blue flame, and the boiler is heated up in two or three minutes. The conductor places the clutch in the middle position, which disconnects the motor from the vehicle and regulates the motor to the starting position, then puts his foot on the admission pedal, starting the motor with the least pressure and heating the cylinders, the oil and water feed working but slightly. When the cylinders are heated, which takes but a few strokes of the piston, the clutch is thrown on the full or wean speed and the feed-pumps placed at a maximum, continuing to feed by hand until the vehicle reaches a certain speed by the automatic feed, which is then regulated as desired.[13]

Starting a combustion car of that era also required procedures long since streamlined away—cranking the engine to life, adjusting the carburetor choke and spark timing—but even at the time most writers considered steamers more challenging to operate. Part of the problem was that the boilers were intentionally small (to allow them to build steam quickly and reduce the risk of explosion), which meant lots of hands-on management to keep the steam level just right. Nor had the essential thermodynamic facts changed: internal combustion, operating over a larger temperature gradient, was more efficient than steam. The Model T could drive fifteen to twenty miles on a gallon of fuel; the Stanley could go only ten, not to mention its constant thirst for water, which added another “fueling” requirement.[14]

The rather arcane controls of a 1912 Stanley steamer. [Ellis, Smogless Days: Adventures in Ten Stanley Steamers, 26]

The steam car overcame these disadvantages to achieve its early success in the U.S. because of the delayed start of the automobile industry there. American steam car makers, starting later, skipped straight to petroleum-fueled burners, bypassing all the frustrations of dealing with a traditional coal-fueled firebox, and banishing all associations between that cumbersome appliance and the steam car.

At the same time, combustion automobile builders in the U.S. were still early in their learning curve compared to those in France. A combustion engine was a more complex and temperamental machine than a steam engine, and it took time to learn how to build them well, time that gave steam (and electric) cars a chance to find a market. The builders of combustion engines, as they learned from experience, rapidly improved their designs, while steam cars improved relatively little year over year.

Most importantly, steam cars never could get up and running as quickly as a combustion engine. In one of those ironies which history graciously provides to the historian, the very impatience that the steam age had brought forth doomed its final progeny, the steam car.
It wasn’t possible to start up a steam car and immediately drive; you always had to wait for the car to be ready. And so drivers turned to the easier, more convenient alternative, to the frustration of steam enthusiasts, who complained of “[t]his strange impatience which is the peculiar quirk of the motorist, who for some reason always has been in a hurry and always has expected everything to happen immediately.”[15] Later Stanleys offered a pilot light that could be kept burning to maintain steam, but “persuading motorists, already apprehensive about the safety of boilers, to keep a pilot light burning all night in the garage proved a hard sell.”[16] It was too late, anyway. The combustion-driven automotive industry had achieved critical mass.

The Afterlife of the Steam Car

The Ford Model T of 1908 is the most obvious signpost for the mass-market success of the combustion car. But for the moment when steam was left in the dust, we can look much earlier, to the Oldsmobile “curved dash,” which first appeared in 1901 and reached its peak in 1903, when 4,000 were produced, three times the total output of all steam car makers in that pivotal year of 1900. Ransom Olds, son of a blacksmith, grew up in Lansing, Michigan, and caught the automobile bug as a young man in 1887. Like many contemporaries, he built steamers at first (the easier option), but after driving a Daimler car at the 1893 Chicago World’s Fair, he got hooked on combustion. His Curved Dash (officially the Model R) still derived from the old-fashioned “horseless carriage” style of design, not yet having adopted the forward engine compartment that was already common in Europe by that time. It had a modest single-cylinder, five-horsepower engine tucked under the seats, and an equally modest top speed of twenty miles per hour. But it was convenient and inexpensive enough to outpace all of the steamers in sales.[17]

The Oldsmobile “Curved Dash” was celebrated in song.

The market for steam cars was reduced to driving enthusiasts, who celebrated its near-silent operation (excepting the hiss of the burner), the responsiveness of its low-end torque, and its smooth acceleration without any need for clunky gear-shifting. (There is another irony in the fact that late-twentieth-century driving enthusiasts, disgusted by the laziness of automatic transmissions, would celebrate the hands-on responsiveness of manual shifters.) Steam partisans were offended by the unnecessary complexity of the combustion automobile, and liked to point out how few moving parts the steam car had.[18] To imagine the triumph of steam is to imagine a world in which the car remained an expensive hobby for this type of car enthusiast.

Several entrepreneurs tried to revive the steamer over the years, most notably the Doble brothers, who brought their steam car enterprise to Detroit in 1915, intent on competing head-to-head with combustion. They strove to make a car that was as convenient as possible to use, with a condenser to conserve water, key-start ignition, simplified controls, and a very fast-starting boiler.

But, meanwhile, car builders were steadily crossing off all of the advantages of steam within the framework of the combustion car. Steam cars, like electric cars, did not require the strenuous physical effort to get running that early, crank-started combustion engines did.
But by the second decade of the twentieth century, car makers solved this problem by putting a tiny electric-car powertrain (battery and motor) inside every combustion vehicle, to bootstrap the starting of the engine. Steam cars offered a smoother, quieter ride than the early combustion rattletraps, but more precisely machined, multi-cylinder engines running on anti-knock fuel canceled out this advantage (the severe downsides of lead as an anti-knock agent were not widely recognized until much later). Steam cars could accelerate smoothly without the need to shift gears, but then car makers created automatic transmissions. In the 1970s, several books advocated a return to the lower-emissions burners of steam cars for environmental reasons, but then car makers adopted the catalytic converter.[19]

It’s not that a steam car was impossible, but that it was unnecessary. Every year more knowledge and capital flowed into the combustion status quo, the cost of switching increased, and no sufficiently convincing reason to switch ever appeared. The failure of the steam car was due not to accident, not to conspiracy, and certainly not to any individual failing of the Stanleys, but to the expansion of auto sales to people who cared more about getting somewhere than about the machine that got them there. Impatient people, born, ironically, of the steam age.

The Era of Fragmentation, Part 4: The Anarchists

Between roughly 1975 and 1995, access to computers accelerated much more quickly than access to computer networks. First in the United States, and then in other wealthy countries, computers became commonplace in the homes of the affluent, and nearly ubiquitous in institutions of higher education. But if users of those computers wanted to connect their machines together – to exchange email, download software, or find a community where they could discuss a favorite hobby – they had few options. Home users could connect to services like CompuServe, but, until the introduction of flat monthly fees in the late 1980s, those services charged by the hour, at rates relatively few could afford. Some university students and faculty could connect to a packet-switched computer network, but many more could not. By 1981, only about 280 computers had access to ARPANET. CSNET and BITNET would eventually connect hundreds more, but they only got started in the early 1980s. At that time the U.S. counted more than 3,000 institutions of higher education, virtually all of which would have had multiple computers, ranging from large mainframes to small workstations.

Both communities, home hobbyists and those academics who were excluded from the big networks, turned to the same technological solution to connect to one another. They hacked the plain old telephone system, the Bell network, into a kind of telegraph, carrying digital messages instead of voices, and relaying those messages from computer to computer across the country and the world. These were among the earliest peer-to-peer computer networks. Unlike CompuServe and other such centralized systems, onto which home computers latched to drink down information like so many nursing calves, information spread through these networks like ripples on a pond, starting from anywhere and ending up everywhere. Yet they still became rife with disputes over politics and power.

In the late 1990s, as the Internet erupted into popular view, many claimed that it would flatten social and economic relations. By enabling anyone to connect with anyone, the middlemen and bureaucrats who had dominated our lives would find themselves cut out of the action. A new era of direct democracy and open markets would dawn, where everyone had an equal voice and equal access. Such prophets might have hesitated had they reflected on what happened on Usenet and FidoNet in the 1980s. Be its technical substructure ever so flat, every computer network is embedded within a community of human users. And human societies, no matter how one kneads and stretches them, always seem to keep their lumps.

Usenet

In the summer of 1979, Tom Truscott was living the dream life for a young computer nerd. A grad student in computer science at Duke University with an interest in computer chess, he landed an internship at Bell Labs’ New Jersey headquarters, where he got to rub elbows with the creators of Unix, the latest craze to sweep the world of academic computing.

The origins of Unix, like those of the Internet itself, lay in the shadow of American telecommunications policy. Ken Thompson and Dennis Ritchie of Bell Labs decided in the late 1960s to build a leaner, much pared-down version of the massive MIT Multics system to which they had contributed as software developers. The new operating system quickly proved a hit within the labs, popular for its combination of low overhead (allowing it to run on even inexpensive machines) and high flexibility. However, AT&T could do little to profit from its success.
A 1956 agreement with the Justice Department required AT&T to license non-telephone technologies to all comers at a reasonable rate, and to stay out of all business sectors other than supplying common-carrier communications. So AT&T began to license Unix to universities for use in academic settings on very generous terms. These early licensees, who were granted access to the source code, began building and selling their own Unix variants, most notably the Berkeley Software Distribution (BSD) Unix created at the University of California’s flagship campus. The new operating system quickly swept academia. Unlike other popular operating systems, such as the DEC TENEX / TOPS-20, it could run on hardware from a variety of vendors, many of them offering very low-cost machines. And Berkeley distributed the software for only a nominal fee, in addition to the modest licensing fee from AT&T.[1]

Truscott therefore felt that he sat at the root of all things when he got to spend the summer as Ken Thompson’s intern, playing a few morning rounds of volleyball before starting work at midday, sharing a pizza dinner with his idols, and working late into the night slinging code on Unix and the C programming language. He did not want to give up his connection to that world when the internship ended, and so as soon as he returned to Duke in the fall, he figured out how to connect the computer science department’s Unix-equipped PDP-11/70 back to the mothership in Murray Hill, using a program written by one of his erstwhile colleagues, Mike Lesk. It was called uucp – Unix-to-Unix copy – one of a suite of “uu” programs new to the just-released Unix Version 7, which allowed one Unix system to connect to another over a modem. Specifically, uucp let one copy files back and forth between two connected computers, which allowed Truscott to exchange email with Thompson and Ritchie.

Undated photo of Tom Truscott

It was Truscott’s fellow grad student, Jim Ellis, who had installed the new Version 7 on the Duke computer, but even as the new upgrade gave with one hand, it took away with the other. The news program distributed by the Unix users’ group, USENIX, which would broadcast news items to all users of a given Unix computer system, no longer worked on the new operating system. Truscott and Ellis decided to replace it with their own Version 7-compatible news program with more advanced features, and to give the improved software back to the community for a little bit of prestige.

At the same time, Truscott was also using uucp to connect with a Unix machine at the University of North Carolina, ten miles to the southwest in Chapel Hill, and talking to a grad student there named Steve Bellovin.[2] Bellovin had also started building his own news program, which notably included the concept of topic-based newsgroups, to which one could subscribe, rather than a single broadcast channel for all news. Bellovin, Truscott, and Ellis decided to combine their efforts and build a networked news system with newsgroups that would use uucp to share news between sites. They intended it to distribute Unix-related news to USENIX members, so they called their system Usenet.

Duke would serve as the central clearinghouse at first, using its auto-dialer and uucp to connect to each of the other sites on the network at regular intervals, picking up each site’s local news and dropping off the updates that had arrived from its peers.
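The mechanics of that exchange are easy to sketch. Below is a minimal, hypothetical Python illustration of the store-and-forward idea – the site names and data structures are invented for the example, and the real A News was C and shell code driven by uucp file transfers – in which each article carries a record of the sites it has already visited, so that no site relays a copy back the way it came:

# A toy sketch of Usenet-style store-and-forward news "flooding"
# (illustrative only: site names and data structures are hypothetical,
# not the real A News internals). Each article carries the list of
# sites it has already passed through, so no site relays it backward.

class Site:
    def __init__(self, name):
        self.name = name
        self.neighbors = []   # sites this one exchanges news with
        self.articles = {}    # article id -> (path, body)

    def receive(self, art_id, path, body):
        if art_id in self.articles:
            return            # duplicate: already seen, drop it
        path = path + [self.name]
        self.articles[art_id] = (path, body)
        for nbr in self.neighbors:
            if nbr.name not in path:
                nbr.receive(art_id, path, body)

duke, unc, research = Site("duke"), Site("unc"), Site("research")
duke.neighbors = [unc, research]
unc.neighbors = [duke]
research.neighbors = [duke]

unc.receive("msg001", [], "Version 7 uucp bug report")
print(research.articles["msg001"][0])   # ['unc', 'duke', 'research']

In the real system, of course, the “relay” step meant queuing a uucp transfer for the next scheduled phone call rather than an instantaneous function call, which is why a post could take a day or more to span the network.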
Bellovin wrote the initial code, but it consisted of shell scripts that ran very slowly, so Stephen Daniel, another Duke grad student, rewrote the program in C. Daniel’s version became known as A News. Ellis promoted the program at the January 1980 Usenix conference in Boulder, Colorado, and gave away all eighty copies of the software that he had brought with him. By the next Usenix conference that summer, the organizers had added A News to the general software package that they distributed to all attendees.

The creators described the system, cheekily, as a “poor man’s ARPANET.” Though one may not be accustomed to thinking of Duke as underprivileged, it did not have the clout in the world of computer science necessary at the time to get a connection to that premiere American computer network. But access to Usenet required no one’s permission, only a Unix system, a modem, and the ability to pay the phone bills for regular news transfers – requirements that virtually any institution of higher education could meet by the early 1980s.

Private companies also joined Usenet, and helped to facilitate the spread of the network. Digital Equipment Corporation (DEC) agreed to act as an intermediary between Duke and UC Berkeley, footing the long-distance telephone bills for inter-coastal data transfer. This allowed Berkeley to become a second, west-coast hub for Usenet, connecting up UC San Francisco, UC San Diego, and others, including Sytek, an early LAN business. The connection to Berkeley, an ARPANET site, also enabled cross-talk between ARPANET and Usenet (after a second rewrite by Mark Horton and Matt Glickman to create B News). ARPANET sites began picking up Usenet content and vice versa, though ARPA rules technically forbade interconnection with other networks. The network grew rapidly, from fifteen sites carrying ten posts a day in 1980, to 600 sites and 120 posts in 1983, and 5,000 sites and 1,000 posts in 1987.[3]

Its creators had originally conceived of Usenet as a way to connect the Unix user community and discuss Unix developments, and to that end they created two groups, net.general and net.v7bugs (the latter for discussing problems with the latest version of Unix). However, they left the system entirely open for expansion. Anyone was free to create a new group under “net,” and users very quickly added non-technical topics, such as net.jokes. Just as one was free to send whatever one chose, recipients could ignore whichever groups they chose: a system could join Usenet and request data only for net.v7bugs, for example, ignoring the rest of the content.

Quite unlike the carefully planned ARPANET, Usenet self-organized and grew in an anarchic way, overseen by no central authority. Yet out of this superficially democratic medium a hierarchical order quickly emerged, with a certain subset of highly connected, high-traffic sites recognized as the “backbone” of the system. This process developed fairly naturally: because each transfer of data from one site to the next incurred a communications delay, each new site joining the network had a strong incentive to link itself to an already highly connected node, minimizing the number of hops required for its messages to span the network. The backbone sites were a mix of educational and corporate sites, usually led by one headstrong individual willing to take on the thankless tasks involved in administering all the activity crossing their computer: Gary Murakami at Bell Labs’ Indian Hills lab in Illinois, for example, or Gene Spafford at Georgia Tech.
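A toy calculation makes the incentive concrete (the topology below is hypothetical, chosen only to keep the arithmetic simple):

# A toy illustration of why new sites preferred to peer with an
# already well-connected "backbone" node: fewer store-and-forward
# hops means less delay. The topology here is hypothetical.

def hops(adj, src, dst):
    """Breadth-first search: count transfers needed from src to dst."""
    frontier, seen, n = {src}, {src}, 0
    while dst not in frontier:
        frontier = {v for u in frontier for v in adj[u]} - seen
        seen |= frontier
        n += 1
    return n

# One hub site connected to three leaf sites.
adj = {"hub": {"a", "b", "c"}, "a": {"hub"}, "b": {"hub"}, "c": {"hub"}}

for peer in ("hub", "c"):            # two choices for a newcomer
    test = {site: set(links) for site, links in adj.items()}
    test["new"] = {peer}
    test[peer].add("new")
    worst = max(hops(test, "new", dst) for dst in adj)
    print(f"peering with {peer!r}: at most {worst} hops to any site")

# peering with 'hub': at most 2 hops; peering with 'c': at most 3.

Multiply the extra hop by transfer intervals measured in hours rather than milliseconds, and the gravitational pull of the backbone sites is easy to understand.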
The most visible exercise of the power held by these backbone administrators came in 1987, when they pushed through a reorganization of the newsgroup namespace into seven top-level hierarchies: comp, for example, for computer-related topics, and rec for recreational topics. Sub-topics continued to be organized hierarchically underneath the “big seven,” such as comp.lang.c for discussion of the C programming language, and rec.games.board for conversations about boardgaming. A group of anti-authoritarians, who saw this change as a coup by the “Backbone Cabal,” created their own splinter hierarchy rooted at alt, with its own parallel backbone. It included topics that were considered out-of-bounds for the big seven, such as sex and recreational drugs (e.g., alt.sex.pictures),[4] as well as quirky groups that simply rubbed the backbone admins the wrong way (e.g., alt.gourmand; the admins preferred the anodyne rec.food.recipes).

Despite these controversies, by the late 1980s Usenet had become the place for the computer cognoscenti to find trans-national communities of like-minded individuals. In 1991 alone, Tim Berners-Lee announced the creation of the World Wide Web on alt.hypertext; Linus Torvalds solicited feedback on his new pet project, Linux, on comp.os.minix; and Peter Adkison, thanks to a post on rec.games.design about his game company, connected with Richard Garfield, a collaboration that would lead to the creation of the card game Magic: The Gathering.

FidoNet

But even as the poor man’s ARPANET spread across the globe, microcomputer hobbyists, with far fewer resources than even the smallest of colleges, were still largely cut off from the experience of electronic communication. Unix, a low-cost, bare-bones option by the standards of academic computing, was out of reach for hobbyists with 8-bit microprocessors running an operating system called CP/M that barely did anything beyond managing the disk drive. But they soon began their own shoestring experiments in low-cost peer-to-peer networking, starting with something called bulletin boards.

Given the simplicity of the idea and the number of computer hobbyists in the wild at the time, it seems probable that the computer bulletin board was invented independently several times. But tradition gives precedence to the creation of Ward Christensen and Randy Suess of Chicago, launched during the great blizzard of 1978. Christensen and Suess were both computer hobbyists in their early thirties, and members of their local computer club. For some time they had been considering creating a server where club members could upload news articles, using the modem file-transfer software that Christensen had written for CP/M – the hobbyist equivalent of uucp. The blizzard, which kept them housebound for several days, gave them the impetus to actually get started on the project, with Christensen focusing on the software and Suess on the hardware. In particular, Suess devised a circuit that automatically rebooted the computer into the BBS software each time it detected an incoming caller, a necessary hack to ensure the system was in a good state to receive the call, given the flaky condition of hobby hardware and software at the time. They called their invention CBBS, for Computerized Bulletin Board System, but most later system operators (or sysops) would drop the C and call their service a BBS.[5] They published the details of what they had built in a popular hobby magazine, Byte, and a slew of imitators soon followed.
Another new piece of technology, the Hayes modem, fertilized this flourishing BBS scene. Dennis Hayes was another computer hobbyist who wanted to use a modem with his new machine, but the existing commercial offerings fell into two categories: devices aimed at business customers, too expensive for hobbyists, and acoustically coupled modems. To connect a call on an acoustically coupled modem, you first had to dial or answer the phone manually and then place the handset onto the modem so the two machines could communicate; there was no way to automatically start or answer a call. So, in 1977, Hayes designed, built, and sold his own 300 bit-per-second modem that would slot into the interior of a hobby computer. Suess and Christensen used one of these early-model Hayes modems in their CBBS. Hayes’ real breakthrough product, though, was the 1981 Smartmodem, which sat in its own external housing with its own built-in microprocessor and connected to the computer through its serial port. It sold for $299, well within reach of hobbyists who habitually spent a few thousand dollars on their home computer setups.

The 300 baud Hayes Smartmodem

One of those hobbyists, Tom Jennings, set in motion what became the Usenet of BBSes. A programmer for Phoenix Software in San Francisco, Jennings decided in late 1983 to write his own BBS software, not for CP/M, but for the latest and greatest microcomputer operating system, Microsoft’s MS-DOS. He called it Fido, after a computer he had used at work, so named for its mongrel-like assortment of parts. John Madill, a salesman at ComputerLand in Baltimore, learned about Fido and called all the way across the country to ask Jennings for help in getting it to run on his DEC Rainbow 100 microcomputer. The two began a cross-country collaboration on the software, joined by another Rainbow enthusiast, Ben Baker of St. Louis. All three racked up substantial long-distance phone bills as they logged into one another’s machines for late-night BBS chats.

With all of this cross-BBS chatter, an idea began to work its way forward from the back of Jennings’ mind: he could create a network of BBSes that would exchange messages late at night, when long-distance rates were low. The idea was not new – many hobbyists had imagined BBSes relaying messages in this way, all the way back to Christensen and Suess’ Byte article. But they had generally assumed that for the scheme to work you would need very high BBS density and complex routing rules, to ensure that all the calls remained local, and thus toll-free, even when relaying messages from coast to coast. Jennings did some back-of-the-envelope math and realized that, given increasing modem speeds (now up to 1200 bits per second for hobby modems) and falling long-distance costs, no such cleverness was necessary. Even with substantial message traffic, you could pass text between systems for a few bucks per night.

Tom Jennings in 2002 (still from the BBS documentary)

So he added a new program to live alongside Fido. Between one and two o’clock in the morning, Fido would shut down and FidoNet would start up. It would check Fido’s outgoing messages against a file called the node list. Each outgoing message carried a node number, and each entry in the list represented a network node – a Fido BBS – and gave the phone number for that node number. If any messages were pending, FidoNet would dial up each of the corresponding BBSes in turn and transfer the messages to the FidoNet program waiting on the other side.
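The nightly cycle amounted to a simple lookup-and-dial loop. Here is a minimal sketch in Python – the node numbers, phone numbers, and message format are all hypothetical, and the real node list was a structured text file, but the logic ran along these lines:

# A minimal sketch of FidoNet's nightly transfer (all numbers and
# message formats here are hypothetical, invented for illustration).
# Pending mail is grouped by destination node, then each node's
# phone number is looked up in the node list and dialed once.

node_list = {
    1: "415-555-0100",   # a San Francisco node
    2: "301-555-0188",   # a Baltimore node
    3: "314-555-0142",   # a St. Louis node
}

outgoing = [
    {"to_node": 2, "text": "Rainbow 100 patch attached"},
    {"to_node": 3, "text": "See you at FidoCon"},
    {"to_node": 2, "text": "Second message for Baltimore"},
]

def mail_hour(messages, nodes):
    """Group pending messages by destination, then dial each node once."""
    by_node = {}
    for msg in messages:
        by_node.setdefault(msg["to_node"], []).append(msg)
    for node, bundle in sorted(by_node.items()):
        phone = nodes[node]    # node number -> phone number lookup
        print(f"dialing node {node} at {phone}: {len(bundle)} message(s)")
        # ...modem handshake and file transfer would happen here...

mail_hour(outgoing, node_list)

Note that at this stage the routing problem was trivial: with every node’s phone number in one flat list, “routing” just meant a direct call. It was the growth of that list that forced the redesign described below.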
Suddenly Madill, Jennings, and Baker could collaborate easily and cheaply, though at the cost of higher latency: they wouldn’t receive any messages sent during the day until the late-night transfer ran. Formerly, hobbyists had rarely connected with others outside their immediate area, where they could make toll-free calls to their local BBS. But if that BBS connected into FidoNet, its users could suddenly exchange email with others all across the country. The scheme proved immensely popular, the number of FidoNet nodes grew rapidly to over 200 within a year, and Jennings’ personal curation of the node list became less and less manageable.

So during the first “FidoCon” in St. Louis, Jennings and Baker met in the living room of Ken Kaplan, another DEC Rainbow fan who would take an increasingly important role in the leadership of FidoNet. They came up with a new design that divided North America into nets, each consisting of many nodes. Within each net, one administrative node took on the responsibility of managing its local nodelist, accepting inbound traffic for its net, and forwarding those messages to the correct local node. Above the layer of nets sat zones, each covering an entire continent. The system still maintained one global nodelist with the phone numbers of every FidoNet computer in the world, so any node could, in theory, directly dial any other to deliver messages. The new architecture allowed the system to continue to grow, reaching almost 1,000 nodes by 1986 and just over 5,000 by 1989. Each of these nodes (itself a BBS) likely averaged 100 or so active users.

The two most popular applications were the basic email service that Jennings had built into FidoNet and Echomail, created by Jeff Rush, a BBS sysop in Dallas. Functionally equivalent to Usenet newsgroups, Echomail allowed the thousands of users of FidoNet to carry out public discussions on a variety of topics. Echoes, as the individual groups were called, had mononyms rather than the hierarchical names of Usenet, ranging from AD&D to MILHISTORY to ZYMURGY (home beer brewing).

Jennings inclined, philosophically, to anarchy, and wanted to build a neutral platform governed only by its technical standards[6]:

I said to the users that they could do anything they wanted …I’ve maintained that attitude for eight years now, and I have never had problems running BBSs. It’s the fascist control freaks who have the troubles. I think if you make it clear that the callers are doing the policing–even to put it in those terms disgusts me–if the callers are determining the content, they can provide the feedback to the assholes.

Just as with Usenet, however, the hierarchical structure of FidoNet made it possible for some sysops to exert more power than others, and rumors swirled of a powerful cabal (this time headquartered in St. Louis) seeking to take control of the system from the people. Many feared, in particular, that Kaplan or others around him would try to take the system commercial and start charging for access to FidoNet. The chief suspect was the International FidoNet Association (IFNA), a non-profit that Kaplan had founded to help defray some of the costs of administering the system (especially the long-distance telephone charges).
In 1989 those suspicions seemed to be realized when a group of IFNA leaders pushed through a referendum to make every FidoNet sysop a member of IFNA and to turn it into the official governing body of the net, responsible for its rules and regulations. The measure failed, and IFNA was dissolved instead. Of course, the absence of a symbolic governing body did not eliminate the realities of power; the regional nodelist administrators simply enacted policy on an ad hoc basis.

The Shadow of the Internet

From the late 1980s onward, FidoNet and Usenet gradually fell under the looming shadow of the Internet. By the second half of the 1990s, they had been fully assimilated by it.

Usenet became entangled in the webs of the Internet through the creation of NNTP – the Network News Transfer Protocol – in early 1986. Conceived by a pair of University of California students (one in San Diego and the other in Berkeley), NNTP allowed hosts on the TCP/IP Internet to create Usenet-compatible news servers. Within a few years, the majority of Usenet traffic flowed across such links, rather than over uucp connections on the plain old telephone network. The independent uucp network gradually fell into disuse, and Usenet became just another application running atop TCP/IP transport. The immense flexibility of the Internet’s layered architecture made it easy to absorb a single-application network in this way.

Although several dozen gateways between FidoNet and the Internet existed by the early 1990s, allowing the two networks to exchange messages, FidoNet was not a single application, and so its traffic did not migrate onto the Internet in the way that Usenet’s did. Instead, as people outside academia began looking for Internet access for the first time in the second half of the 1990s, BBSes gradually found themselves either absorbed into the Internet or reduced to irrelevance. Commercial BBSes generally fell into the first category. These mini-CompuServes offered BBS access for a monthly fee to thousands of users, and kept multiple modems for accepting simultaneous incoming connections. As commercial access to the Internet became possible, these businesses connected their BBSes to the nearest Internet network and began offering access to their customers as part of a subscription package. With more and more sites and services becoming available on the burgeoning World Wide Web, fewer and fewer users signed on to the BBS per se, and so these commercial BBSes gradually became pure Internet service providers, or ISPs. Most of the small-time hobbyist BBSes, on the other hand, became ghost towns, as users wanting to tap into the Internet flocked to their local ISPs, as well as to larger, nationally known outfits such as America Online.

That’s all very well, but how did the Internet become so dominant in the first place? How did an obscure academic system, spreading gradually across elite universities for years while systems like Minitel, CompuServe, and Usenet were bringing millions of users online, suddenly explode into the foreground, enveloping like kudzu all that had come before it? How did the Internet become the force that brought the era of fragmentation to an end?

Further Reading / Watching

Ronda Hauben and Michael Hauben, Netizens: On the History and Impact of Usenet and the Internet (online 1994, print 1997)
Howard Rheingold, The Virtual Community (1993)
Peter H. Salus, Casting the Net (1995)
Jason Scott, BBS: The Documentary (2005)
