In 1965, Patrick Haggerty, president of Texas Instruments (TI), wanted to make a new bet on the future of electronics. In that future, as he frequently expounded, the use of electronics would become “pervasive.” A decade before, he had pushed for the development of a transistor-based pocket radio to demonstrate the potential of solid-state electronics (those based on small pieces of semiconductor material cut from a wafer, instead of bulbous vacuum tubes). Now a new form of electronic component was on the rise: integrated circuits that packed dozens, or even hundreds, of components onto a single semiconductor chip. With Jack Kilby, an integrated circuit pioneer and by then one of TI’s top research directors, Haggerty conceived of a new consumer product worthy of this new age: a calculator that could “fit in a coat pocket and sell for less than $100.” At the time, a typical calculator had the size, shape, and weight of a typewriter and cost well over $1000 (over $10,000 in 2024 dollars).[1]
Call it prescient or premature; either way, Haggerty’s goal proved just out of reach. By the end of 1966, Kilby and his team had built a prototype calculator (code-named “Cal Tech”): the most compact ever made, at less than three pounds and about four by six by two inches. But it contained four half-inch integrated circuits of about 1,000 transistors each, and TI could produce such dense chips only at an experimental scale: each wafer of silicon printed with copies of the Cal Tech circuits yielded only a small number of viable, error-free chips. No one had yet built a semiconductor plant that could realize Haggerty’s dream. But the world’s electronic manufacturing capabilities were advancing at an unprecedented speed. By 1972, what had been impossible suddenly became commonplace, and pocket calculators began selling by the millions.[2]
The story of the pocket calculator provides the perfect preamble to the story of the personal computer. Many of the actors that figure in it—semiconductor manufacturers Intel and MOS Technology; calculator makers Commodore, MITS, and Sinclair; electronics retailer Radio Shack; and even individual calculator enthusiasts like Steve Wozniak—will recur in the story of the personal computer.
Like computers, calculators went from specialized tools purchased by organizations to consumer goods found in millions of homes. In the process, they raised cultural questions that would reappear with the personal computer: in both cases, many people found themselves acquiring the latest electronic fad without a clear sense of what it was actually for.
Most importantly, the calculator put in place the commercial conditions for the personal computer’s emergence: it not only served as the main engine of the relentless drop in semiconductor price-per-component in the first half of the 1970s, but also induced the creation of the first commercial microprocessor, which put all the core functionality of a computer on a single chip.
However, the calculator differs from the personal computer in one very significant way: calculators slid directly down the market from pricey machines owned by organizations to birthday gifts handed out by middle-class parents. At incredible speed (far faster than computers), calculators became as commonplace as wristwatches; indeed, it wasn’t long before manufacturers put calculators in wristwatches. Though the market leaders changed rapidly as the technology advanced, there was no disruption from below, no new path blazed by a doughty band of rugged entrepreneurs. We will have to consider later just why that was the case.
Integrated Circuits
As is obvious from the story of the Cal Tech, the essential prerequisite to all of these developments lay in the microchip, or integrated circuit. The pocket calculator posed a few other engineering challenges, most notably how to display numbers while running off a small battery, but none of them mattered without access to chips that could pack hundreds or thousands of circuit components (and the wires to connect them) into a tiny area. The primary logic component for calculators (and most advanced electronics) was the transistor: a tiny sliver of semiconductor, doped with impurities to let it act as an electronic switch, then typically packaged into a metal or plastic container about the size of a pencil eraser. Three wires protruded from the package for making connections to other components (two for the main voltage flow passing through the transistor, and a third control wire to turn that flow on or off). No matter how tightly packed, thousands of such independent transistors could never fit inside a case that would fit inside your pocket, to say nothing of what it would cost the manufacturer to pay workers to wire all those components together.
Everyone in the industry knew that long-term progress in electronics depended on some kind of solution to the assembly problem. As the number of components in circuits grew, the number of manufacturing steps also grew, and manufacturing error rates multiplied—a device with one thousand components, each of which a skilled worker could connect with 99.9% reliability, had a 63% chance of having at least one defective connection. The search for an end to this “tyranny of numbers” drove many research projects in the late 1950s, most of them funded by the various arms of the United States military, all of whom foresaw an unending appetite for ever-more-sophisticated electronics to control their weaponry and defense systems. The military-funded projects included “micro-modules” (individual components that would snap together like tinkertoys), “microcircuits” (wires and passive components etched onto a ceramic substrate into which active components, like transistors, could be connected), and “molecular electronics” (nanotechnology avant la lettre).[3]
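The arithmetic behind that 63% figure, assuming each of the thousand connections fails independently, is a one-line calculation:

```latex
% Chance that a 1,000-connection device has at least one bad joint,
% when each connection succeeds with probability 0.999:
P(\text{at least one defect}) = 1 - 0.999^{1000} \approx 1 - e^{-1.0005} \approx 0.632
```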
Fairchild Semiconductor finally cracked the puzzle in 1959, drawing on a new transistor design with a flat surface and silicon dioxide coating developed at Bell Labs. Jean Hoerni, one of the so-called “traitorous eight” who had recently defected from Shockley Semiconductor to form Fairchild, figured out how to print transistors onto a wafer by first letting the silicon grow a protective coating of oxide, then etching holes that could be doped with impurities to create the transistors: the doping would not affect the areas still covered with oxide. Fairchild’s head of research and development, Robert Noyce, realized that this “planar process” of transistor manufacture would enable integrated circuits. No one had been able to deposit metal wires directly onto raw semiconductor because it would destroy the components underneath. But Noyce saw that the protective oxide layer would prevent that, while still allowing the wires to link up the transistors through the carefully etched windows.[4]
The integrated circuit broke the tyranny of numbers by effectively eliminating all human labor from the manufacturing process, reducing circuit-building to a chemical process of deposition, etching, and doping. Making ever-smaller components and ever-denser circuits became a mere matter of process improvements, with no fundamental barrier to higher density and improved yields other than “engineering effort,” as Fairchild researcher Gordon Moore observed in the paper that gave birth to “Moore’s Law.”[5]
As late as 1965, when Moore wrote his famous paper, integrated circuits remained an expensive, niche technology used mainly in aerospace systems for the military and NASA, where reliability, size, and weight were all-important. But thanks to the ever-greater density and falling costs that Moore predicted, that changed very quickly. By the end of the decade, it had become reasonable to consider putting integrated circuits into a mere calculator.
The Calculator Business
At mid-century, the typical calculator very much resembled a typewriter, complete with a moving carriage, and cost somewhere in the neighborhood of $1000. It operated mechanically, usually with the aid of an electric motor to drive the machinery.[6] Then, over the course of the 1950s and 1960s, the industry recapitulated the previous two decades of the history of the programmable computer, developing calculators based on relays (electromagnetic switches), vacuum tubes, and then transistors.[7]
In the mid-1960s, the market for electronic calculators amounted to a bit over one hundred million dollars. They cost more than their mechanical equivalents and were no smaller—their advantage lay in speed of calculation, quiet operation, and the ability to compute non-linear functions. At first, American office equipment makers like SCM (Smith-Corona Marchant), Friden (a division of Singer, once famous for its sewing machines), and Burroughs dominated the market, but new competitors appeared later in the decade, especially from Japan, just as microchips reached the crossover point where they became economical for mass-market applications.[8]
Throughout the 1960s and into the 1970s, the U.S. retained a dominating lead in semiconductor manufacturing. Only American factories owned by American companies could produce the most advanced chips. Japanese manufacturers lagged behind, but offered significantly cheaper labor for assembly of electronic components. These relative economic advantages proved important to how the pocket calculator market played out. In the late 1960s, even the most compact electronic calculators still required the assembly of many chips and other components, and so Japanese companies combined their growing manufacturing expertise with lower labor costs to undercut American calculator makers—buying chips from U.S. factories, shipping them to Japan for assembly, and then shipping the assembled calculators back to American buyers.
Casio, Sharp, and Canon (formerly makers of electro-mechanical calculators, radios and televisions, and cameras, respectively) all became major players in the calculator market in this way. Driven by competition and the ever-improving economics of semiconductor manufacturing, prices plummeted and the market ballooned. By 1970, half of all metal-oxide semiconductor (MOS) chips (the most rapidly growing manufacturing process, and soon the only one that mattered) went into calculators.[9] Thus the calculator became both the prime beneficiary and the prime mover of the virtuous cycle of Moore’s Law: greater sales volume funded production improvements, which reduced costs and produced greater sales volume. Already by 1971, the dream of the Cal Tech came within reach: calculators the size, if not yet of a pocket, then at least of a paperback, costing $200 or less.
A bevy of small calculator makers rushed into the market to soak up the growing demand—including MITS, the small New Mexican electronics outfit that would later produce the Altair, and Sinclair Radionics, a small British firm whose principals would go on to found two of the most successful personal computer businesses in the United Kingdom. An important player in the early personal computer industry, MOS Technology, grew its initial business on the back of the calculator market: calculator maker Allen-Bradley, not wanting to be solely dependent on Texas Instruments for chips, turned to the upstart MOS as a second-source supplier. Shortly thereafter, MOS became a supplier for another calculator maker, Commodore Business Machines, which would later try its hand at personal computers as well. We will have more to say about all of these companies later.
Many of the new entrants were American firms. Bowmar, a small electronics outfit from Fort Wayne, Indiana, became, for a few years, the largest producer of pocket calculators. For the pendulum had swung again: by this time integrated circuits had become so dense (sometimes containing thousands of transistors) that calculators required only a handful of chips, and labor became a smaller factor in production costs. It no longer paid to ship chips across the ocean for assembly in Japan. American calculator makers boomed as the market continued to swell with incredible speed: in 1973, seven million pocket calculators were sold worldwide (a figure that personal computers did not reach until 1985), and by this time many models were indeed truly pocket-sized.[10]
Then, in 1974, the market shifted yet again. Semiconductor manufacturers, tired of watching their chips go out the door to calculator makers who profited by throwing them into a plastic case along with a few buttons, decided to cut out the middleman and sell calculators themselves. Prices plummeted yet further, squeezing profit margins and threatening even roaring successes like Bowmar with destruction unless they could vertically integrate and start making their own chips; smaller companies had no hope of that. Caught in a vise between falling retail prices and the wholesale cost of chips, all of the personal-computer-adjacent companies we met earlier—MITS, Sinclair, MOS Technology, and Commodore—came to a crisis point.
The Calculator and the Microprocessor
All of this churn in the calculator market had a very important side effect: the introduction of the first commercial microprocessors. The microprocessor is sometimes called “a computer on a chip,” but that is a slight exaggeration. These chips did not by themselves constitute a complete computer, but they contained all the basic logic and arithmetic functions needed to perform any computation, and (unlike most chips at the time, which were hard-wired for a particular task) they accepted programmed instructions, allowing a single chip to support many different applications.
It began with a Japanese company. The Nippon Calculating Machine Corporation was one of many mechanical calculator makers that pivoted (or tried to pivot) to electronic calculators in the late 1960s. To project a high-tech image more in line with its new products, it rebranded in 1967 as the Business Computer Corporation, or Busicom.[11]
Almost immediately, however, Busicom faced a new challenge that required another pivot: rival Japanese calculator maker Sharp partnered with American chipmaker Rockwell to create a calculator that crammed all the necessary components into just four chips. Busicom began looking for its own U.S. partner who could work the same kind of microchip magic, and found two: Mostek for its high-volume calculator designs (not to be confused with MOS Technology), and Intel for fancier models. Both had been recently founded by employees breaking away from established companies: Mostek from Texas Instruments and Intel from Fairchild Semiconductor (where the integrated circuit had been born).
Both companies planned to exploit the new metal-oxide semiconductor (MOS) manufacturing process, which could cram hundreds or even thousands of transistors onto a single chip. The Intel founders, who included Robert Noyce, aimed to mass-produce MOS semiconductor memories for computers, displacing the then-dominant magnetic core memory. But that business would take time and money to grow; they were happy to have side gigs like the Busicom contract to generate income in the meantime.
In June 1969, three Busicom employees arrived at Intel’s Santa Clara, California offices to kick off the collaboration, among them twenty-five-year-old Masatoshi Shima. Shima had studied chemistry at university, but couldn’t find a job in that field when he graduated in 1967, so he became a programmer at Busicom instead, then transferred to a hardware engineering position at the company’s Osaka plant. Because of his programming experience, he was assigned to develop the “high-end” design for what became the Intel collaboration: a new set of chips that would use programmed logic (much like a computer, but with a fixed program stored on a read-only memory chip, or ROM) rather than logic hardcoded into the circuits. The same chipset, Busicom hoped, could be reused in a variety of calculator models and other devices simply by supplying a different ROM.
Shima presented Busicom’s design to Intel: about eight chips (the exact number varies depending on the account), including two to perform decimal arithmetic; a shift register to store intermediate results; chips for interfacing with the display, keyboard, and printer; and the ROM. Intel gave the responsibility for executing the plan to one of its experienced engineers, Ted Hoff—but Hoff didn’t like it.
Hoff, a New York-born engineer in his early thirties who had worked with computers during postgraduate work at Stanford, believed that the many large chips required by the Busicom design would make it impossible to build at the contracted price. He came up with an alternative design based on the streamlined architecture of the computer he had worked with most recently, Digital Equipment’s PDP-8. It would have a much leaner instruction set than the Busicom design, offloading complexity by storing intermediate data in a memory chip, Intel’s bread and butter. In total, Hoff’s proposal called for only four chips: the ROM (later designated the 4001), the memory (4002), a shift register for input and output (4003), and the processor to execute instructions (4004). This was half the number of chips proposed by Busicom, and, given the greater simplicity of the chips, it would sell for less than half as much. With a single chip executing all of the programmed logic (the 4004), this was the first microprocessor to go into commercial development.[12]
Noyce loved the microprocessor concept, and his backing gave Hoff the cover to push the idea forward, even though Hoff’s official mandate consisted only of ushering the Busicom design through to production. Though Noyce made some enthusiastic pronouncements about how everyone would someday own their own computer, the slow, barely capable 4004 in no way threatened the computer industry. What made the microprocessor significant for Intel in 1971 was not that it was an inexpensive computer, but that it computerized electronics.[13] The distinction between fixed hardware and malleable software made computers incredibly flexible; introducing the same distinction into the world of electronics made it possible to create new devices without the expense in time and money (to buyer or seller) of designing and manufacturing new chips. Instead, a client could simply write new instructions and flash them onto a ROM to customize an already existing set of chips to their needs. Integrated circuits had solved the problem of scaling circuit production; now microprocessors could solve the problem of scaling circuit design, by moving most of the design work into cheap software.
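A toy sketch may make the point concrete. The instruction names below are invented for illustration (they bear no relation to the 4004’s actual instruction set), but the principle is the one Hoff was selling: the hardware stays fixed, and swapping the contents of the ROM turns the same chipset into a different product.

```python
# Toy illustration of programmed logic: a fixed "chipset" whose behavior
# is determined entirely by the program burned into its ROM. The
# instruction set here is invented for clarity, not the Intel 4004's.

def run(rom, keys):
    """Execute a ROM program on a fixed 'processor' with a keypad buffer."""
    acc = 0.0  # the accumulator register
    for op, arg in rom:
        if op == "LOAD_KEY":      # read the next number typed on the keypad
            acc = keys.pop(0)
        elif op == "ADD_KEY":     # add the next keypad number to the accumulator
            acc += keys.pop(0)
        elif op == "MUL_CONST":   # multiply by a constant baked into the ROM
            acc *= arg
        elif op == "DISPLAY":     # show the accumulator on the display
            print(f"{acc:g}")
    return acc

# Same hardware, two different products, just by swapping the ROM:
adder_rom = [("LOAD_KEY", None), ("ADD_KEY", None), ("DISPLAY", None)]
tax_rom = [("LOAD_KEY", None), ("MUL_CONST", 1.08), ("DISPLAY", None)]

run(adder_rom, [2.0, 3.0])  # behaves as an adding machine: displays 5
run(tax_rom, [100.0])       # behaves as a sales-tax calculator: displays 108
```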
As of the summer of 1969, though, Intel had only a design sketch. Hoff hadn’t solved the many concrete engineering problems standing between design and implementation, nor did he have the time or the expertise to solve them. In early 1970, Intel hired Italian-born engineer Federico Faggin to work out the exact layouts of the four chips, and finally, in 1971, Busicom was able to sell its product, the 141-PF. It was the first calculator powered by a microprocessor.
Momentous as this may seem in retrospect, at the time it hardly made a ripple. Busicom’s breakthrough product instead came from its other collaboration, with Mostek, which resulted in the LE-120A “Handy LE”, also introduced in 1971. At just five by three by one inches, it was the first truly pocket-sized calculator.[14] Instead of a microprocessor, it used a “calculator-on-a-chip,” a single piece of silicon with over two thousand transistors that could perform all of the functions the calculator needed. Though the microprocessor emerged from the dynamics of the calculator market, it didn’t stick there: calculator sales volumes had grown so high by the early 1970s that it remained more economical to make custom calculator chips than to use general-purpose processors.
But this doesn’t mean the microprocessor was a flop. It found a growing market in a wide variety of other applications, from machine tools to automobiles. Intel almost immediately followed the 4004 with the more powerful 8008 processor, developed in parallel as part of another client project, this one with computer terminal maker Datapoint. Competitors launched their own microprocessor designs soon after.[15] For the personal computer, of course, the microprocessor was a sine qua non: a necessary (but not sufficient) precondition for its creation.
Calculator Culture
Meanwhile, within just a couple of years, the pocket calculator had exploded in market size, reaching a breadth of audience that the personal computer would take a decade of growth to match. The industry also developed very differently, expanding to a totally different set of buyers and sales channels than those traditionally served by desktop calculator makers, without the need for any disruptive entrepreneurs to show the way. Some of the same firms that had made desktop calculators transitioned smoothly to making pocket models. Many calculator makers failed, but mainly because of the shifting dynamics of semiconductor production, not because they failed to see new market opportunities.
To understand how this happened, we have to look at who was buying calculators, and why. The market developed in several waves, each reaching a different group of purchasers. Initial sales, in 1971 and early 1972, went predominantly to businessmen and -women seeking relief from the daily grind of arithmetical drudgery that pervaded nearly every profession. A device in their pocket or on their desk could now give instant, accurate answers: areas, ratios, and price estimates for building contractors; interest calculations for bankers; discounts and commissions for salesmen. On top of that, every small business owner had invoices, bills, and payrolls to tote up—the large corporations had long since computerized these operations, but the likes of bodega owners, hairdressers, and roofers could not afford to do so.
A key contrast with the personal computer, as will become clearer later, is that the source of demand was obvious—anyone who considered the various use cases could see that millions of people could justify a $100 or $200 purchase to automate their daily dose of arithmetic. The pocket calculator was also very simple. Unlike early PCs, the very first models already had enough functionality to satisfy a wide array of mass-market buyers, and there was virtually nothing to them but a handful of microchips, a small display, and a cheap plastic case. For basic four-function calculators, all of the benefit of Moore’s Law scaling went into reducing prices rather than increasing capabilities, and so prices fell very, very fast.
Low prices then drew in a second wave of buyers, hard on the heels of the first: ordinary middle-class households picking up calculators for themselves or for their children, often as birthday or Christmas gifts. In the spring of 1973, for example, Bowmar introduced a $59 model, “aimed specifically at housewives and students at about the junior high school level,” that weighed just six ounces. By the following Christmas season, 10% of Americans already owned a calculator, and models sold for as little as $17, roughly one-twentieth of typical prices in 1971.[16]
Meanwhile, yet another group of buyers fell in love with the more advanced pocket models that followed the basic four-function machines—people in math-intensive business (such as finance), or in science or engineering work, whether as professionals or students. Steve Wozniak, for example, “drooled” over the Hewlett-Packard HP-35 scientific calculator, released in 1972, and bought one as soon as he could despite the steep price of $395. He was not the only one: the HP-35 became an instant campus status symbol in science and engineering departments, and sold far beyond the manufacturer’s expectations. Beyond its nerdy cultural cachet, the HP-35 answered a clear need for more calculating power. Unlike its basic low-cost brethren, it could compute trigonometric functions, exponentials, logarithms, and more, and it could render very large or very small numbers in scientific notation. Such capabilities came up often in scientific work, and had previously required tedious table look-ups or the use of a slide rule, a more laborious and less precise tool than the calculator. Within a handful of years, the slide rule, once the signature accessory of the engineering set, all but disappeared.[17]
Other than figuring taxes each spring and toting up simple checkbook balances, the day-to-day usefulness of the calculator to ordinary consumers was less clear. Prices had fallen so far by 1973 that middle-class families could afford to buy a calculator on a whim, and fad probably drove at least as many sales as pragmatism. As calculators proliferated by the tens of millions, schools had to decide whether to embrace them, reject them, or seek some kind of wary truce. Meanwhile, a whole sub-genre of books appeared to advise the befuddled on what to do with their new devices “after you’ve balanced your checkbook, added up your expenses, or done your math homework”: eight different general-audience books on pocket calculators appeared in 1975 alone, including Oleg D. Jefimenko’s How to Entertain with Your Pocket Calculator: Pastimes, Diversions, Games, and Magic Tricks; Len Buckwalter’s 100 Ways to Use Your Pocket Calculator; and James Rogers’ The Calculating Book: Fun and Games with Your Pocket Calculator.[18]
It should come as no surprise that some of these authors also inhabited the very same electronics hobby community that would create the personal computer. Buckwalter, for example, had written a book in the early 1960s called Having Fun with Transistors, maintained a regular column on CB radio in the magazine Electronics Illustrated, and would go on to publish The Home Computer Book: A Complete Guide for Beginners in 1978. Calculators fascinated electronics hobbyists, and their magazines teemed with advertisements for calculators, articles about calculators, and ideas for building, using, or modifying calculators. This hobby interest is how a small outfit like MITS got involved with calculator manufacture in the first place.
Intersecting with this hobby community was a more loose-knit group of dreamers, mostly young men like Steve Wozniak, who had seen what a real computer could do and wanted to bring that power home. For them, the pocket calculator, especially the more sophisticated scientific or programmable models, represented a step in the right direction, a powerful almost-computer that they could hold in their hands. It is to this dream of a computer to call your own that we will turn next.