Interactive Computing: A Counterculture
In 1974, Ted Nelson self-published a very unusual book. Nelson lectured on sociology at the University of Illinois at Chicago to pay the bills, but his true calling was as a technological revolutionary. In the 1960s, he had dreamed up a computer-based writing system which would preserve links among different documents. He called the concept “hypertext” and the system to realize it (always half-completed and just over the horizon) “Project Xanadu.” In the process, he had become convinced that his fellow radicals had computers all wrong, and he wrote his book to explain why. Among the activist youth of the 1960s counterculture, the computer had a wholly negative image as a bureaucratic monster, the most advanced technology yet for allowing the strong to dominate the weak. Nelson agreed that computers were mostly used in a brutal way, but offered an alternative vision of what the computer could be: an instrument of liberation. His book was really two books bound together, each with its own front cover—Computer Lib and Dream Machines—allowing it to be read from either side until the two texts met in the middle. Computer Lib explained what computers are and why it is important for everyone to understand them; Dream Machines explained what they could be, once fully liberated from the tyranny of the “priesthood” that currently controlled not only the machines themselves, but all knowledge about them. “I have an axe to grind,” Nelson wrote,

I want to see computers useful to individuals, and the sooner the better, without necessary complication or human servility being required. …THIS BOOK IS FOR PERSONAL FREEDOM AND AGAINST RESTRICTION AND COERCION. … A chant you can take to the streets: COMPUTER POWER TO THE PEOPLE! DOWN WITH CYBERCRUD![1]

If the debt Nelson’s cri de coeur owed to the 1960s counterculture wasn’t clear enough, he made it explicit by listing his “Counterculture Credentials” as a writer, showman, “Onetime seventh-grade dropout,” “Attendee of the Great Woodstock Festival,” and more, including his astrological sign.[2]

The front covers of Ted Nelson’s “intertwingled” book, Computer Lib / Dream Machines.

Nelson’s manifesto is the most powerful piece of evidence for one popular way to tell the story of the rise of the personal computer: as an outgrowth of the 1960s counterculture. Surely more than geographical coincidence accounts for the fact that Apple Computer was born on the shores of the same bay where, not long before, Berkeley radicals had protested and Haight-Ashbury deadheads had partied? The common through line of personal liberation is clear, and Nelson was not the only countercultural figure who wanted to bring computer power to the people. Lee Felsenstein, a Berkeley engineering drop-out (and eventual graduate) with much stronger credentials in radical politics than Nelson, devoted much of his time in the 1970s to projects to make computers more accessible, such as Community Memory, which offered a digital bulletin board via public computer terminals set up at several locations in the Bay Area. In Menlo Park, likewise, anyone off the street could come in and use a computer at Bob Albrecht’s People’s Computer Company. Both Felsenstein and Albrecht had clear and direct ties to the early personal computer industry, Felsenstein as a hardware designer and Albrecht as a publisher.
The two most seminal early accounts of the personal computer’s history, Steven Levy’s Hackers: Heroes of the Computer Revolution and Paul Freiberger and Michael Swaine’s Fire in the Valley: The Making of the Personal Computer, both argued that the personal computer came into existence because of people like Felsenstein and Albrecht (whom Levy called long-haired, West Coast “hardware hackers”), and their emphasis on personal liberation through technology. John Markoff extended this argument to book length with What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer. Stewart Brand put it succinctly in a 1995 article in Time magazine: “We Owe it All to the Hippies.”[3] This story is appealing, but not quite right. The influence of countercultural figures in promoting personal computing was neither necessary nor sufficient to explain the sudden explosion of interest in the personal computer caused by the Altair. Not necessary, because the Altair existed primarily because of two people who had nothing to do with the radical left or hippie idealism: the Albuquerque Air Force veteran and electronics lover Ed Roberts, and the New York hobby magazine editor Les Solomon. Not sufficient, because it addresses only supply, not demand: why, when personal computers did become available, were there many thousands of takers out there looking to buy the personal liberation that men like Nelson and Albrecht were selling? These people were not, for the most part, hippies or radicals either. The countercultural narrative seems plausible when one zooms in on the activities happening around the San Francisco Bay, but the personal computer was a national phenomenon; orders for Altairs poured into Albuquerque from across the country. Where did all of these computer lovers come from?
Getting Hooked

In the 1950s, researchers working at a laboratory affiliated with MIT synthesized an electronic concoction that, in the decades to come, transformed the world. The surprising byproduct of work on an air defense system, it proved to be highly addictive, at least to those of a certain personality type: inquisitive and creative, but also fascinated by logic and mathematics. The electronic computer, as originally conceived in the 1940s, emulated a room full of human computers. You provided it with a set of instructions for performing a complex series of calculations—a simulation of an atomic explosion, say, or the proper angle and explosive charge required to get an artillery piece to hit a target at a given distance—and then came back later to pick up the result. A “batch-processing” culture of computing developed around this model, in which computer users brought a program and data to the computer’s operators in the form of punched cards. The operators gathered these cards into batches, fed them to the computer for processing, and then extracted the results on a new set of punched cards. The user picked up the results and either walked away happy or (more often) noticed an error, scrutinized their program for bugs, made adjustments, and tried again. By the early 1960s, this batch-processing culture had become strongly associated with IBM, which had parlayed its position as the leader in mechanical data-processing equipment into dominance of electronic computing as well. The military, however, faced many problems that could not be pre-calculated and that required an instantaneous decision, calling for a “real-time” computer that could provide an answer to one question after another, with seconds or less between each response.
The first fusion of real-time problem solving with the electronic computer came in the form of a flight simulator project at MIT under the leadership of electrical engineer Jay Forrester, which, through a series of twists and turns and the stimulus of the Cold War, evolved into an air defense project with the backronym of Semi-Automatic Ground Environment (SAGE). Housed at Lincoln Laboratory, a government facility to the northwest of MIT, SAGE became a mammoth project that spawned an entirely new form of computing as an accidental side effect.

An operator interacting with a SAGE terminal with a light gun.

The SAGE system demanded a series of powerful computers (to be constructed by IBM), two for each of the air defense centers to be built across North America (one acted as a back-up in case the other failed). Each would serve multiple cathode-ray screen terminals showing an image of incoming radar blips, which the operator could select to learn more information and possibly marshal air defense assets against them. At first, the project leads assumed these computer centers would use vacuum tubes, the standard logic component for almost all computers throughout the 1950s. But the invention of the transistor offered the opportunity to make a smaller and more reliable solid-state computer. So, in 1955–56, Wesley Clark and Ken Olsen oversaw the design and construction of a small, experimental transistor-based computer, TX-0, as a proof-of-concept for a future SAGE computer. Another, larger test machine called TX-2 followed in 1957–58.[4] The most historically significant feature of these computers, however, was the fact that, after being completed, they had no purpose. Having proved that they could be built, their continued existence was superfluous to the SAGE project, so these very expensive prototypes became Clark’s private domain, to be used more or less as he saw fit.
Most computers operated in batch-processing mode because it was the most efficient way to use a very expensive piece of capital equipment, keeping it constantly fed with work to do. But Clark didn’t particularly care about that. Lincoln Lab computers had a tradition of hands-on use, going all the way back to the original flight simulator design, which was intended for real-time interaction with a pilot, and Clark believed that real-time access to a computer assistant could be a powerful means for advancing scientific research.[5]

The TX-0 at MIT, likely photographed in the late 1950s.

And so a number of people at MIT and Lincoln Lab got to have the experience of simply sitting down and conversing directly with the TX-0 or TX-2. Many of them got hooked on this interactive mode of computing. The instant feedback from the computer when trying out a program, which could then be immediately adjusted and tried again, felt very much like playing a game or solving a puzzle. Unlike in the batch-processing mode of computing that was standard by the late 1950s, in interactive computing the speed at which you got a response from the computer was limited primarily by the speed at which you could think and type. When a user got into the flow, hours could disappear like minutes. J.C.R. Licklider was a psychologist employed to help with SAGE’s interface with its human operators. The experience of interacting with the TX-0 at Lincoln Lab struck him with the force of revelation. He thereafter became an evangelist for the power of interactive computers to multiply human intellectual power via what he called “man-computer symbiosis”:

Men will set the goals and supply the motivations, of course, at least in the early years. They will formulate hypotheses. They will ask questions. They will think of mechanisms, procedures, and models. … The equipment will answer questions.
It will simulate the mechanisms and models, carry out the procedures, and display the results to the operator. It will transform data, plot graphs … In general, it will carry out the routinizable, clerical operations that fill the intervals between decisions.[6]

Ivan Sutherland was another convert: he developed a drafting program called Sketchpad on the TX-2 at Lincoln Lab for his MIT doctoral thesis and later moved to the University of Utah, where he became the founding father of the field of computer graphics. Lincoln also shipped the TX-0, entirely surplus to its needs after the arrival of TX-2, to the MIT Research Laboratory of Electronics (RLE), where it became the foundation—the temple, the dispensary—for a new “hacker” subculture of computer addicts, who would finagle every spare minute they could on the machine, roaming the halls of the RLE well past midnight. The hackers compared the experience of being in total control of a computer to “getting in behind the throttle of a plane,” “playing a musical instrument,” or even “having sex for the first time”: hyperbole, perhaps, similar to Arnold Schwarzenegger’s famous claim about the pleasures of pumping iron.[7]

It is worth pausing to note the extreme maleness of this group: not a single woman is mentioned among the MIT hackers in Steven Levy’s book on the topic. This is unsurprising, since very few women attended MIT; until 1960 they were technically allowed but not encouraged to enroll. But this severe imbalance of the sexes did not change much with time. Almost all the people who got hooked on computers as interactive computing spread beyond MIT were also men. It was certainly not the case that the computing profession as a whole was overwhelmingly male circa 1960: at that time women probably occupied a third or more of all programming jobs.
But at the time, almost all of those jobs involved neatly coiffed business people running data-processing workloads in large corporate or government offices, not disheveled hackers clacking away at a console into the wee hours. For whatever reason, men showed a much greater predilection than women to get lost in the rational yet malleable corridors of the digital world, to enjoy using computers for the sake of using computers. This fact likely produced the eventual transformation of computer science into an overwhelmingly male field, a development we may revisit later in this story. But for now, back to the topic at hand.[8]

Minicomputers: The DIY Computer

While Clark was exploring the potential of computers as a scientific instrument, his engineering partner, Ken Olsen, saw the market potential of selling small computers like the TX-0. Having worked closely with IBM on the SAGE contract, he had come away unimpressed by its bureaucratic inefficiency. He thought he could do better, and, with help from one of the first venture capital firms and Harlan Anderson, another Lincoln alum, he went into business. Warned by the head of the firm to avoid the term “computer,” which would frighten investors with the prospect of an expensive uphill struggle against established players like IBM, Olsen called his company Digital Equipment Corporation, or DEC.[9] In 1957, Olsen set up shop in an old textile mill on the Assabet River about a half-hour west of Lincoln Lab. There the company remained until the early 1990s, at the end of Olsen’s tenure and the beginning of the company’s terminal decline. Olsen, an abstemious, church-going Scandinavian, stayed in suburban Massachusetts for nearly all of his adult life; he and his wife lived out their last years with a daughter in Indiana. It is hard to imagine someone who less embodies the free-wheeling sixties counterculture than Ken Olsen.
But his business became the vanguard for, and symbol of, a computer counterculture, one that would raise a black flag of rebellion against the oppressive regime of IBM-ism and spread the joy of interactive computing far beyond MIT, sprinkling computer addicts across the country. DEC began selling its first computer, the PDP-1 (for Programmed Data Processor), in 1959. Its design bore a fair resemblance to that of the TX-0, and it proved similarly addictive to young hackers when one was donated to MIT in 1961. A whole series of other models followed, but the most ground-breaking was the PDP-8, released in 1965: a computer about the size of a blue USPS collection box for just $18,000. Not long after, someone (certainly not the strait-laced Olsen) began calling this kind of small computer a minicomputer, by analogy to the newly popular miniskirt. A DEC ad campaign described PDP-8 computers as “approachable, variable, easy to talk to, personal machines.”

A 1966 advertisement depicting various PDP-8 models juxtaposed with cuddly teddy bears. [Datamation, October 1966]

Up to that point, the small, relatively inexpensive computers that did exist typically stored their short-term memory on the magnetized surface of a spinning mechanical drum. This put a hard ceiling on how fast they could calculate. But the PDP-8 used fast magnetic core memory, bringing high-speed electronic computing within reach of even quite small science and engineering firms, departments, and labs. PDP-8s were also deployed as control systems on factory floors, and one was even placed on a tractor.
They sold in large numbers, for a computer—50,000, all told, over a fifteen-year lifespan—and became hugely influential, spawning a whole industry of competing minicomputer makers and later inspiring the design of Intel’s 4004 microprocessor.[10] In the early 1960s, IBM, under Thomas Watson, Jr., established itself as the dominant manufacturer of mainframe computers in the United States (and therefore, in effect, the world). Its commissioned sales force cultivated deep relationships with customers, which lasted well beyond the closing of the deal. IBM users leased their machines on a monthly basis, and in return they got access to an extensive support and service network, a wide array of peripheral devices (many of which derived from IBM’s pre-existing business as a maker of punched-card processing machinery), system software, and even application software for common business needs like payroll and inventory tracking. IBM expected its mainframe customers to have a dedicated data-processing staff, independent from the actual end users of the computer: people responsible for managing the computer’s hardware and software and their firm’s ongoing relationship with IBM.[11] DEC culture dispensed with all of that; it became a counterculture, representing everything that IBM was not. Olsen expected end users to take full ownership of their machine in every sense. The typical buyer was expected to be an engineer or scientist: an expert on their own needs, who could customize the system for their application, write their own software, and administer the machine themselves. IBM had technical staff with the interest and skills needed to build interactive systems. Andy Kinslow, for example, led a time-sharing project (more on time-sharing shortly) at IBM in the mid-1960s; he wanted to give engineers like himself the hands-on-the-console experience that the MIT hackers had fallen in love with.
But the eventual product, TSS/360, had serious technical limitations at launch in 1967, and was basically ignored by IBM afterwards.[12] This came down to culture: IBM’s product development and marketing were driven by the needs of its core data-processing customers, who wanted more powerful batch-processing systems with better software and peripheral support, not by the interests of techies and academics who wanted hands-on computer systems and didn’t mind getting their hands dirty. And so the latter bought from DEC and other smaller outfits. As an employee of Scientific Data Systems (another successful computer startup of the 1960s) put it:

There was, of course, heavy spending on scientific research throughout the sixties, and researchers weren’t like the businessmen getting out the payroll. They wanted a computer, they were enchanted with what we had, they loved it like a Ferrari or a woman. They were very forgiving. If the computer was temperamental you’d forgive it, the way you forgive a beautiful woman.[13]

DEC customers included federally funded laboratories, engineering firms, technical divisions of major corporate conglomerates, and, of course, universities. They worked predominantly on real-time projects in which a computer interacted directly with human users or some kind of industrial or scientific equipment: doing on-demand engineering calculations for a chemical manufacturer, controlling tracing machinery for physics data analysis, administering experiments for psychological research, and more.[14] They shared knowledge and software through a community organization called DECUS, the Digital Equipment Computer Users’ Society. IBM users had founded a similar organization, SHARE, in 1955, but it had a different culture from the start, one that derived from the data-processing orientation of IBM.
SHARE’s structure assumed that each participating organization had a computing center, distinct from its other operational functions, and it was the head of that computing center who would participate in SHARE and collaborate with other sites on building systems software (operating systems, assemblers, and the like). In the beginning, the end users of computers, who worked outside the computing center, could not participate in SHARE at all. At most DEC sites, no such distinction between users and operators existed.[15]

My father, a researcher specializing in computerized medical records, was part of the DEC culture, and co-authored at least one paper for DECUS: CJ McDonald and B Bhargava, “Ambulatory Care Information Systems Written in BASIC-Plus,” DECUS Proceedings (Fall 1973). Here he is pictured at top left, in 1973, in the terminal room for his research institute’s PDP-11. [Regenstrief Institute]

DECUS, like SHARE, maintained an extensive program library: routines for reading and writing to peripheral devices, assembling and compiling human-readable code into machine language, debugging running programs, calculating math functions not supported by hardware (e.g., trigonometric functions, logarithms, and exponents), and more.
Maintaining the library required procedures for reviewing and distributing software: in 1963, for example, users contributed fifty programs, most of which were reviewed by at least two other users, and seventeen of which were certified by the DECUS Programming Committee.[16] Aflame with the potential of interactive computing to revolutionize their fields of expertise, from education to clinical medicine, DEC devotees sometimes found their reach exceeding their grasp: at one DECUS meeting, Air Force doctor Joseph Mundie reminded “the computer enthusiasts,” with gentle understatement, “that even the PDP computer had a few shortcomings when making medical diagnoses.”[17] Though none achieved the market share of DEC, a number of competing minicomputer makers also flourished in the late 1960s in the wake of the PDP-8. They included start-ups like Data General (founded by defectors from DEC, just up the Assabet River in Hudson, Massachusetts), but also established electronics firms like Honeywell, Hewlett-Packard, and Texas Instruments. Many thousands of units were sold, exposing many more thousands of scientists and engineers to the thrill of getting their hands dirty on a computer in their own lab or office. Even among the technical elite at MIT, administrators had considered the hackers’ playful antics with the TX-0 and PDP-1 in the late 1950s and early 1960s a grotesque “misappropriation of valuable machine time.” But department heads acquiring a small ten- or twenty-thousand-dollar computer had much less reason to worry about wastage of spare cycles, and even if they did, most lacked a dedicated operational staff to oversee the machine and ensure its efficient use. Users were left to decide for themselves how to use the computer, and they generally favored their own convenience: hands-on, interactive, at the terminal.
But even as minis were allowing thousands of ordinary scientists and engineers to enjoy the thrill of having an entire computer at their disposal, another technological development began spreading a simulacrum of that experience among an even wider audience.[18]

Time-Sharing: Spreading the Love

As we have already seen, a number of people got hooked on interactive computing in and around MIT by 1960, well before the PDP-8 and other cheaper computers became available. Electronic computers could perform millions of operations per second, but in interactive mode, all of that capacity sat unused while the human at the console was thinking and typing. Most administrators—those with the responsibility for allocating limited organizational budgets—recoiled at the idea of allowing a six- or seven-figure machine to sit around idle, wasting that potential processing power, just to make the work of engineers and scientists a bit more convenient. But what if it wasn’t wasted? If you attached four, or forty, or four hundred terminals to the same computer, it could process the input from one user while waiting for the input from the others, or even process offline batch jobs in the intervals between interactive requests. From the point of view of a given terminal user, as long as the computer was not overloaded with work, it would still feel as if they had interactive access to their own private machine. The strongest early proponent of this idea of time-sharing a computer was John McCarthy, a mathematician and pioneer in artificial intelligence who came from Dartmouth College to MIT primarily to get closer access to a computer (Dartmouth had no computer of its own at the time).
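The scheduling trick at the heart of time-sharing is to slice the processor’s attention into short turns and rotate it among users. A few lines suffice to sketch the idea (a toy illustration in modern Python, with invented names; real systems like CTSS added priorities, swapping, and memory protection):

```python
from collections import deque

# Toy illustration of time-sharing: one processor interleaves many
# users' jobs in short slices, so each user at a terminal feels as if
# the machine is theirs alone. All names here are invented.

def run_time_shared(jobs, quantum=2):
    """jobs: dict of user -> remaining work units. Returns the order
    in which work units were executed."""
    ready = deque(jobs.items())
    trace = []
    while ready:
        user, remaining = ready.popleft()
        slice_ = min(quantum, remaining)
        trace.extend([user] * slice_)   # processor works on this user
        remaining -= slice_
        if remaining:                   # not finished: back of the queue
            ready.append((user, remaining))
    return trace

order = run_time_shared({"ann": 3, "bob": 5, "cal": 1}, quantum=2)
print(order)
```

No user waits for another to finish completely: "cal" gets a turn before "bob" is done, which is exactly why a terminal user perceived the shared machine as a private one.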
Unsatisfied with the long turnaround that batch processing imposed on his exploratory programming, he proposed time-sharing as a way of squaring interactive computing with the other demands on MIT’s Computation Center.[19] McCarthy’s campaigning eventually spurred an MIT group under Fernando “Corby” Corbató to develop the Compatible Time-Sharing System (CTSS)—so called because it could operate concurrently with the existing batch-processing operations on the Computation Center’s IBM computer. McCarthy also directed the construction of a rudimentary time-sharing system on a PDP-1 at Bolt, Beranek, and Newman, a consulting firm with close ties to MIT. This proved that a less powerful computer than an IBM mainframe could also support time-sharing (albeit on a smaller scale), and indeed even PDP-8s would later host their own time-sharing systems: a PDP-8 could support up to twenty-four separate terminals, if configured with sufficient memory.[20] The most important next steps taken to extend the reach of time-sharing specifically, and interactive computing generally, occurred at McCarthy’s former employer, Dartmouth. John Kemeny, head of the Dartmouth math department, enlisted Thomas Kurtz, a fellow mathematician and liaison to MIT’s Computation Center, to build a computing center of their own at Dartmouth. But they would do it in a very different style. Kemeny was one of several brilliant Hungarian Jews who fled to the U.S. to escape Nazi persecution. Though of a younger generation than his more famous counterparts such as John von Neumann, Eugene Wigner, and Edward Teller, he stood out enough as a mathematician to be hired onto the Manhattan Project as a mere Princeton undergraduate in 1943. His partner, Kurtz, came from the Chicago suburbs, but also passed through Princeton’s elite math department, as a graduate student.
He began doing numerical analysis on computers right out of college in the early 1950s, and his loyalties lay more with the nascent field of computer science than with traditional mathematics.

Kurtz (left) and Kemeny (right), inspecting a GE flyer for a promotional shot.

The pair started in the early 1960s with a small drum-based Librascope LGP-30 computer, operated in a hands-on, interactive mode. By this time both men were convinced that computers had acquired a civilizational import that would only grow. Having seen undergraduates write successful programs in LGP-30 assembly, they also became convinced that understanding and programming computers should be a required component of a liberal education. This kind of expansive thinking about the future of computing was not unusual at the time: other academics were writing about the impact of computers on libraries, education, commerce, privacy, politics, and law. As early as 1961, John McCarthy was giving speeches about how time-sharing would lead to an all-encompassing computer utility that would offer a wide variety of electronic services, served up from computers to home and office terminals via the medium of the telephone network.[21] Kurtz proposed that a new, more powerful computer be brought to Dartmouth that would be time-shared (at the suggestion of McCarthy), with terminals directly accessible to all undergraduates: the computer equivalent of an open-stack library. Kemeny applied his political skills (which would eventually bring him the presidency of the university) to sway Dartmouth’s leaders, while Kurtz secured grants from the NSF to cover the costs of a new machine.
General Electric, which was trying to elbow its way into IBM’s market, agreed to a 60% discount on the two computers Kemeny and Kurtz wanted: a GE-225 mainframe for executing user programs and a Datanet-30 (designed as a message-switching computer for communication networks) for exchanging data between the GE-225 and the user terminals. They called the combined system the Dartmouth Time-Sharing System (DTSS). It did not benefit only Dartmouth students: the university became a regional time-sharing hub through which students at other New England colleges and even high schools got access to computing, via remote terminals connected to DTSS by telephone. By 1971 this network included fifty schools in all, encompassing a total user population of 13,000.[22]

Kemeny teaching Dartmouth students about the DTSS system in a terminal room.

Beyond this regional influence, DTSS made two major contributions of wider significance to the later development of the personal computer. First was a new programming language called BASIC. Though some students had proved apt with machine-level assembly language, it was certainly too recondite for most. Kemeny and Kurtz agreed that to serve all undergraduates, DTSS would need a more abstract, higher-level language that could be compiled into executable code. But even FORTRAN, the most popular language of the time in science and engineering fields, lacked the degree of accessibility they strove for. As Kurtz later recounted, by way of example, it had an “almost impossible-to-memorize convention for specifying a loop: ‘DO 100, I = 1, 10, 2’. Is it ‘1, 10, 2’ or ‘1, 2, 10’, and is the comma after the line number required or not?” They devised a more approachable language, implemented with the help of some exceptional undergraduates.
The equivalent BASIC loop syntax, FOR I = 1 TO 10 STEP 2, demonstrates the signature feature of the language: the use of common English words to create a syntax that reads somewhat like natural language.[23] The second contribution was DTSS’s architecture itself, which General Electric borrowed to set up its own time-sharing services, not once but twice: the GE-235 and Datanet-30 architecture became GE’s Mark I time-sharing system, and a later DTSS design based on the GE-635 became GE’s Mark II time-sharing system. By 1968, many firms had set up time-sharing computer centers to which customers could connect computer terminals over the telephone network, paying for time by the hour. Over 40% of this $70 million market (comprising tens of thousands of users) belonged to GE and its Dartmouth-derived systems. The paying customers included Lakeside School in Seattle, whose Mothers’ Club raised the funds in 1968 to purchase a terminal with which to access a GE time-sharing center. Among the students exposed to programming in BASIC at Lakeside were eighth-grader Bill Gates and tenth-grader Paul Allen.[24]

Architecture of the second-generation DTSS system at Dartmouth, circa 1971.

GE’s marketing of BASIC through its time-sharing network accelerated the language’s popularity, and BASIC implementations followed for other manufacturers’ hardware, including DEC’s and even IBM’s. By the 1970s, helped along by GE, BASIC had established itself as the lingua franca of the interactive computing world. And what BASIC users craved, above all, were games.[25]

A Culture of Play

Everywhere that the culture of interactive computing went, play followed. This came in the obvious form of computer games, but also in a general playful attitude towards the computer, with users treating the machine as a kind of toy and the act of programming and using it as an end in itself, rather than a means towards accomplishing serious business.
The most famous instance of this culture of play in the early years of MIT hacking came in the form of the contest of reflexes and wills known as Spacewar!. The PDP-1 was unusual for its time in having a two-dimensional graphical display in the form of a circular cathode-ray-tube (CRT) screen. Until the mid-1970s, most people who interacted with computers did so via a teletype. Originally invented for two-way telegraphic messaging, these machines could take in user input like a normal typewriter, send that input over the wire to a remote recipient (the computer, in this case), and then automatically type out the characters received over the wire in response. Because of its lineage in the SAGE air defense program, however, the MIT PDP-1 also came equipped with a screen designed for radar displays. The MIT hackers had already exercised their playfulness in the form of several earlier games and graphical demos on the TX-0, but it was a hanger-on with no official university affiliation, Stephen “Slug” Russell, who created the initial version of Spacewar!, inspired by the space romances of E.E. “Doc” Smith. The game reached a usable form by about February 1962, allowing two players controlling rocket ships to battle across the screen, hurling torpedoes at one another’s spaceships. Other hackers quickly added enhancements: a star background that matched Earth’s actual night sky, a sun with gravity, hyperspace warps to escape danger, a score counter, and more. The resulting game was visually exciting, tense, and skill-testing, encouraging the MIT hackers to spend many late nights blasting each other out of the cosmos.[26] Spacewar!’s dependence on a graphical display limited its audience, but Stanford became a hotbed of Spacewar! after John McCarthy moved there in 1962, and its use is also well-attested at the University of Minnesota.
In 1970, Nolan Bushnell started his video game business (originally called Syzygy, later Atari) to create an arcade console version of the game, which he called Computer Space. The game’s influence lasted into the 1990s, with the release of Star Control and its epic sequel (The Ur-Quan Masters), which introduced the classic duel around a star to my generation of hobbyists.[27] The large majority of minicomputer users who lacked a screen did not, however, lack for games. Teletype games relied on text input and output, but could be just as compelling, ranging from simple guessing games up to rich strategy games like chess. Enthusiasts exchanged paper tapes among themselves, but DECUS also helped to spread information about games and game programs among the DEC user base. The very first volume of the DECUS newsletter, DECUSCOPE, from 1962, contains an homage to Spacewar!, and a simple dice game appeared in the program library available to all members in 1964. By November 1969, the DECUS software catalog listed thirty-seven games and demos, including simple games like hangman and blackjack, but also more sophisticated offerings like Spacewar! and The Sumer Game, a Bronze Age resource-management simulation. The catalog of scientific and engineering applications, the primary reason for most owners to have a minicomputer in the first place, numbered fifty-eight.[28] Playfulness could also be expressed in forms other than actual games. The MIT hackers, for example, wrote a variety of programs simply for the fun of it: a tinny music generator, an Arabic-to-Roman-numeral converter, an “Expensive Desk Calculator” for doing simple arithmetic on the $120,000 PDP-1, an “Expensive Typewriter” for composing essays.
Using the computer to efficiently achieve some real-world outcome did not necessarily enter their minds: many worked on tools for writing and debugging programs without much thought to using the tools for anything other than more play; often “the process of debugging was more fun than using a program you’d debugged.” As the interactive computing culture expanded from minicomputers to time-sharing systems, fewer and fewer of its acolytes had the refined taste and technical skill required to extract joy from the creation of compilers and debuggers; but many of these new users could create computer games in BASIC, and all could play them. By about 1970, BASIC gaming had become by far the most widespread culture of computer-based play (though not the only one; the University of Illinois / Control Data Corporation PLATO system, for example, constituted its own distinct sub-culture). As with the earlier minicomputer teletype games, almost all of these BASIC games had textual interfaces, because hardly anyone yet had access to a graphical display. Dave Ahl, who worked at DEC as an educational marketing manager, began including code listings for BASIC games in his promotional newsletter, EDU. Some were of his own creation (like a conversion of The Sumer Game called Hammurabi); others were contributed by high school and college students using DEC systems at school. They proved so popular that DEC published a compilation in 1973, 101 BASIC Computer Games, which went through three printings. After leaving the company, Ahl wisely retained the rights, and went on to sell over a million copies to computer buyers in the 1980s.[29] While many of these games were derivative of existing board or card games, others, like Spacewar!, created whole new forms of play, unique to the computer. Unlike Spacewar!, most of these were single-player experiences that relied on the computer to hide information, gradually revealing a novel world to the user as they explored.
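The hidden-information mechanic at the heart of these games is simple enough to sketch in a few lines. Here is a hypothetical reconstruction in modern Python (purely illustrative, not any historical listing; the originals were line-numbered BASIC): the program conceals a position on a grid, and each probe by the player returns only a distance clue, so the world is revealed one turn at a time.

```python
# A sketch of the hidden-information loop behind early teletype games:
# the computer knows a secret location; the player learns about the world
# only through clues printed after each guess.

def distance(a, b):
    """Straight-line grid distance, rounded as a teletype might print it."""
    return round(((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5, 1)

def hunt(hidden, guesses):
    """Play out a list of (x, y) guesses against a hidden position.
    Returns the clue given after each guess: a distance, or 'FOUND'."""
    clues = []
    for g in guesses:
        if g == hidden:
            clues.append("FOUND")
            break  # the hider is caught; the game ends
        clues.append(distance(g, hidden))
    return clues

# Example: the hider sits at (3, 4) on the grid.
print(hunt((3, 4), [(0, 0), (5, 5), (3, 4)]))  # [5.0, 2.2, 'FOUND']
```

Every clue narrows the space of possibilities without revealing the answer outright, which is what made even a bare teletype session feel like exploring an unseen world.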
Hide and Seek, for example, a simple game written by high school students about searching a grid for a group of hiders, evolved into a more complex searching game called Hunt the Wumpus, with many later variants. Computer addicts overlapped substantially with Star Trek fans, and so a genre of Star Trek strategy games also emerged. The most popular version, in which the player hunts Klingons across the randomly-populated quadrants of the galaxy, originated with Mike Mayfield, an engineer who originally wrote it for a Hewlett-Packard (HP) minicomputer (presumably the one he used at work). DECUS was not the only organization sharing program libraries, and Mayfield’s Star Trek became part of the HP library, whence it found its way to Ahl, who converted it to BASIC. Other versions followed, such as Bob Leedom’s 1974 Super Star Trek.[30] The practices of the BASIC gaming community made it very easy for gaming lineages to evolve in this way, because every game was distributed textually, as BASIC code. If you were lucky, you got a paper or magnetic tape from which you could automatically read the code into your computer’s memory. If not (if you wanted to try out a game from Ahl’s book, for example), you were in for hours of tedious and error-prone typing. But in either case, you had total access to the raw source code. You could read it, understand it, and modify it. If you wanted to make Ahl’s Star Trek slightly easier, you could modify the phaser subroutine on line 3790 to do more damage. If you were more ambitious, you could go to line 1270 and add a new command to the main menu—make an inspiring speech to the crew, perhaps?

A selection of the code listing for Civil War, a simulation game created by high school students in Lexington, Massachusetts in 1968, and included in Ahl’s 101 BASIC Computer Games book. Typing something like this into your own computer required a great deal of patience.
[Ahl, 101 Basic Computer Games, 81]

Perhaps the most prolific game author of the era, Don Daglow, got hooked on a DEC PDP-10 in 1971 through a time-sharing terminal installed in his dorm at Pomona College, east of Los Angeles. Over the ensuing years he authored his own version of Star Trek, a baseball game, a dungeon-exploration game based on Dungeons & Dragons, and more. His prolific output owed much to his long tenure at Pomona, where he had consistent access to the computer: nine years in total as an undergraduate, graduate student, and then instructor.[31] By the early 1970s, many thousands of people like Daglow had discovered the malleable digital world that lived inside of computers. If you could master its rules, it became an infinite erector set, out of which you could reconstruct an ancient, long-dead civilization or fashion a whole galaxy full of hostile Klingons. But unlike Daglow, most of these computer lovers were kept at arm’s length from the object of their desire. Perhaps they could use the university computer at night as undergraduates, but they lost that privilege upon graduation a few years later. Perhaps they could afford to rent a few hours of access to a time-sharing service each week; perhaps they could visit a community computing center (like Bob Albrecht’s in Menlo Park); perhaps, like Mike Mayfield, they could cadge a few hours on the office computer for play after hours. But best of all would be a computer at home, to call their own, to use whenever the impulse struck. Out of such longings came the demand for the personal computer. Next time we will look in detail at the story of how that demand was satisfied, and by whom.