Tools For Thought by Howard Rheingold
April, 2000: a revised edition of Tools for Thought is available from MIT Press, including a revised chapter with 1999 interviews of Doug Engelbart, Bob Taylor, Alan Kay, Brenda Laurel, and Avron Barr. ISBN: 9780262681155 (1985 edition ISBN: 0262681153).
The idea that people could use computers to amplify thought and communication, as tools for intellectual work and social activity, was not an invention of the mainstream computer industry or orthodox computer science, nor even of homebrew computerists; their work was rooted in older, equally eccentric, equally visionary work. You can't really guess where mind-amplifying technology is going unless you understand where it came from.
- HLR

index
Chapter One: The Computer Revolution Hasn't Happened Yet
Chapter Two: The First Programmer Was a Lady
Chapter Three: The First Hacker and his Imaginary Machine
Chapter Four: Johnny Builds Bombs and Johnny Builds Brains
Chapter Five: Ex-Prodigies and Antiaircraft Guns
Chapter Six: Inside Information
Chapter Seven: Machines to Think With
Chapter Eight: Witness to History: The Mascot of Project Mac
Chapter Nine: The Loneliness of a Long-Distance Thinker
Chapter Ten: The New Old Boys from the ARPAnet
Chapter Eleven: The Birth of the Fantasy Amplifier
Chapter Twelve: Brenda and the Future Squad
Chapter Thirteen: Knowledge Engineers and Epistemological Entrepreneurs
Chapter Fourteen: Xanadu, Network Culture, and Beyond
Footnotes
Chapter Eleven:
The Birth of the Fantasy Amplifier

When millions of portable, affordable, imagination amplifiers fall into the hands of eight-year-old children, look for Alan Kay somewhere in the plot. He has always been too impatient to wait for someone else to bring him what he wanted. And he's always found ways to create what he wanted if it didn't exist. For the past fifteen years, his sights have been set on handheld, full-color, stereophonic, artificially intelligent, information representation toys. And he wants them by the tens of millions. They don't exist yet, so he's enlisted some formidable allies to help him create them.

Fame, fortune, or even the more esoteric career ambitions of top-notch software professionals do not seem to motivate Dr. Kay, now a "research fellow" for Apple, formerly "chief scientist" at Atari Corporation. Becoming another Silicon Valley millionaire or accepting the offer of an endowed chair at MIT have not interested him as much as the prospect of putting the power to imagine into the hands of every bright kid who got thrown out of a classroom.

Ever since he learned to read at the age of two and a half, Alan Kay has been accustomed to doing things his own way and letting the rest of the world catch up later. At the same time he was close to flunking out of the eighth grade, primarily for insubordination, he was one of television's original "Quiz Kids." Ten years before he coined the term "personal computer," before Atari or PARC existed, and before another pair of bright insubordinates named Wozniak and Jobs created a new meaning for that good old American word "Apple," Alan Kay was demonstrating FLEX, a personal computer in all but name, to the ARPA graduate students' conference.

Alan is now in his early forties, and is acknowledged by his peers, if not yet the general public, as one of the contemporary prophets of the personal computer revolution. Now his goal is to build a "fantasy amplifier," a "dynamic tool for creative thought" that is powerful enough, small enough, easy enough to use, and inexpensive enough for every schoolkid in the world to have one. He has the resources and the track record to make you believe he'll do it.

Alan Kay doesn't fit the popular image of the arrogant, antisocial hacker, the fast-lane nouveau micromillionaire, or the ivory tower computer scientist. He wears running shoes and corduroys. He has a small, meticulous moustache and short, slightly tousled dark hair. He's so imageless you could pass him in the halls of the places he works and not notice him, even though he's the boss. Which isn't to say that he's egoless or even modest. He loves to quote himself, and often prefaces his homilies with phrases like "Kay's number one law states . . . ."

When I first encountered him, between his stint as director of the legendary "Learning Research Group" at Xerox PARC, and his present position as a kind of "visionary at large" for Apple, Dr. Kay and his handpicked team at Atari were working under tight secrecy, with a budget that was rumored to be somewhere between $50 million and $100 million, to produce something that nobody in the corporation ever described to anybody outside the corporation. But anybody who has ever talked to him, or read something he has written about his dreams, can guess the general thrust of Kay's Atari project, and the probable direction of his current work at Apple. He has been moving toward realizing his dream, project by project, prototype by prototype, innovation by innovation, ever since he was a graduate student.

Being the kind of person he is didn't make it easy for Alan to get an education. At the beginning, he knew more than all of his classmates and most of his teachers, and he didn't mind demonstrating it aloud — a trait that got him thrown out of classrooms and beaten up on playgrounds.

Fortunately for him and for all of us who may benefit from his creations in the future, Alan was already well armored in his mind and imagination, where it really counted, by the time his teachers and classmates got ahold of him. For Alan, being ahead of everybody else started out as a pleasure and quickly turned into a survival trait — which meant he didn't do too well in school, or anyplace else, until the summer of his fifteenth year, when "a music camp in Oneonta, New York, changed my entire life."

Music became the center of his life. In many ways, it still is. He commutes to Silicon Valley from his home in Brentwood, 300 miles away, mostly because he doesn't want to be away from his homemade pipe organ for too long. And he still goes to music camp every summer. He never understood why his two favorite toys — books and musical instruments — could not be combined into a single medium capable of dealing with both sounds and symbols. He worked as a professional jazz and rock guitarist for ten years. When it looked like he was about to be drafted, Kay joined the U.S. Air Force as a navigational cadet. In the Air Force, he "wore out a pair of shoes doing insubordination duty," but he also learned that he had a knack for computer programming.

After he finished his Air Force duty, the National Center for Atmospheric Research was eager to use Kay's programming talent to pay his way through the University of Colorado. He earned a degree in biology, but his college grades were as mixed as they had always been, because of his habit of concentrating intently on only those things that interested him. Through what Alan now calls "sheer luck," he came to the attention of somebody smart enough to actually teach something to a smartass like Alan Kay — and bold enough to admit a student with an undergraduate record that read more like a rap sheet than a transcript.

The man who gambled on Kay's checkered history in academia was David Evans, the chairman of the computer science department at the University of Utah, a place that was to become one of the centers of the augmentation community by the mid-1960s. Like so many others who assumed positions of leadership in the field of interactive computer systems design, Evans had been involved in early commercial computer research and with the ARPA-funded groups that created time-sharing.

"Those career pathways of ARPA project leaders and their graduate students repeatedly intertwined," Kay recalls. "An enormous amount of work was done by a few people who kept reappearing in different places over the years. People frequently migrated back and forth from one ARPA project or another. ARPA funded people rather than projects, and they didn't meddle for an extended period. Part of the genius of Licklider and Bob Sproull was the way this moving around contributed to the growth of a community."

One of the people Evans managed to recruit for the Utah department, one who had an impact not only on Alan Kay but on the entire course of personal computing, was Ivan Sutherland, the graduate student and protégé of Claude Shannon and J. C. R. Licklider who single-handedly created the field of computer graphics as a part of his MIT Ph.D. thesis — the now legendary program known as "Sketchpad."

People like Alan Kay still get excited when they talk about Sketchpad: "Sketchpad had wonderful aspects, besides the fact that it was the first real computer graphics program. It was not just a tool to draw things. It was a program that obeyed laws that you wanted to be held true. So to draw a square in Sketchpad, you drew a line with a lightpen and said: 'Copy-copy-copy, attach-attach-attach. That angle is 90 degrees, and these four things are to be equal.' Sketchpad would go zap! and you'd have a square."
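
The constraint-maintenance idea Kay describes can be suggested in present-day terms with a toy sketch. What follows is only an illustration of the "relaxation" approach, written in Python with invented names; it is not Sutherland's code, and the real Sketchpad used far more sophisticated numerical machinery. You declare relations that must hold, and the system nudges the drawing until they do:

import math

class EqualLength:
    """Constrain segment (a, b) and segment (c, d) to have equal length."""
    def __init__(self, a, b, c, d):
        self.a, self.b, self.c, self.d = a, b, c, d

    def nudge(self, pts, step):
        first = math.dist(pts[self.a], pts[self.b])
        second = math.dist(pts[self.c], pts[self.d]) or 1.0
        err = first - second
        (x1, y1), (x2, y2) = pts[self.c], pts[self.d]
        ux, uy = (x2 - x1) / second, (y2 - y1) / second
        # Lengthen or shorten the second segment to shrink the error.
        pts[self.d] = (x2 + ux * err * step, y2 + uy * err * step)

def solve(points, constraints, iterations=2000, step=0.1):
    # Repeatedly nudge the drawing toward a state where every stated
    # relation holds; Sketchpad called this process "relaxation."
    for _ in range(iterations):
        for c in constraints:
            c.nudge(points, step)
    return points

# A rough quadrilateral, plus the declaration that all four sides are equal:
# the drawing relaxes toward a rhombus, the way Kay's rough square snapped true.
points = {"A": (0, 0), "B": (10, 1), "C": (11, 12), "D": (-2, 9)}
constraints = [EqualLength("A", "B", "B", "C"),
               EqualLength("A", "B", "C", "D"),
               EqualLength("A", "B", "D", "A")]
for name, (x, y) in solve(points, constraints).items():
    print(name, round(x, 2), round(y, 2))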

Another computer prophet who saw the implications of Sketchpad and other heretofore esoteric wonders of personal computing was an irreverent, unorthodox, counterculture fellow by the name of Ted Nelson, who has long been in the habit of self-publishing quirky, cranky, amazingly accurate commentaries on the future of computing. In The Home Computer Revolution, Nelson had this to say about Sutherland's pioneering program, in a chapter entitled "The Most Important Computer Program Ever Written":

You could draw a picture on the screen with the lightpen — and then file the picture away in the computer's memory. You could, indeed, save numerous pictures in this way.

For example, you could make a picture of a rabbit and a picture of a rocket, and then put little rabbits all over a large rocket. Or, little rockets all over a large rabbit.

The screen on which the picture appeared did not necessarily show all the details; the important thing was that the details were in the computer; when you magnified a picture sufficiently, they would come into view.

You could magnify and shrink a picture to a spectacular degree. You could fill a rocket picture with rabbit pictures, then shrink that until all that was visible was a tiny rocket; then you could make copies of that, and dot them all over a large copy of the rabbit picture. So that when you expanded the big rabbit till only a small part showed (so it would be the size of a house, if the screen were large enough), then the foot-long rockets on the screen would each have rabbits the size of a dime.

Finally, if you changed the master picture — say, by putting a third ear on the big rabbit — all the copies would change correspondingly.

Thus Sketchpad let you try things out before deciding. Instead of making you position a line in one specific way, it was set up to allow you to try a number of different positions and arrangements, with the ease of moving cut-outs around on a table.

It allowed room for human vagueness and judgment. Instead of forcing the user to divide things into sharp categories, or requiring the data to be precise from the beginning — all those stiff restrictions people say "the computer requires" — it let you slide things around to your heart's content. You could rearrange till you got what you wanted, no matter for what reason you wanted it.

There had been lightpens and graphical computer screens before, used in the military, but Sketchpad was historic in its simplicity — a simplicity, it must be added, that had been deliberately crafted by a cunning intellect — and its lack of involvement with any particular field. Indeed, it lacked any complications normally tangled with what people actually do. It was, in short, an innocent program, showing how easy human work could be if a computer were set up to be really helpful.

As described here, this may not seem very useful, and that has been part of the problem. Sketchpad was a very imaginative, novel program, in which Sutherland invented a lot of new techniques; and it takes imaginative people to see its meaning.

Admittedly the rabbits and rockets are a frivolous example, suited only to a science-fiction convention at Easter. But many other applications are obvious: this would do so much for blueprints, or electronic diagrams, or other areas where large and precise drafting is needed. Not that drawings of rabbits, or even drawings of transistors, mean the millennium; but that a new way of working and seeing was possible.

The techniques of the computer screen are general and applicable to everything — but only if you can adapt your mind to thinking in terms of computer screens.

Sutherland was twenty-six when he succeeded Licklider as director of ARPA's Information Processing Techniques Office. Bob Taylor succeeded him in turn when Sutherland left for Harvard in the mid-1960s to work on 3-D head-mounted displays (like miniature televisions in eyeglass frames) and other exotic graphics systems. When David Evans tried to lure him to Utah, Sutherland said he would do it if Evans agreed to become a business partner — and thus the pioneering computerized flight simulation and image generation company of Evans & Sutherland was born.

Kay showed up at Utah in November of 1966. His first task was to read a pile of manuscript Evans gave him — Ivan Sutherland's thesis. The way Evans ran the graduate program, you weren't supposed to be around campus very long or very much. You were supposed to be a professional and move on to high-level consulting jobs in industry. The job he found for Alan Kay was with a hardware genius named Ed Cheadle. Ed had an idea about doing a tabletop computer. Kay worked on FLEX — his first personal computer software design — from 1967 to 1969. While some of the founders of today's personal computer industry were still in high school, Kay was learning how to design personal computers.

Technically, Cheadle and Kay were not the first to attempt to build a personal computer. Wes Clark, from Whirlwind, Lincoln Lab's TX-2, and the ARPAnet's IMPs, had constructed a desk-size machine a few years before, known as "LINC." FLEX was an attempt to use the more advanced electronic components that had recently become available to bring more of the computer's power out where the individual user could interact with it. FLEX was a significant innovation technically, but it was complicated and delicate, and in Kay's words, "users found it repellent to learn." The problem wasn't in the machinery as much as it was in the special language the user had to master in order to command the power of the machine to accomplish useful tasks. That was when Kay first vowed to make sure his personal computer would come at least part of the way toward the person who was to use it, and when he realized that software design would be the area in which this desire could be fulfilled.

Although he didn't fully realize it yet, Alan Kay was beginning to think about designing a new programming language. The kind of language he began to yearn for would be a tool for using the computer as a kind of universal simulator. The problem was that programming languages were demonically esoteric. "There are two ways to think about building an instrument," Kay asserts. "You can build something like a violin that only a few talented artists can play. Or you can make something like a pencil that can be used quickly and easily for anything from learning the alphabet to drawing to writing a computer program." He was convinced that 99 percent of the problems to be solved in making a truly usable personal computer were software problems: "By 1966, everyone knew where the silicon was going."

Besides FLEX, Kay's other project at Utah was to make some software work. He got a pile of tape canisters on his desk, along with a note that the tapes were supposed to contain a scientific programming language known as Algol 60, but they didn't work. It was a maddening software puzzle that was still far from solved when Kay figured out that it wasn't Algol 60 but a language from Norway, of all places, called Simula. In a 1984 interview, Kay described what happened when he finally printed out on paper the program listings stored in those mysterious canisters and figured out what was on those tapes:

We couldn't understand any of the papers, they were sort of transliterated from the Norwegian. . . . We spread out the program listings and actually went through the machine code to try to figure out what was happening — and I suddenly realized that Simula was a programming language to do what Sketchpad did. I had never really understood what Sketchpad was. I get shivers now thinking of it. It rotated my point of view through a different dimension and nothing has been the same since. I suddenly understood the purpose of higher level languages.

Alan was one of the enthralled audience at Engelbart's 1968 media show. He was excited by it because it demonstrated what you could really do with a computer-augmented representation system. It also made it clear to Alan what he didn't want to do. "The Engelbart crew were all ace pilots of their NLS system," Kay remembers. "They had almost instant response — like a very good video game. You could pilot your way through immense fields of information. It was, unfortunately for my purposes, something elegant and elaborate that these experts had learned how to play. It was too complex for my tastes, and I wasn't interested in the whole notion of literacy as a kind of fluency."

Logo
In the course of preparing his Ph.D. thesis, Alan began to explore the world of artificial intelligence research, which brought him into closer contact with two more computer scientists who were to heavily influence his own research — Marvin Minsky and Seymour Papert, who were then codirectors of MIT's pioneering artificial intelligence research project. In the late 1960s, Papert in particular was doing something that irrevocably influenced Alan's goals. Papert was creating a new computer language. For children.

Papert, a mathematician and one of the early heroes of the myth-shrouded Project Mac, had spent five years in Switzerland, working with the developmental psychologist Jean Piaget. Piaget had triggered his own revolution in learning theory by spending time — years and decades — watching children learn. He concluded that learning is not simply something adults impose upon their offspring through teachers and classrooms, but is a deep part of the way children are innately equipped to react to the world, and that children construct their notions of how the world works, from the material available to them, in definite stages.

Piaget was especially interested in how different kinds of knowledge are acquired by children, and concluded that children are scientists — they perform experiments, formulate theories, and test their theories with more experiments. To the rest of us, this process is known as "playing," but to children it is a vital form of research.

Papert recognized that the responsiveness and representational capacity of computers might allow children to conduct their research on a scale never possible in a sandbox or on a blackboard. LOGO, the computer language developed by Papert, his colleague Wallace Feurzeig, and others at MIT and at the consulting firm of Bolt, Beranek & Newman, was created for a purpose that was shockingly different from the purposes that had motivated the creation of previous computer languages. FORTRAN made it easier for scientists to program computers. COBOL made it easier for accountants to program computers. LISP, some might say, made it easier for computers to program computers. LOGO, however, was an effort to make it easier for children to program computers.

Although its creators knew that the LOGO experiment could have profound implications in artificial intelligence and computer science as well as in education, the project was primarily intended to create a tool for teaching thinking and problem-solving skills to children. The intention was to empower rather than to suppress children's natural desire to solve problems in ways they find fun and rewarding. "The object is not for the computer to program the student, but for the student to program the computer," was the way the LOGO group put it.

Beginning in 1968, children between the ages of eight and twelve were introduced to programming through the use of attractive graphics and a new approach that put the power to learn in the hands of the people who were doing the learning. By learning how to use LOGO to have fun with computers, students were automatically practicing skills that would generalize to other parts of their lives.

Papert had observed from both his computer science and developmental psychology experience that certain of these skills are "powerful ideas" that can be used at any age, in any subject area, because they have to do with knowing how to learn. This is the key element that separated LOGO from the "computer assisted instruction" projects that had preceded it. Instead of treating education as a task of transferring knowledge from the teacher to the student, the LOGO approach was to help students strengthen their ability to discover knowledge on their own.

One of the most important of these skills, for example, is the idea of "bugs" — the word that programmers use to describe the small mistakes that inevitably crop up in computer programs, and which must be tracked down before the program will work. Instead of launching students on an ego-bruising search for the "right" answer, the task of learning LOGO was meant to encourage children to solve problems by daring to try new procedures, then debugging the procedures until they work.

The first revolutionary learning instrument introduced in LOGO was the "turtle," a device that is part machine and part metaphor. The original LOGO turtle was a small robot, controlled by the computer and programmed by the child, that could be instructed to move around, pulling a pen as it moved, drawing intriguing patterns on paper in the process. Alan Kay was one of several software designers who realized that this process was more than just practice at drawing pictures, for the ability to manipulate symbols — whether the symbols are turtle drawings, words, or mathematical equations — is central to every medium used to augment human thinking.

The abstract turtle of today's more advanced display technology is a triangular graphic figure that leaves a video trail behind it on a display screen. Whether it is made of metal and draws on paper, or made of electrons and draws on a video screen, the turtle is what educational psychologists call a transitional object — and what Papert calls an "object-to-think-with."

Instead of "programming the computer" to draw a pattern, children are encouraged to "teach the turtle" how to draw it. They start by "pretending to be the turtle" and trying to guess what the turtle would do in order to trace a square, a triangle, a circle, or a spiral. Then they give the turtle a series of English-like commands, typed in through a keyboard, to "teach the turtle a new word."

If the procedure followed by the turtle in response to the typed commands doesn't achieve the desired graphic effect, the next step is to systematically track down the "bug" that is preventing success. The fear of being wrong is replaced in this process by the immediate feedback of discovering powerful ideas on one's own.
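
The flavor of that lesson is easy to convey today, because Python's standard turtle module is a direct descendant of Papert's design. The sketch below (the function names and parameter values are illustrative, not drawn from the LOGO curriculum) "teaches the turtle new words" for a square and a spiral, and invites exactly the kind of bug-hunting described above:

import turtle

t = turtle.Turtle()

def square(size):
    # A "new word": four equal sides, four 90-degree turns.
    for _ in range(4):
        t.forward(size)
        t.right(90)

def spiral(size, turn, steps):
    # A slightly wrong "turn" value produces a surprising shape;
    # tracking down why is the kind of debugging Papert had in mind.
    for _ in range(steps):
        t.forward(size)
        t.right(turn)
        size += 2

square(80)
spiral(5, 89, 60)   # try turn=90 or 91 and watch the "bug" change shape
turtle.done()       # keep the drawing window open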

After more than a decade of research, Papert summarized the results of his LOGO work for a general audience in Mindstorms: Children, Computers, and Powerful Ideas. In this manifesto of what has grown into an international movement in both the educational and computing communities, Papert reiterated something important that is easy to lose in the complexities of the underlying technology — that the purpose of any tool ought to be to help human beings become more human:

In my vision the computer acts as a transitional object to mediate relationships that are ultimately between person and person. . . .

I am talking about a revolution in ideas that is no more reducible to technologies than physics and molecular biology are reducible to the technological tools used in laboratories or poetry to the printing press. In my vision, technology has two roles. One is heuristic: The computer presence has catalyzed the emergence of ideas. The other is instrumental: The computer will carry ideas into a world larger than the research centers where they have incubated up to now.

When he came across the LOGO work, during the time he was meditating about the fact that he had put two years into the FLEX machine only to find that it wasn't amenable to humans who tried to use it, Alan Kay recalls that "it was like a light going on in my head. I knew I would never design another program that was not set up for children."

One of the first things he understood was that a program or a programming language that can be learned by children doesn't have to be a "toy." The toy can also serve as a tool. But that transformation doesn't happen naturally — it comes about through a great deal of work by the person who designs the language. Kay already knew that the most important tools for creating personal computing were to be found in the software, but now it dawned on him that the power those tools would amplify would be the power to learn — whether the user is a child, a computer systems designer, or an artificial intelligence program.

Although he knew he had a monstrous software task ahead of him if he was to create a means by which even children could use computers as a simulation tool, his FLEX experience and his exposure to LOGO convinced Kay that there was far more to it than just building an easy-to-operate computer and creating a new kind of computer language. It was something akin to the problem of building a tool that a child could use to build a sandcastle, but would be equally useful to architects who wanted to erect skyscrapers. What he had in mind was an altogether new kind of artifact: If he ended up with something an eight-year-old could carry in one hand and use to communicate music, words, pictures, and to consult museums and libraries, would the device be perceived as a tool or as a toy?

Kay began to understand that what he wanted to create was an entirely new medium — a medium that would be fundamentally different from all the previous static media of history. This was going to be the first dynamic medium — a means of representing, communicating, and animating thoughts, dreams, and fantasies as well as words, images, and sounds. He recognized the power of Engelbart's system as a toolkit for knowledge workers like editors and architects, scientists, stockbrokers, attorneys, designers, engineers, and legislators. Information experts desperately needed tools like NLS. But Kay was after a more universal, perhaps more profound power.

One of the concepts that played a big part in Papert's LOGO project, and thus influenced Alan Kay and others, was derived from the thinking of John Dewey, whose work encouraged generations of progressive educators. Dewey developed a theory that Piaget later elaborated — that the imaginative play often mistakenly judged by adults to be "aimless" is actually a potent tool for learning about the world. Kay wanted to link the natural desire to explore fantasies with the innate ability to learn from experimentation, and he knew that the computer's power to simulate anything that could be clearly described was one key to making that connection.

Alan wanted to create a medium that was a fantasy amplifier as well as an intellectual augmentor. First he had to devise a language more suited for his purposes than LOGO, a "new kind of programming system that would attempt to combine simplicity and ease of access with a qualitative improvement in expert-level adult programming." With the right kind of programming language, used in conjunction with the high-powered computer hardware he foresaw for the near future, Kay thought that an entirely new kind of computer — a personal computer — might be possible.

A software advance of the kind Kay envisioned could only be accomplished with hardware that didn't yet exist in 1969, since the computing power required for each individual unit would have to be several hundred times that of the most sophisticated time-sharing computers of the 1960s. But at the end of the 1960s, such previously undreamed-of computing power seemed to be possible, if not imminent. The year 1969 was pivotal in the evolution of personal computing, as well as in Alan Kay's career. It was the year that the ARPAnet time-sharing communities began to discover that they were all plugged into a new kind of social-informational entity, and enthusiastically began to use their new medium to design the next generations of hardware and software.

After he finished his thesis on FLEX, Kay began to pursue his goal of designing a new computer language in one of the few places that had had the hardware, the software, and the critical mass of brain power to support his future plans — the Stanford Artificial Intelligence Laboratory. He had a lot to think about. There were many great programmers, but very few great creators of programming languages.

The programming language for the eventual successor to FLEX was his primary interest, not only because he knew that the hardware would be catching up to him, but because he knew that programming languages influence the minds of the people who use computers. In 1977, after the task of creating his new programming language, Smalltalk, was accomplished, Kay described the importance of this connection between a programming language and the thinking of the person who uses it:

The particular structure of a symbolic language is important because it provides a context in which some concepts are easier to think about than others. For example, mathematical notation first arose to abbreviate concepts that could be expressed only as ungainly circumlocutions in natural language. Gradually it was realized that the form of an expression and its manipulation could be of great help in the conception and manipulation of the meaning for which the expression stood. . . .

The computer created new needs for language by inverting the traditional process of scientific investigation. It made new universes available that could be shaped by theories to produce simulated phenomena.

The "inverting" of "the traditional process of scientific investigation" noted by Kay was the source of the computer's power of simulation. And the ability to simulate ideas in visible form was exactly what a new programming language needed to include in order to use a computer as an imagination amplifier. If Piaget was correct and children are both scientists and epistemologists, a tool for simulating scientific investigation could have great impact on how much and how fast young children and adult computer programmers are able to learn.

According to the rules of scientific induction, first set down by Francis Bacon three hundred years ago, scientific knowledge and the power granted by that knowledge are created by first observing nature, noting patterns and relationships that emerge from those direct observations, then creating a theory to explain the observations. With the creation of a machine that "obeyed laws you wanted to be held true," it became possible to specify the laws governing a world that doesn't exist, then observe the representation created by the computer on the basis of those laws.

Papert called these simulated universes "microworlds," and used LOGO-created microworlds to teach logic, geometry, calculus, and problem-solving to ten-year-olds. Part of the fascination of a good video game lies in the visual impact of its microworld representation and the amount of power given to the player to react to it and thus learn how to control it. In Smalltalk, every object was meant to be a microworld.

Computer scientists talk about computational metaphors in computer languages — alternative frameworks for thinking about what programming really does. The most widespread and oldest metaphor is that of a recipe, the kind of recipe you create for a very stupid but obedient servant — a list of definite, step-by-step instructions that could provide a desired result when carried out by a mindless instruction-following mechanism. The sequence of instructions is an accurate but limiting metaphor for how a computer operates. It is a reflection of the fact that early computers were built to do just one thing at a time, but to do it very fast and get on to the next instruction.

This model, however, is not well suited to computers of the future, which will perform many processes at the same time (in the kind of computation that is called parallel processing). Languages based on the dominant metaphors of numerical, serial procedures are much better suited for linear processes like arithmetic and less well suited for exactly those tasks that computers need to perform if they are to serve as representational media. Parallel processing is also a better model of the way human brains handle information.

Starting from concepts set forth in LOGO and in Simula, Kay began to devise a new metaphor in which the string of one-at-a-time instructions is replaced by a multidimensional environment occupied by objects that communicate by sending one another messages. In effect, he started out to build a computer language that would enable the programmer to look at the host computer not as a serial instruction follower, but as thousands of independent computers, each one able to command the power of the whole machine.
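
The difference between the two metaphors can be loosely sketched in present-day code. Smalltalk itself is the genuine article; the fragment below is only a Python rendering of the object-and-message idea, with invented names, in which each object holds its own state and decides for itself what a message means:

import math

class Turtle:
    # Each object is, in effect, a little computer: it keeps its own
    # state and responds to messages; callers never reach inside it.
    def __init__(self):
        self.x, self.y, self.heading = 0.0, 0.0, 0.0

    def receive(self, message, *args):
        # Dispatch by message name. The object, not the sender, decides
        # what a message means -- the heart of the object/message metaphor.
        handler = getattr(self, "msg_" + message, None)
        if handler is None:
            return f"Turtle does not understand '{message}'"
        return handler(*args)

    def msg_forward(self, distance):
        self.x += distance * math.cos(math.radians(self.heading))
        self.y += distance * math.sin(math.radians(self.heading))

    def msg_right(self, degrees):
        self.heading = (self.heading - degrees) % 360   # clockwise turn

t = Turtle()
for message, arg in [("forward", 100), ("right", 90), ("forward", 50)]:
    t.receive(message, arg)
print(round(t.x), round(t.y))   # where the messages moved the turtle: 100 -50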

In 1969 and 1970, the growing impact of the Vietnam war and the pressure by congressional critics of what they interpreted as "frivolous research" contributed to the death of the "ARPA spirit" that had led to the creation of time-sharing and computer networks. The "Mansfield Amendment" in 1970 required ARPA to fund only projects with immediately obvious defense applications. Taylor was gone. The AI laboratories and the computer systems designers found funding from other agencies, but the central community that had grown up in the sixties began to fragment.

The momentum of the interactive approach to computing had built up such intensity in its small following by the late 1960s that everybody knew this fragmentation could only be a temporary situation. But nobody was sure where, or how, the regrouping would take place. Around 1971, Alan began to notice that the very best minds among his old friends from ARPA projects were showing up at a new institution a little more than a mile away from his office at the Stanford AI laboratory.

By the beginning of 1971, Alan Kay was a Xerox consultant, soon to become a full-time member of the founding team at the Palo Alto Research Center. By this time, the hardware revolution had achieved another level of miniaturization, with the advent of integrated circuitry and the invention of the microprocessor. Xerox had the facilities to design and produce small quantities of state-of-the-art microelectronic hardware, which allowed the computer designers unheard-of power to get their designs up and running quickly. It was precisely the kind of environment in which a true personal computer might move from dream to design stage. Alan Kay was already thinking about a special kind of very powerful and portable personal computer that he later came to call "the Dynabook."

Everybody, from the programmers in the "software factory" who designed the software operating system and programming tools, to the hardware engineers of the Alto prototype machines, to the Ethernet local-area-network team who worked to link the units, was motivated by the burning desire to get a working personal computer in their own hands as soon as possible. In 1971, Alan wrote and thought about something that wasn't yet called a Dynabook but looked very much like it. Kay's Learning Research Group, including Adele Goldberg, Dan Ingalls, and others, began to create Smalltalk, the programming "environment" that would breathe computational life into the hardware, once the hardware wizards downstairs cooked up a small network of prototype personal computers.

One of the most important features of the anticipated hardware was the visual resolution of the display screen. One of the things Alan had noticed when watching children learn LOGO was that kids are very demanding computer users, especially in terms of having a high-resolution, colorful, dynamic display. They were accustomed to cartoons on television and 70-mm wide-screen movies, not the fuzzy images then to be found on computer displays. Kay and his colleagues knew that hardware breakthroughs of the near future would make it possible to combine the interactive properties of a graphical language like Sketchpad with very high-resolution images.

The amount of image resolution possible on a video display screen depends on how many picture elements are represented on the screen. Kay felt that the threshold number of picture elements needed to most strongly attract and hold the attention of a large population of computer users, and give the users significant power to control the computer, would be around one million dots. (The resolution of a standard snapshot is equivalent to about four million dots.) The Alto computer being constructed for PARC researchers — which the Learning Research Group called "an interim Dynabook" — would have around half a million dots.

The technique by which the Alto would achieve its high-resolution screen was called "bit-mapping," a term that meant that each picture element, each dot of light on the display screen, was connected to one bit of information in a specific place in the computer's memory, thus creating a kind of two-way informational map of the screen. If, for example, a specific bit in the computer's "memory map" was turned off, there would not be a dot of light at the location on the screen. Conversely, an "on" bit at a coordinate in the memory map would produce a dot of light at the designated screen location. By turning on and off parts of the bit map through software commands, recognizable graphic images can be created (and changed) on the screen.
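
A toy version of the scheme, in present-day Python with an ASCII "screen" standing in for the CRT, makes the two-way map concrete; the dimensions and names here are illustrative, not the Alto's:

WIDTH, HEIGHT = 16, 8
bitmap = [[0] * WIDTH for _ in range(HEIGHT)]   # the "memory map"

def set_bit(x, y, on=1):
    # Turning a bit on or off in memory is the same act as drawing
    # or erasing the dot at that screen coordinate.
    bitmap[y][x] = on

def refresh():
    # The display simply renders whatever the memory says.
    for row in bitmap:
        print("".join("#" if bit else "." for bit in row))

# "Draw" a box by writing into memory, then let the screen show it.
for x in range(4, 12):
    set_bit(x, 2)
    set_bit(x, 5)
for y in range(2, 6):
    set_bit(4, y)
    set_bit(11, y)
refresh()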

Bit-mapping was a major step toward creating a computer that an individual could use comfortably, whether the user is an expert programmer or a beginner. The importance of a visual display that is connected directly to the computer's memory is related to the human talent for recognizing very subtle visual patterns in large fields of information — undoubtedly a survival trait that evolved way back when our ancestors climbed trees and prowled savannas.

Human information processors have a very small short-term memory, however, which means that all computers and no humans can extract the square roots of thousand-digit numbers in less than a second, while no computers and all humans can recognize a familiar face in a crowd. By connecting part of the computer's internal processes to a visible symbolic representation, bit-mapping puts the most sophisticated part of the human information processor in closer contact with the most sophisticated part of the mechanical information processor.

Bit-mapping created more than a passive window on the computer's internal processes. Just as the computer could tell the human who used it certain facts about whatever it had in its memory, the user was also given the power to change the computer by manipulating the display. If users change the form of the visual representations on bit-mapped screens, using keyboard commands, lightpens (à la Sketchpad), or pointing devices like mice (à la Engelbart), they can also change the computer's memory. The screen is a representation, but it is also a control panel — a drawing on a bit-mapped screen can be nothing more than a drawing, but it can also be a kind of command, even a program, to control the computer's operations.

If, for example, you were to use a mouse to move a video pointer on the screen to touch a visual representation of a file folder or an out basket, you could call the folder from the computer's memory and display a document from it on your screen simply by pointing to it, or send the contents of the computer-stored out basket to somebody else's in basket; you could thus accomplish the kind of work done in offices even if you knew nothing about computer programming. Which, after all, was the potential future market that motivated Xerox management to create PARC and cut their whiz kids loose in the first place.

Creating new kinds of computer input and output devices to help human pattern recognition mesh with mechanical symbol manipulation is known as "designing the human interface," an art and science that had to be created in the 1970s if the kind of human-computer partnership envisioned by Licklider and Engelbart in the 1960s was to begin to happen by the 1980s. Alan Kay's Smalltalk project played a key role in the evolution of the Alto interface, and as such was integral to the eventual company goals in the office automation market. But even at the beginning, Kay started bringing children into the project.

Part of the Smalltalk project's effect on the early days at PARC was inspirational. It wasn't long before the rest of the team understood Alan's desire to bring children into the process of designing the same instrument that he and all the other computer scientists wanted to use themselves. Another aspect of Kay's contribution was more concrete: the absolute conviction that they were designing something meant for people to use. That might not sound too revolutionary today, but even as late as 1971, most of the top-flight computer scientists who believed that this tool was going to be more than just a gadget for computer programmers were at PARC.

PARC in the early 1970s was a collection of the world's best computer scientists, hardware engineers, physicists, programmers . . . which meant that it was also a collection of people with strong personalities and definite opinions. Bob Taylor, Alan Kay, Butler Lampson, Bob Metcalfe, and their colleagues each had his own unique approach to creating personal computing, but they agreed on one fundamental assumption — that their ultimate product should be as generally useful as a hammer, or pulley, or book. Secretaries and business executives would one day be able to use the same tool to help them perform their work. Architects and designers would have the power of modeling, forecasting, and simulation at their fingertips. A true personal computer, the diverse PARC groups agreed, ought to be usable by legislators and librarians, teachers and children. And a computer that could be commanded by looking at images on a screen and pointing to them by means of a mouse was certainly a lot more widely usable than a computer that required arcane keyboard-entered commands in order to function.

The first Alto personal computer prototypes were distributed to PARC researchers in 1974. As they had predicted, the creation of an environment in which every researcher had, for the first time in history, personal access to a powerful computer, and the means to communicate with all of his or her colleagues' computers, had a profound effect on their ability to do their job of designing even more powerful computer systems.

By the late 1970s, yet another generation of even more advanced hardware and software had been created by a network of nearly a thousand researchers at PARC equipped with Altos, communicating via Ethernet networks. But the outside world, and many people in the computer world, were still unaware of the potential of personal computers. The problem, as PARC alumnus Charles Simonyi was to point out in 1983, an eventful decade later, was that in 1973 PARC was more than ten years ahead of an industry that wouldn't even exist until 1975; Xerox management could hardly be faulted for not realizing it.

Another small cloud on the horizon in the mid-1970s — the "home-brew" computer hobbyists who were building their own low-power microcomputers — became a gathering storm of popular interest in personal computing by the end of the 1970s. The microcomputer hobbyists, who assembled the new microprocessor chips into operational computers, were for the most part unaware of the far more powerful devices that were in use in Palo Alto years before a tiny company in New Mexico, the now-legendary MITS, produced the first affordable, do-it-yourself computer — the Altair.

In March, 1977, Alan Kay and Adele Goldberg condensed a PARC technical report into an article, the title of which described both the dream and the reality of the Smalltalk branch of the PARC project: "Personal Dynamic Media" was published in a magazine named Computer, during a time when computer magazines were for specialists. Like Bush, Licklider, Taylor and Engelbart before them, Kay and Goldberg did not talk of circuits or programs, but of media, knowledge, and creative human thought:

For most of recorded history, the interactions of humans with their media have been primarily nonconversational in the sense that marks on paper, paint on walls, even "motion" pictures and television do not change in response to the viewer's wishes. A mathematical formulation — which may symbolize the essence of an entire universe — once put down on paper, remains static and requires the reader to expand on its possibilities.

Every message is, in one sense or another, a simulation of some idea. It may be representational or abstract. The essence of a medium is very much dependent on the way messages are embedded, changed, and viewed. Although digital computers were originally designed to do arithmetic computation, the ability to simulate the details of any descriptive model means that the computer, viewed as a medium in itself, can be all other media if the embedding and viewing methods are sufficiently well provided. Moreover, this new "metamedium" is active — it can respond to queries and experiments — so that the messages may involve the learner in a two-way conversation. This property has never been available before except through the medium of an individual teacher. We think the implications are vast and compelling.

A dynamic medium for creative thought: the Dynabook. Imagine having your own self-contained knowledge manipulator in a portable package the size and shape of an ordinary notebook. Suppose it had enough power to outrace your senses of sight and hearing, enough capacity to store for later retrieval thousands of page-equivalents of reference materials, poems, letters, recipes, records, drawings, animations, musical scores, waveforms, dynamic simulations, and anything else you would like to remember and change.

The Learning Research Group introduced students from the nearby Jordan Middle School in Palo Alto to what they called "interim Dynabooks." Nearly a decade before keyboards and display screens became familiar appliances, these children were introduced to a device no child and only a few computer scientists had seen before — an Alto computer set up to run Smalltalk. By using the mouse and the graphics capabilities provided by the hardware and software, these students were able to use Smalltalk to command the computer in much the same way that Papert's students in Cambridge, years before, had learned to program in LOGO by "teaching the turtle new words."

The screen was either a "very crisp high-resolution black-and-white CRT or a lower resolution high quality color display." High-fidelity speakers and sound synthesizers, five-key keyboards like Engelbart's, and piano-like keyboards were also available. The system could store the equivalent of 1500 pages of text and graphics, and the processor was capable of creating, editing, storing, and retrieving documents that consisted of words, graphic images, sounds, numbers, or combinations of all four symbol forms.

The mouse could be used to draw as well as to point, and an "iconic editor" (another Smalltalk innovation) used symbols that children who were too young to read could use to edit graphics; e.g., instead of typing in a command to invoke a graphics cursor, a child could point to a paintbrush icon.

The interim Dynabook could be used to read or write an old-fashioned book, complete with illustrations, but it could also do much more: "It need not be treated as a simulated paper book since this is a new medium with new properties. A dynamic search may be made for a particular context. The non-sequential nature of the file medium and the use of dynamic manipulation allows a story to have many accessible points of view; Durrell's Alexandria Quartet, for instance, could be one book in which the reader may pursue many paths through the narrative," wrote Kay and Goldberg.

The dynamic nature of the medium was made clear to the users as they became acquainted with the toolkit for drawing, editing, viewing, and communicating. Smalltalk was not just a language, and the Alto system was not just a one-person computer. Together, the hardware, the software, and the tools for the users to learn the software, constituted an environment — a small symbolic spaceship that the first-time user learned to control and steer through a personal universe.

The ability of the users to personalize their representation and use of information became clear as the children from Jordan Middle School experimented with changing typefonts for displaying letterforms, and with changing the bit-maps of the computer to create and animate cartoon images in mosaics, lines, and halftones. The users not only had the capability to create and edit in a new way, but once they learned how to use the medium they gained the ability to make their own choices about how to view the universe of information at their fingertips.

The editing capabilities of the Dynabook made it possible to display and change every object in the Smalltalk microworld. Text and graphics could be manipulated by pointing at icons and lists of choices — "menus" in software jargon — and multiple "windows" on the display screen made it possible to view a document or group of documents in several different ways at the same time. The filing capabilities made it possible to store and retrieve dynamic documents that could consist of any collection of objects that could be displayed and have something to do with each other. Drawing tools and painting programs made it possible to input information freehand as well as through the keyboard.

The structure of the Smalltalk language, the tools used by the first-time user to learn how to get around in the Dynabook, and the visual or auditory displays were deliberately designed to be mutable and movable in the same way: "Animation, music, and programming," wrote Kay and Goldberg, "can be thought of as different sensory views of dynamic processes. The structural similarities among them are apparent in Smalltalk, which provides a common framework for expressing those ideas." A "musical score capture system" called OPUS and a graphic animation tool called SHAZAM were part of the Smalltalk-Dynabook toolkit.

In 1977, Scientific American's annual theme edition was dedicated to the subject of "Microelectronics." Alan Kay's contribution to the issue, "Microelectronics and the Personal Computer," was the only article that directly talked about the meaning of this new technology for people. The magazine's editors summed up the piece in a two-sentence subtitle: "Rates of progress in microelectronics suggest that in about a decade many people will possess a notebook-sized computer with the capacity of a large computer of today. What might such a system do for them?"

One of the first things Kay pointed out was the connection between the use of interactive graphic tools and the exercise of a new cognitive skill — a skill at selecting new ways to view the world. The metamedium that Kay still saw as a decade in the future would only achieve its full power when people used it enough to see what it was about. The power that the 1977 prototypes granted to the human who used such devices was the power to create many new points of view.

This freedom to change one's view of a microworld, Kay believed, was one of the most important powers of the new kinds of representational tools that were being designed and tested in the late 1970s. In describing the way children learned to use the Smalltalk system, Kay also described something of the nature of the experience:

Initially the children interact with our computer by "painting" pictures and drawing straight lines on the display screen with the pencillike pointer. The children then discover that programs can create structures more complex than any they can create by hand. They learn that a picture has several representations, of which only the most obvious — the image — appears on the screen. The most important representation is the symbolic model of the picture stored in the memory of the computer. . . .

One of the best ways to teach nonexperts to communicate with computers is to have them explore the levels of abstraction at which images can be manipulated.

Kay noted that when he gave the same tool that the children used as both an amusement and an entrance into Smalltalk programming to an adult artist, the artist started out creating various designs similar to those he was accustomed to making on paper. Eventually the artist discovered that the properties of the new medium, and his increasing facility for commanding those properties, made it possible for him to explore graphic universes that he could never have created with his old media: "From the use of the computer for the impoverished simulation of an already existing medium," Kay wrote, "he had progressed to the discovery of the computer's unique properties for human expression."

This freedom of viewpoint was only meant to be explored and demonstrated in a preliminary way in Smalltalk: It was Kay's hope that many new metaphors and languages would evolve as time went on, into what he called "observer languages":

In an observer language, activities are replaced by "viewpoints" that become attached to one another to form concepts. For example, a dog can be viewed abstractly (as an animal), analytically (as being composed of organs, cells, and molecules), pragmatically (as a vehicle by a child), allegorically (as a human being in a fairy tale) and contextually (as a bone's way to fertilize a lawn). Observer languages are just now being formulated. They and their successors will be the communication vehicles of the 1980s.
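
No observer language was ever built in quite the form Kay predicted, but the gist of the idea can be loosely suggested in present-day code. The sketch below is an invention for illustration, not Kay's design: the same object answers the same question differently depending on the viewpoint attached to it.

class Dog:
    # The object itself is neutral; meaning comes from the viewpoint.
    name = "Rover"

class Abstractly:
    def describe(self, dog):
        return f"{dog.name} is an animal"

class Analytically:
    def describe(self, dog):
        return f"{dog.name} is organs, cells, and molecules"

class Allegorically:
    def describe(self, dog):
        return f"{dog.name} is a human being in a fairy tale"

class Contextually:
    def describe(self, dog):
        return f"{dog.name} is a bone's way to fertilize a lawn"

dog = Dog()
for viewpoint in (Abstractly(), Analytically(), Allegorically(), Contextually()):
    print(viewpoint.describe(dog))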

Kay set forth his theories about personal computers as the components of a new medium for human expression, and compared the recent and future emergence of personal computers with the slower development cycles of past media. He also predicted that the changes in the human social order that were likely to accompany a new computerized literacy would be much more sweeping than the effects of previous media revolutions. The creation of a literate population would be the first reason for such a change. Out of that literate population, perhaps a few creative individuals would show the rest of us what could be achieved. He declined to predict the specific shape of these social changes, noting the failure of previous attempts at such forecasting:

We may expect that the changes resulting from computer literacy will be as far reaching as those that came from literacy in reading and writing, but for most people the changes will be subtle and not necessarily in the direction of their idealized expectations. For example, we should not predict or expect that the personal computer will foster a new revolution in education just because it could. Every new communication medium in this century — the telephone, the motion picture, radio and television — has elicited similar predictions that did not come to pass. Millions of uneducated people in the world have ready access to the accumulated culture of the centuries in public libraries, but they do not avail themselves of it. Once an individual or a society decides that education is essential, however, the book, and now the personal computer, can become among the society's main vehicles for the transmission of knowledge.

The difference between a Dynabook of the future and all the libraries of the past, however, would depend upon the dynamic nature of this medium. A library is a passive repository of cultural treasures. You have to go in and dig out your own meanings. A Dynabook would combine the addictive allure of a good video game with the cultural resources of a library and a museum, with the expressive power of an animated fingerpaint set and a synthesized orchestra. Most importantly, it would actively find the knowledge appropriate for the task of the moment, communicated in the form and language best suited to each individual who used it.

The intelligence of such devices — the reason that software breakthroughs in artificial intelligence research would someday have to intersect with the evolution of personal computers — would influence their ability to bring resources to the person who needs them. When the machines grow smart enough to communicate with eight-year-olds, then the question will shift from how to build a computer that people can easily use to what we all do with that kind of power.

What if libraries were able to find out what most interests you and what you most need to know, and showed you how to find what you wanted? What if you could say to the library: "I wonder what it would be like to live in the Baghdad of the Caliphate?" or "I wonder how it feels to be a whale?" and expect the library to show you? Do you like Van Gogh? How about a simulation of the fields outside his house? Would you care to sit in with Louis Armstrong or Wolfgang Mozart? What would it do to the world if we could all see how everybody else lived and share in their cultures?

If the first effect of the coming metamedium was likely to be the creation of a literate population who shared a new freedom to use symbols and to choose how to view information, then the second effect lay in the power that would be unique to this medium — the power of simulation. Simulation is the power to see what you imagine, to create worlds that obey your command. The computer can build instant sensory representations. The user/programmer explores a universe that reacts, one in which the degree of the user's power depends upon, and grows with, an understanding of the way those worlds work.
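
A toy example, mine rather than Kay's, makes the last point concrete: in even a one-rule simulated world, what you can accomplish tracks what you understand about the rule.

    # A one-law universe: a ball launched upward under constant gravity.
    # Power over this world grows with understanding of its rule: once you
    # know the law, you can pick a launch speed that reaches any height.

    G = 9.8            # the single law of this world (m/s^2)
    DT = 0.001         # simulation time step (s)

    def peak_height(v0):
        """Step the ball through time; return the highest point reached."""
        height, velocity, peak = 0.0, v0, 0.0
        while height >= 0.0:
            height += velocity * DT
            velocity -= G * DT
            peak = max(peak, height)
        return peak

    for v0 in (5.0, 10.0, 20.0):
        print(f"launched at {v0:5.1f} m/s -> peak {peak_height(v0):6.2f} m")

    # Understanding the rule confers control: peak = v0**2 / (2*G), so to
    # graze a 10-meter ceiling, launch at (2 * G * 10) ** 0.5, about 14 m/s.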

The power of simulation to empower the imagination and give form to whatever can be clearly discerned in the mind's eye is what makes this kind of device a "fantasy amplifier." Although there are several homilies that are entitled to be called "Kay's First Law," the statement that he most often calls "Kay's Second Law" is: "Any time you build a fantasy amplifier, you have a winner." His reasoning is that game playing and fantasizing are metaphors for the kind of skill people need to get around in the world.

"We live in a hallucination of our own devising," Kay is fond of saying. But our illusion is so complex, so much of the world we experience appears to be beyond our control, and the operating manual is so difficult to find, that we all tend to get locked into the way our families, societies, and cultures see the world. "We can't exist without fantasy, Kay asserts, "because it is part of being a human. A fantasy is a simpler, more controllable world."

And by practicing how we would control a simpler version of the world, we often figure out how to operate the world outside the fantasy. A game is both controllable and challenging. It is entered vicariously, purposefully, and with an open mind about the outcome. Sports and science and art all involve vicarious, purposeful fantasies in that sense. That's why he feels that video games were not a fad but a precursor to something with much more profound power. And that is the most likely reason why he joined Atari Corporation.

The power of simulation is not necessarily or exclusively a beneficial one, as the legends of today's system-crashers, obsessed programmers, and dark-side hackers attest, and as Kay warned in his Scientific American paper:

The social impact of simulation — the central part of any computing — must also be considered. First, as with language, the computer user has a strong motivation to emphasize the similarity between simulation and experience and to ignore the great differences that symbols interpose between models and the real world. Feelings of power and a narcissistic fascination with the image reflected back from the machine are common. Additional tendencies are to employ the computer trivially (simulating what paper, paints, and a file cabinet can do), as a crutch (using the computer to remember things that we can perfectly well remember ourselves) or as an excuse (blaming the computer for human failings). More serious is the human propensity to place faith in and assign higher powers to an agency that is not completely understood. The fact that many organizations actually base their decisions on — worse, take their decisions from — computer models is profoundly disturbing given the current state of computer art . . . .

Simulation is so seductive to human perception, and so potentially useful in "real world" applications, that its widespread use is inevitable once personal computers grow sophisticated and inexpensive enough. The ethics of how and for what purposes simulations should and should not be used are only beginning to be formulated. The historical events, debates in PTAs and legislatures, and growth in public concern that will accompany the introduction of this medium will help determine the shape of the future ethics of simulation. The best place to look for expert guidance, Kay suggests, might be to those of us who are least prejudiced by precomputer ways of thinking:

Children's Computer Ethic
Children who have not yet lost much of their sense of wonder and fun have helped us to find an ethic about computing: Do not automate the work you are engaged in, only the materials. If you like to draw, do not automate drawing; rather, program your personal computer to give you a new set of paints. If you like to play music, do not build a "player piano"; instead program yourself a new kind of instrument.
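
Here is one reading of that ethic in code, a sketch of mine rather than an example from Kay's paper: Python's standard turtle module programmed into a "kaleidoscope brush," a set of paints with a property no physical paint has, in that every gesture the artist makes is repeated in six rotated copies.

    # "Automate the materials, not the work": a kaleidoscope brush.
    # Each stroke the program draws is repeated with sixfold rotational
    # symmetry -- a paint that behaves in a way paper pigment cannot.

    import turtle

    SYMMETRY = 6  # number of rotated copies of every stroke

    def stroke(pen, length, bend):
        """One gesture of the brush: a gentle arc."""
        for _ in range(10):
            pen.forward(length / 10)
            pen.left(bend / 10)

    pen = turtle.Turtle()
    pen.speed(0)

    for i in range(SYMMETRY):
        pen.penup()
        pen.home()
        pen.setheading(i * 360 / SYMMETRY)  # rotate the whole gesture
        pen.pendown()
        stroke(pen, 120, 90)

    turtle.done()

The drawing itself is still the artist's work; only the material, the brush, has been automated.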

The way we think about computers — as machines, as systems that mimic human capabilities, as tools, as toys, as competitors, or as partners — will play a large part in determining their future role in society. In the conclusion of his article, Kay cautions against the presumptions of present-day minds about what the minds of future generations may or may not choose to do with the instruments past generations worked to create:

A popular misconception about computers is that they are logical. Forthright is a better term. Since computers can contain arbitrary descriptions, any conceivable collection of rules, consistent or not, can be carried out. Moreover, computers' use of symbols, like the use of symbols in language and mathematics, is sufficiently disconnected from the real world to enable them to create splendid nonsense. Although the hardware of the computer is subject to natural laws (electrons can move through circuits only in certain physically defined ways), the range of simulations the computer can perform is bounded only by the limits of human imagination. In a computer, spacecraft can be made to travel faster than the speed of light, time to travel in reverse.

It may seem almost sinful to discuss the simulation of nonsense, but only if we want to believe that what we know is correct and complete. History has not been kind to those who subscribe to this view. It is just this realm of apparent nonsense that must be kept open for the developing minds of the future. Although the personal computer can be guided in any direction we choose, the real sin would be to make it act like a machine!
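
Kay's faster-than-light spacecraft takes only a few lines to make literal; the following sketch is my own illustration, not the book's. The simulated world enforces exactly the laws its author writes, and no others.

    # "Splendid nonsense": a simulation obeys whatever rules we give it.
    # This spacecraft accelerates without limit; nothing in the program
    # knows that physics forbids exceeding the speed of light.

    C = 299_792_458.0         # speed of light in a vacuum, m/s

    velocity = 0.0            # m/s
    thrust = 1.0e7            # an absurd constant acceleration, m/s^2
    for second in range(60):
        velocity += thrust    # no relativistic correction -- by design
        if velocity > C:
            print(f"t = {second} s: {velocity / C:.3f}c -- faster than light")
            break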

Because he started out young in a field that was young itself, Kay was one of the first of the generation of infonauts, the ones who grew up with the tools created by the pioneers, and who have been using them to create a medium for the rest of us. One of the things he learned at ARPA and Utah, SAIL and PARC, Atari and Apple, was that putting together a group of talents and leaving them alone might be the most important ingredient in invoking the breakthroughs he'll need to complete his dream.

People are beginning to wonder what Kay, now at Apple, intends to do next. "I would imagine that he feels more than a little frustrated," said Bob Taylor, in 1984, referring to the fact that Alan Kay hadn't produced anything as tangible as Smalltalk in a number of years. A hotshot programmer at Apple put it differently: "He deserves to be called a visionary, because he is. And I love to hang around him because he knows so much about so many things. But it gets a little tiring the third time you hear him say, 'We already did that back in '74.' "

Atari was the first institution where Alan Kay played a significant role but didn't make any breakthroughs. Because of what happened — or didn't happen — with the Atari team, he probably learned that being an inspirational, even visionary, member of a team doesn't necessarily mean being cut out to lead one. Before we explore the end of the dream at Atari, however, another infonaut by the name of Brenda will give us a glimpse of part of what Kay and his cohorts attempted to accomplish.

read on to
Chapter Twelve:
Brenda and the Future Squad


1985 howard rheingold, all rights reserved worldwide.