7. BEING ANALOG

DONALD A. NORMAN

From Norman, D. A. (In press, Fall, 1998). The invisible computer. Cambridge, MA: MIT Press. Copyright © 1997, 1998 Donald A. Norman. All rights reserved.

 

Making Sense of the World
Humans Versus Computers
Biological Versus Technological Evolution
The Ever-Increasing Pace of Change
Treating People Like Machines
The World Is Not Neat and Tidy
Making Sense of the World
Human Error
Humans & Computers as Cooperating Systems
Chapter 7: Notes

 

 

We are analog beings trapped in a digital world, and the worst part is, we did it to ourselves.

We humans are biological animals. We have evolved over millions of years to function well in the environment, to survive. We are analog devices following biological modes of operation. We are compliant, flexible, tolerant. Yet we people have constructed a world of machines that requires us to be rigid, fixed, intolerant. We have devised a technology that requires considerable care and attention, that demands it be treated on its own terms, not on ours. We live in a technology-centered world where the technology is not appropriate for people. No wonder we have such difficulties.

Here we are, wandering about the world, bumping into things, forgetful of details, with a poor sense of time, a poor memory for facts and figures, unable to keep attention on a topic for more than a short duration, reasoning by example rather than by logic, and drawing upon our admittedly deficient memories of prior experience. When viewed this way, we seem rather pitiful. No wonder that we have constructed a set of artificial devices that are very much not in our own image. We have constructed a world of machinery in which accuracy and precision matter. Time matters. Names, dates, facts, and figures matter. Accurate memory matters. Details matter.

All the things we are bad at matter, all the things we are good at are ignored. Bizarre.

MAKING SENSE OF THE WORLD

 

People are biological animals, evolved to fit within the natural world. We are flexible and tolerant. We excel at perception, at creativity, at the ability to go beyond the information given, making sense of otherwise chaotic events. We often have to interpret events far beyond the information available, and our ability to do this efficiently and effortlessly, usually without even being aware that we are doing so, greatly adds to our ability to function. This ability to put together a sensible, coherent image of the world in the face of limited evidence allows us to anticipate and predict events, the better to cope with an ambiguous, ever-changing world.

Here's a simple test of your memory:

How many animals of each type

did Moses take on the Ark?

 

What's the answer? How many animals? Two? Be careful: what about an amoeba, a sexless, single-celled animal that reproduces by dividing itself in two? Did he need to take two of these?

Answer: None. No animals at all. Moses didn't take any animals onto the ark; it was Noah.

Some of you were fooled. Why? Because people hear what is intended, not what is said. In normal language, people ask real questions that have real answers and real meaning. It is only psychology professors and jokesters who ask trick questions. If you spotted the trick, it is because you were unnaturally suspicious or alert. We don't need such alertness in normal human interaction. Those of you who were fooled responded normally: that is how we are meant to be.

Your mind interpreted the question meaningfully, making sense of the information. It may have confused "Moses" with "Noah," but it was aided by the fact that those names have a lot of similarity: both are short, with two syllables. Both are biblical, from the Old Testament. In normal circumstances, the confusion would be beneficial, for it is the sort of error that a speaker might make, and it is useful when a listener can go beyond superficial errors.

Note that the ability to be insensitive to simple speech errors does not mean the system is readily fooled. Thus, you would not have been fooled had I asked:

How many animals of each type

did Clinton take on the Ark?

 

The name Clinton is not sufficiently close to the target: it requires a biblical name to fool you. From a practical point of view, although a speaker might say "Moses" when "Noah" was intended, it is far less likely that someone would mistakenly say a non-biblical name such as "Clinton." The automatic interpretation of the original question is intelligent and sensible. The fact that the first question can fool people is a testament to our powers, not an indictment of them. Once again, in normal life, such corrections are beneficial. Normal life does not deliberately try to fool us. Take note of this example, for it is fundamental to understanding people and, more importantly, to understanding why computers are so different from people, why today's technology is such a bad match.

Why do accuracy and precision matter? In our natural world, they don't. We are approximate beings: we get at the meanings of things, and for this, the details don't much matter. Accurate times and dates matter only because we have created a culture in which these things are important. Accurate and precise measurements matter because the machines and procedures we have created are rigid, inflexible, and fixed in their ways, so if a measurement is off by some tiny fraction, the result can be a failure to operate. Worse yet, it can cause a tragic accident.

People are compliant: we adapt ourselves to the situation. We are flexible enough to allow our bodies and our actions to fit the circumstances. Animals don't require precise measurements and high accuracy to function. Machines do.

The same story is true of time, of facts and figures, and of accurate memory. These only matter because the mechanical, industrialized society created by people doesn't match people. In part, this is because we don't know how to do any better. Can we build machines that are as compliant and flexible as people? Not today. Biology doesn't build: it grows, it evolves. It constructs life out of soft, flexible parts. Parts that are self-repairable. We don't know how to do this with our machines: we can only build mechanical devices out of rigid substances like wood or steel. We only build information devices out of binary logic, with its insistence upon precision. We invented the artificial mathematics of logic the better to enhance our own thought processes.

The dilemma facing us is the horrible mismatch between requirements of these human-built machines and human capabilities. Machines are mechanical, we are biological. Machines are rigid and require great precision and accuracy of control. We are compliant. We tolerate and produce huge amounts of ambiguity and uncertainty, very little precision and accuracy. The latest inventions of humankind are those of the digital technology of information processing and communication, yet we ourselves are analog devices. Analog and biological.

An analog device is one in which the representation of information corresponds directly to the physical quantity being represented. In an analog recording the stored signal varies in value precisely in the same way as the sound energy varies in time. A phonograph recording is analog: it recreates the variations in sound energy through the wiggles and depth of the groove. In a tape recording, the strength of the magnetic field on the tape varies in analogous fashion to the sound energy variations. These are analog signals.

Digital signals are very different. Here, what is recorded is an abstraction of the real signals. Digital encoding was invented mainly to get rid of noise. In the beginning, electrical circuits were all analog. But electrical circuits are noisy, meaning there are unwanted voltage variations. The noise gets in the way, mostly because the circuits are unable to distinguish between the stuff that matters and the stuff that doesn't: they blindly process it all.

Enter the digital world. Instead of using a signal that is analogous to the physical event, the physical event is transformed into a series of numbers that describes the original. In high-quality recording of music, the sound energy is sampled over 40,000 times each second and transformed into numbers that represent the energy value at the time each sample was made. The numbers are usually represented in the form of binary digits rather than the familiar decimal ones, which means that any digit can have only one of two states, 0 or 1, rather than the ten possible states of a decimal digit. When there are only two states to be distinguished, the operation is far simpler and less subject to error than when a precise value must be determined, as is required with an analog signal. Binary signals are relatively insensitive to noise.
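To make the sampling idea concrete, here is a minimal sketch in Python (mine, not from the original text) of how an analog waveform might be digitized. The 44,100 samples per second and the 16-bit sample size are the familiar compact-disc conventions, used here only as illustrative assumptions.

```python
import math

SAMPLE_RATE = 44_100   # samples per second (the compact-disc convention)
BIT_DEPTH = 16         # each sample is stored as a 16-bit binary number

def digitize_tone(frequency_hz, duration_s):
    """Measure an 'analog' pure tone at regular instants and round each
    measurement to one of 2**BIT_DEPTH discrete levels (quantization)."""
    n_samples = int(SAMPLE_RATE * duration_s)
    max_level = 2 ** (BIT_DEPTH - 1) - 1          # largest 16-bit level
    samples = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE                       # the instant of this sample
        analog_value = math.sin(2 * math.pi * frequency_hz * t)
        samples.append(round(analog_value * max_level))
    return samples

# One millisecond of a 440 Hz tone becomes about 44 numbers, each of
# which can be written as a string of sixteen 0s and 1s.
digital = digitize_tone(440, 0.001)
print(len(digital), format(digital[10] & 0xFFFF, "016b"))
```

The point of the exercise is only this: the smooth, continuous wave has been replaced by a list of two-state digits, which a circuit can copy and check without ever having to judge an in-between value.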

As you can imagine, recording and playing back a digital representation of sound waves requires a lot of processing. It is necessary to transform the sound into numbers, store the numerical digits, and then retrieve them and turn them back into sound energy. Such rapid transformation wasn't possible at an affordable price until very recently, which is why the emphasis on digital signals seems new. It is only recently that the technology became capable of high-quality digital encoding of audio and television signals, but the concept is old.

There are a number of common misconceptions about digital and analog signals. One is that "analog" means continuous whereas "digital" means discrete. No, although this is often true, that is not the basis for the distinction. Think of "analog" as meaning "analogous": analogous to the real world. If the real world event is discrete, so too will be the analog one. If the physical process is continuous, then so too will be the analog one. Digital, however, is always discrete: one of a limited number of values, usually one of two, but occasionally one of three, four, or ten.

But the strangest misconception of all is that "digital" is somehow good, "analog" bad. This just isn't so. Yes, digital is good for our contemporary machines, but analog might be better for future machines. And analog is certainly far better for people. Why? Mainly because of the impact of noise.

Our evolution has been guided by the requirements of the world. Our perceptual systems evolved to deal with the world. In fact, if you want to understand how human perception works, it helps to start off by understanding how the world of light and sound works, because the eyes and ears have evolved to fit the nature of these physical signals. What this means is that we interact best with systems that are either part of the real world or analogous to them: analog signals.

Analog signals behave in ways the person can understand. A slight error or noise transforms the signals in known ways, ways the body has evolved to interpret and cope with. In a digital signal, the representation is so arbitrary that a simple error can have unexpected consequences.

If there is some noise in a conventional television signal, encoded in analogical form, we see some noise on the screen. Usually we can tolerate the resulting image, at least as long as we can make sense of it. Small amounts of noise have slight impact.

The modern, efficient digital encodings use compression technologies that eliminate redundancy. Digital television signals are almost always compressed to save space and bandwidth, the most common scheme being the algorithms devised by the Moving Picture Experts Group, or MPEG. If any information is lost, it takes a while before the system resends enough information to allow recovery. MPEG encoding breaks up the picture into rectangular regions. Noise can make it impossible for the system to reconstruct an entire region. As a result, when the image is noisy, whole regions of the screen break up and distort in ways the human brain cannot reconstruct, and it takes a few seconds until the picture reforms itself.
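A toy sketch (my own simplification, not actual MPEG) shows why the two kinds of signal degrade so differently. In a raw, analog-like recording each stored value stands on its own, so damaging one value spoils only that value. In a compressed encoding that stores a starting value plus differences, damaging one stored value spoils everything reconstructed from it, which is roughly why a noisy compressed picture loses whole regions at once.

```python
def corrupt(values, index, bad_value):
    """Return a copy of the stored data with one value damaged by noise."""
    damaged = list(values)
    damaged[index] = bad_value
    return damaged

# Raw storage: each sample is kept as-is.
raw = [10, 12, 13, 15, 14, 13, 11, 10]
print(corrupt(raw, 3, 99))             # only the fourth sample is wrong

# Difference coding: keep the first sample, then only the changes.
# This removes redundancy, since neighboring samples are usually similar.
def encode(block):
    return [block[0]] + [b - a for a, b in zip(block, block[1:])]

def decode(coded):
    out = [coded[0]]
    for delta in coded[1:]:
        out.append(out[-1] + delta)
    return out

coded = encode(raw)
print(decode(corrupt(coded, 3, 99)))   # every value from that point on is wrong
```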

The real problem with being digital is that it implies a kind of slavery to accuracy, a requirement that is most unlike the natural workings of the person. People are analog, insensitive to noise, insensitive to error. People extract meanings, and as long as the meanings are unchanged, the details of the signals do not matter. They are not noticed, they are not remembered.

It is perfectly proper and reasonable for machines to use digital encodings for their internal workings. Machines do better with digital encoding. The problem comes about in the interaction between people and machines. People do best with signals and information that fit the way they perceive and think, which means analogous to the real world. Machines do best with signals and information that are suited to the way they function, which means digital, rigid, precise. So when the two have to meet, which side should dominate? In the past, it has been the machine that dominates. In the future it should be the person. Stay tuned for Chapter 9.

 

 

HUMANS VERSUS COMPUTERS

 

The ever-increasing complexity of everyday life brings with it both great opportunities and major challenges. One of the challenges, that the brain does not work at all like a computer, also provides us with an opportunity: the possibility of new modes of interaction that allow us to take advantage of the complementary talents of humans and machines.

The modern era of information technology has been with us but a short time. Computers are less than a century old. The technology has been constructed deliberately to produce mechanical systems that operate reliably, algorithmically, and consistently. They are based upon mathematics, or more precisely arithmetic in the case of the first computing devices and logic in the case of the more modern devices.

Contrast this with the human brain. Human beings are the results of millions of years of evolution, where the guiding principle was survival of the species, not efficient, algorithmic computation. Robustness in the face of unexpected circumstances plays a major role in the evolutionary process. Human intelligence has co-evolved with social interaction, cooperation and rivalry, and communication. The ability to learn from experience and to communicate and thereby coordinate with others has provided powerful adaptations for changing, complex environmental forces. Interestingly enough, the ability to deceive seems to have been one driving force. Only the most intelligent of animals is able to employ a sophisticated level of intentional, purposeful deception. Only the most sophisticated animal is capable of seeing through the deceit. Sure, nature also practices deception through camouflage and mimicry, but this isn't willful and intentional. Primates are the most skilled at intentional, willful deception, and the most sophisticated primate, the human, is the most sophisticated deceiver of all.

Note that some deception is essential for the smooth pursuit of social interaction: the "white lie" smoothes over many otherwise discomforting social clashes. It is not always best to tell the truth when someone asks how we like their appearance, or their presentation, or the gift they have just given us. One could argue that computers won't be truly intelligent or social until they too are able to deceive.

We humans have learned to control the environment. We are the masters of artifacts. Physical artifacts make us stronger, faster, and more comfortable. Cognitive artifacts make us smarter. Among the cognitive artifacts are the invention of writing and other notational systems, such as those used in mathematics, dance, and musical transcription. The result of these inventions is that our knowledge is now cumulative: each generation grows upon the heritage left behind by previous generations. This is the good news. The bad news is that the amount to be learned about the history, culture, and the techniques of modern life increases with time. It now takes several decades to become a truly well-educated citizen. How much time will be required in fifty years? In one hundred years?

The biological nature of human computation, coupled with the evolutionary process by which the brain has emerged, leads to a very different style of computation from the precise, logic-driven systems that characterize current computers. The differences are dramatic. Computers are constructed from a large number of fast, simple devices, each following binary logic and working reliably and consistently. Errors in the operation of any of the underlying components are not tolerated, and they are avoided either by careful design to minimize failure rates or through error-correcting coding in critical areas. The resulting power of the computer comes from the high speed of relatively simple computing devices.

Biological computation is performed by a very large number of slow, complex devices (neurons), each doing considerable computation and operating through electrical-chemical interactions. The power of the computation is a result of the highly parallel nature of the computation and the complex computations done by each of the billions of neural cells. Moreover, the cells are bathed in fluids whose chemistry can change rapidly, providing a means for rapid deployment of hormones and other signals to the entire system, chemicals that are site-specific. Think of it as a packet-switching deployment of chemical agents. The result is that the computational basis is dynamic, capable of rapid, fundamental change. Affect, emotion, and mood all play a powerful, and as yet poorly understood, role in human cognition. Certainly all of us have experienced the tension when logic dictates one course of action but mood or emotion another: more often than not, we follow mood or emotion.

Whatever the mode of computation (and the full story is not yet known), it is certainly not binary logic. Each individual biological element is neither reliable nor consistent. Errors are frequent (whole cells may die), and reliability is maintained through massive redundancy as well as through the inherently error-tolerant nature of the computational process and, for that matter, the relatively high error-tolerance of the resulting behavior.
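A rough sketch of that idea (mine, not Norman's): even when every individual component is unreliable, a large redundant collection of them, with the output decided by majority vote, behaves almost perfectly. The 20 percent error rate and the 101 voters below are arbitrary illustrative numbers.

```python
import random

ERROR_RATE = 0.2   # an individual unit gives the wrong answer 20% of the time
COPIES = 101       # how many redundant units vote on each answer

def unreliable_unit(correct_answer):
    """A single error-prone component, loosely analogous to one neuron."""
    if random.random() < ERROR_RATE:
        return 1 - correct_answer   # flip the answer
    return correct_answer

def redundant_system(correct_answer):
    """Many unreliable units vote; the majority becomes the system's answer."""
    votes = sum(unreliable_unit(correct_answer) for _ in range(COPIES))
    return 1 if votes > COPIES / 2 else 0

trials = 10_000
failures = sum(redundant_system(1) != 1 for _ in range(trials))
print(f"single unit error rate: {ERROR_RATE:.0%}")
print(f"redundant system error rate: {failures / trials:.4%}")
```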

These last points cannot be over-emphasized. The body, the brain, and human social interaction have all co-evolved to tolerate large variations in performance under a wide-ranging set of environmental conditions. It is a remarkably error-tolerant and forgiving system. It uses both electrical and chemical systems of communication and processing. Conscious and subconscious processing probably use different computational mechanisms, and the role of emotions and affect is not yet understood.

Human language serves as a good example of the evolution of a robust, redundant, and relatively noise-insensitive means of social communication. Errors are corrected so effortlessly that often neither party is aware of the error or the correction. The communication relies heavily upon a shared knowledge base, intentions, and goals: people with different cultural backgrounds often clash, even though they speak the same language. The result is a marvelously complex structure for social interaction and communication. Children learn language without conscious effort, yet the complexities of human languages still defy complete scientific understanding.

 

 

Biological Versus Technological Evolution

 

We humans have evolved to fit the natural environment. At the same time we have learned to modify and change the environment, leading to a co-evolution in which we have changed to fit the world, simultaneously changing the world, thus leading to further evolutionary change. Until recently, this co-evolution proceeded at a human pace. We developed language and tools. We discovered how to control fire and construct simple tools. The tools became more complex as simple tools became machines. The process was slow, the better to fit the new ways with the old, the new methods with human capabilities.

Biological evolution of humankind proceeds too slowly to be visible, but there is a kind of technological and environmental evolution that proceeds rapidly. We evolve our human-made artifacts to fit our abilities. This evolution is similar to, yet different from, the biological kind. For one thing, it has a history: it is Lamarckian, in that changes made to one generation can be propagated to future ones. Nonetheless, it is an evolutionary process because it tends to be unguided except by rules of survival. Each generation is but a small modification of the previous one.

A good illustration of how an evolutionary process shapes our human-invented artifacts is sports. Sports require an exquisite mix of the doable and the difficult. Make a game too easy and it loses its appeal. Make it too difficult and it is unplayable. The range from too easy to too difficult is huge, fortunately so. One of our traits is the ability to learn, to develop skills far beyond that which the unpracticed person can do. Thus, some games, such as tic-tac-toe, which seem difficult when first encountered, are so readily mastered that they soon lose their appeal. A successful game is one that has a wide range of complexity, playable by beginners and experts alike, although not necessarily at the same time. Successful games include soccer, rugby, tennis, basketball, baseball, football, chess, go, checkers, poker, and bridge. These are multi-dimensional, rich and multi-faceted. As a result, the beginner can enjoy part of their charm while the expert can exploit all the multiple dimensions.

Games work well when they do not use too much technology. The reason is simple: games are suited to human reaction times, size, and strength. Add too much technology to the mix, and you soon move the game beyond the reach of human abilities. This is aptly illustrated in war games, the deadly dueling exercises in which the armies of the world pit themselves one against the other. But here, the technologies are deliberately exploited to exceed human capability, so much so that it can take ten years of training to master a modern jet fighter plane, and even then the human pilot can be rendered temporarily unconscious during violent maneuvers. These are games not fit for people.

Alas, the slow, graceful co-evolution of people and environment, and of the tools, artifacts, and games that we have designed, no longer holds. Each generation benefits from the one before, and the accumulated knowledge leads to more rapid change. We benefit greatly from this cumulative buildup of knowledge, but the price we pay is that each succeeding generation has more and more to learn. The past acts as a wonderful starting point, propelling us forward on the shoulders of giants. But it can also be seen as a massive anchor, compelling us to spend more and more time at school, learning the accumulated wisdom of the ages, to the point that one's motivation and energy may be depleted before the studies are over.

 

 

The Ever-Increasing Pace of Change

 

Once upon a time it was possible for everyone to learn the topics of a culture. After all, things changed slowly, at a human pace. As they matured, children learned what had gone before, and from then on they could keep up with the changes. The technology changed slowly. Moreover, it was mechanical, which meant it was visible. Children could explore it. Teenagers could disassemble it. Young adults could hope to improve it.

Once upon a time technological evolution proceeded at a human pace. Crafts and sports evolved over a lifetime. Even though the results could be complex, the reason behind the complexity could usually be seen, examined, and talked about. The technology could be lived and experienced. As a result, it could be learned.

Today, this is no longer possible. The slow evolutionary pace of life is no longer up to the scale and pace of technological change. The accumulation of knowledge is enormous, for it increases with every passing year. Once upon a time, a few years of schooling, or even informal learning, was sufficient. Today, formal schooling is required, and the demands upon it continually increase. The number of different topics that must be mastered, from history and language to science and technology to practical knowledge and skills, is ever-increasing. Once a grade-school education would suffice for most people. Then high school was required. Then college, post-graduate education, and even further education after that. Today, no amount of education is sufficient.

Scientists are no longer able to keep up with advances even within their own field, let alone in all of science. As a result, we are in the age of specialization, where it is all one person can do to keep up with the pace in some restricted domain of endeavor. But with nothing but specialists, how can we bridge the gaps?

The new technologies can no longer be learned on their own. Today, the technology tends to be electronic, which means that its operation is invisible: it takes place inside semiconductor circuits through transfers of voltages, currents, and electromagnetic fields, all of them invisible to the eye. A single computer chip may have ten million components, and chips with 100 million components are in the planning stage: who could learn such things by disassembly, even were disassembly possible? So too with computer programs: a program with hundreds of thousands of lines of instructions is commonplace. Those with millions of instructions are not infrequent.

Worse, the new technology can often be arbitrary, inconsistent, complex, and unnecessary. It is all up to the whim of the designer. In the past, physical structures posed their own natural constraints upon the design and the resulting complexity. But with information technologies, the result can be as simple or complex as the designer wills it to be, and far too few designers have any appreciation of the requirements of the people who must use their designs.

Even when a designer is considerate of the users of the technology, there may be no natural relationship between one set of designs and another. In the physical world, the natural constraints of physical objects meant that similar tools worked in similar ways. Not so in the world of information: very similar tools may work in completely different, perhaps even contradictory, ways.

 

 

TREATING PEOPLE LIKE MACHINES

 

What an exciting time the turn of the century must have been! The period from the late 1800s through the early 1900s was one of rapid change, in many ways paralleling the changes that are taking place now. In a relatively short period of time, the entire world went through rapid, almost miraculous technological invention, forever changing the lives of its citizens, society, business and government. In this period, the light bulb was developed and electric power plants sprang up across the nation. Electric motors were developed to power factories. The telegraph spanned the American continent and the world, followed by the telephone. Then came the phonograph, for the first time in history allowing voices, songs, and sounds to be preserved and replayed at will. At the same time, mechanical devices were increasing in power. The railroad was rapidly expanding its coverage. Steam-powered ocean-going ships were under development. The automobile was invented, first as an expensive, hand-made machine, starting with Daimler and Benz in Europe. Henry Ford developed the first assembly line for the mass production of relatively inexpensive automobiles. The first airplane was flown and within a few decades would carry mail, passengers, and bombs. Photography was in its prime and motion pictures were on the way. And soon to come was radio, allowing signals, sounds, and images to be transmitted all across the world, without the need for wires. It was a remarkable period of change.

It is difficult today to imagine life prior to these times. At nighttime the only lighting was through flames: candles, fireplaces, oil and kerosene lamps, and in some places, gas. Letters were the primary means of communication, and although letter delivery within a large city was rapid and efficient, with delivery offered more than once each day, delivery across distances could take days or even weeks. Travel was difficult, and many people never ventured more than 30 miles from their homes during their entire lives. Everyday life was quite different from today. But in what to a historian is a relatively short period, the world changed dramatically in ways that affected everyone, not just the rich and upper class, but the everyday person as well.

Light, travel, entertainment: all changed through human inventions. Work did too, although not always in beneficial ways. The factory already existed, but the new technologies and processes brought forth new requirements, along with opportunities for exploitation. The electric motor allowed a more efficient means of running factories. But as usual, the largest change was social and organizational: the analysis of work into a series of small actions and the belief that if each action could be standardized, each organized into "the one best way," then automated factories could reap even greater efficiencies and productivity. Hence the advent of time-and-motion studies, of "scientific management," and of the assembly line. Hence too came the dehumanization of the worker, for now the worker was essentially just another machine in the factory, analyzed like one, treated like one, and asked not to think on the job, for thinking slowed down the action.

The era of mass production and the assembly line resulted in part from the efficiencies of the "disassembly line" developed by the meat-packing factories. The tools of scientific management took into account the mechanical properties of the body but not the mental and psychological ones. The result was to cram ever more motions into the working day, treating the factory worker as a cog in a machine, deliberately depriving work of all meaning, all in the name of efficiency. These beliefs have stuck with us, and although today we do not go to quite the extremes advocated by the early practitioners of scientific management, the die was cast for the mindset of ever-increasing efficiency, ever-increasing productivity from the workforce. The principle of improved efficiency is hard to disagree with. The question is, at what price?

Frederick Taylor thought there was "the one best way" of doing things. Taylor's work, some people believe, has had a larger impact upon the lives of people in this century than that of anyone else. His book, The Principles of Scientific Management, published in 1911, guided factory development and workforce habits across the world, from the United States to Stalin's attempt to devise an efficient communist workplace in the newly formed Soviet Union. You may never have heard of him, but he is primarily responsible for our notions of efficiency, for the work practices followed in industry across the world, and even for the sense of guilt we sometimes feel when we have been "goofing off," spending time on some idle pursuit when we should be attending to business.

Taylor's "scientific management" was a detailed, careful study of jobs, breaking down each task into its basic components. Once you knew the components, you could devise the most efficient way of doing things, devise procedures that enhanced performance and increased the efficiency of workers. If Taylor's methods were followed properly, management could raise workers' pay while at the same time increasing company profit. In fact, Taylor's methods required management to raise the pay, for money was used as the powerful incentive to get the workers to follow the procedures and work more efficiently. According to Taylor, everybody would win: the workers would get more money, the management more production and more profit. Sounds wonderful, doesn't it? The only problem was that workers hated it.

Taylor, you see, thought of people as simple, mechanical machines. Find the best way to do things, and have people do it, hour after hour, day after day. Efficiency required no deviation. Thought was eliminated. First of all, said Taylor, the sort of people who could shovel dirt, do simple cutting, lathing, and drilling, and in general do the lowest level of tasks, were not capable of thought. "Brute laborers" is how he regarded them. Second, if thought was needed, it meant that there was some lack of clarity in the procedures or the process, which signaled that the procedures were wrong. The problem with thinking, explained Taylor, was not only that most workers were incapable of it, but that thinking slowed the work down. That's certainly true: why, if we never had to think, just imagine how much faster we could work. In order to eliminate the need for thought, Taylor stated that it was necessary to reduce all work to the routine, that is, all the work except for people like him, who didn't have to keep fixed hours, who didn't have to follow procedures, who were paid literally hundreds of times greater wages than the brutes, and who were allowed, even encouraged, to think.

Taylor thought that the world itself was neat and tidy. If only everyone would do things according to procedure, everything would run smoothly, producing a clean, harmonious world. Taylor may have thought he understood machines, but he certainly didn't understand people. In fact, he didn't really understand the complexity of machines and the complexity of work. And he certainly didn't understand the complexity of the world.

 

 

The World Is Not Neat and Tidy

 

The world is not neat and tidy. Not only do things not always work as planned, but the notion of a "plan" itself is suspect. Organizations spend a lot of time planning their future activities, but although the act of planning is useful, the actual plans themselves are often obsolete even before their final printing.

There are lots of reasons for this. Those philosophically inclined can talk about the fundamental nature of quantum uncertainty, of the fundamental statistical nature of matter. Alternatively, one can talk of complexity theory and chaos theory, where tiny perturbations can have major, unexpected results at some future point. I prefer to think of the difficulties as consequences of the complex interactions that take place among the trillions of events and objects in the world, so many interactions that even if science were advanced enough to understand each individual one (which it isn't), there are simply too many possible combinations and permutations ever to have worked out all the possibilities. All of these different views are quite compatible with one another.

Consider these examples of things that can go wrong:

 

These examples illustrate several points. The world is extremely complex, too complex to keep track of, let alone predict. In retrospect, looking back after an accident, it always seems obvious. There are always a few simple actions that, had they been taken, would have prevented the accident. There are always precursor events that, had they been perceived and interpreted properly, would have warned of the coming accident. Sure, but this is in retrospect, when we know how things turned out.

Remember, life is complex. Lots of stuff is always happening, most of which is irrelevant to the task at hand. We all know that it is important to ignore the irrelevant and attend to the relevant. But how does one know which is which? Ah.

We human beings are a complex mixture of motives and mechanisms. We are sense-making animals, always trying to understand and give explanations for the things we encounter. We are social animals, seeking company, working well in small groups. Sometimes this is for emotional support, sometimes for assistance, sometimes for selfish reasons, so we have someone to feel superior to, to show off to, to tell our problems to. We are narcissistic and hedonistic, but also altruistic. We are lots of things, sometimes competing, conflicting things. And we are also animals, with complex biological drives that strongly affect behavior: emotional drives, sexual drives, hunger drives. Strong fears, strong desires, strong phobias, and strong attractions.

 

 

Making Sense of the World

 

If an airplane crashes on the border between the United States and Canada, killing half the passengers, in which country should the survivors be buried?

 

We are social creatures, understanding creatures. We try to make sense of the world. We assume that information is sensible, and we do the best we can with what we receive. This is a virtue. It makes us successful communicators, efficient and robust in going about our daily activities. It also means we can readily be tricked. It wasn't Moses who brought the animals aboard the ark, it was Noah. It isn't the survivors who should be buried, it is the casualties.

It's a good thing we are built this way: this compliance saves us whenever the world goes awry. By making sense of the environment, by making sense of the events we encounter, we know what to attend to, what to ignore. Human attention is the limiting factor, a well-known truism of psychology and one of critical importance today. Human sensory systems are bombarded with far more information than can be processed in depth: some selection has to be made. Just how this selection is done has been the target of prolonged investigation by numerous cognitive scientists who have studied people's behavior when overloaded with information, by neuroscientists who have tried to follow the biological processing of sensory signals, and by a host of other investigators. I was one of them: I spent almost ten years of my research life studying the mechanisms of human attention.

One understanding of the cognitive process of attention comes from the concept of a "conceptual model," a concept that will gain great importance in Chapter 8 when I discuss how to design technology that people can use. A conceptual model is, to put it most simply, a story that makes sense of the situation.

I sit at my desk with a large number of sounds impinging upon me. It is an easy matter to classify the sounds. What is all that noise outside? A family must be riding their bicycles and the parents are yelling to their children. And the neighbor's dogs are barking at them, which is why my dogs started barking. Do I really know this? No. I didn't even bother to look out the window: my mind subconsciously, automatically created the story, constructing a comprehensive explanation for the noises, even as I concentrated upon the computer screen.

How do I know what really happened? I don't. I listened to the sounds and created an explanation, one that was logical, heavily dependent upon past experience with those sound patterns. It is very likely to be correct, but I don't really know.

Note that the explanation also told me which sounds went together. I associated the barking dogs with the family of bicyclists. Maybe the dogs were barking at something else. Maybe. The point is not that I might be wrong, the point is that this is normal human behavior. Moreover, it is human behavior that stands us in good stead. I am quite confident that my original interpretations were correct, confident enough that I won't bother to check. I could be wrong.

A good conceptual model of events allows us to classify them into those relevant and those not relevant, dramatically simplifying life: we attend to the relevant and only monitor the irrelevant. Mind you, this monitoring and classification is completely subconscious. The conscious mind is usually unaware of the process. Indeed, the whole point is to reserve the conscious mind for the critical events of the task being attended to and to suppress most of the other, non-relevant events from taking up the limited attentional resources of consciousness.

On the whole, human consciousness avoids paying attention to the routine. Conscious processing attends to the non-routine, to the discrepancies and novelties, to things that go wrong. As a result, we are sensitive to changes in the environment, remarkably insensitive to the commonplace, the routine.

Consider how conceptual models play a role in complex behavior, say the behavior of a nuclear power plant, with many systems interacting. A nuclear power plant is large and complex, so it is no surprise that things are always breaking. Minor things. I like to think of this as analogous to my home. In my house, things seem forever to be breaking, and my home is nowhere near as complex as a power station. Light bulbs are continually burning out, several door hinges and motors need oiling, the bathroom faucet leaks, and the fan for the furnace is making strange noises. Similar breakdowns happen in the nuclear power plant, and although there are repair crews constantly attending to them, the people in the control room have to decide which of the events are important, which are just the everyday background noise that have no particular significance.

Most of the time people do brilliantly. People are very good at predicting things before they happen. Experts are particularly good at this because of their rich prior experience. When a particular set of events occurs, they know exactly what will follow.

But what happens when the unexpected happens? Do we go blindly down the path of the most likely interpretation? Of course; in fact, this is the recommended strategy. Most of the time we behave not only correctly, but cleverly, anticipating events before they happen. You seldom hear about those instances. We get the headlines when things go wrong, not when they go right.

Look back at the incidents I described earlier. The nuclear power incident is the famous Three Mile Island event that completely destroyed the power-generating unit and caused such a public loss of confidence in nuclear power that no American plant has been built since. The operators misdiagnosed the situation, leading to a major calamity. But the misdiagnosis was a perfectly reasonable one. As a result, they concentrated on items they thought relevant to their diagnosis and missed other cues, which they thought were just part of the normal background noise. The tags that blocked the view would not normally have been important.

In the hospital x-ray situation, the real error was in the design of the software system, but even here, the programmer erred in not thinking through all of the myriad possible sequences of operation, something not easy to do. There are better ways of developing software that would have made it more likely to have caught these problems before the system was released to hospitals, but even then, there are no guarantees. As for the hospital personnel who failed to understand the relationship, well, they too were doing the best they could to interpret the events and to get through their crowded, hectic days. They interpreted things according to normal events, which was wrong only because this one was very abnormal.

Do we punish people for failure to follow procedures? This is what Frederick Taylor would have recommended. After all, management determines the one best way to do things, writes a detailed procedure to be followed in every situation, and expects workers to follow it. That's how we get maximum efficiency. But how is it possible to write a procedure for absolutely every possible situation, especially in a world filled with unexpected events? Answer: it's not possible. This doesn't stop people from trying. Procedures and rule books dominate industry. The rule books take up huge amounts of shelf space. In some industries, it is impossible for any individual to know all the rules. The situation is made even worse by national legislatures, which can't resist adding new rules. Was there a major calamity? Pass a law prohibiting some behavior, or requiring some other behavior. Of course, the law strikes at the easiest source to blame, whereas the situation may have been so complex that no single factor was to blame. Nonetheless, the law sits there, further constraining sense and reasonableness in the conduct of business.

Do we need procedures? Of course. The best procedures will mandate outcomes, not methods. Methods change: it is the outcomes we care about. Procedures must be designed with care and attention to the social, human side of the operation. Otherwise we get the existing condition in most industries. If the procedures are followed exactly, work slows to an unacceptable level. In order to perform properly, it is necessary to violate the procedures. Workers get fired for lack of efficiency, which means they are subtly, unofficially encouraged to violate the procedures. Unless something goes wrong, in which case they can be fired for failure to follow the procedures.

Now look at the Navy. The apparent chaos, indecision, and arguments are not what they seem to be. The apparent chaos is a carefully honed system, tested and evolved over generations, that maximizes safety and efficiency even in the face of numerous unknowns, novel circumstances, and a wide range of skills and knowledge among the crew. Having everyone participate and question the actions serves several roles simultaneously. The very ambiguity, the continual questioning and debate, keeps everyone in touch with the activity, thereby providing redundant checks on the actions. This adds to the safety, for now it is more likely that errors will be detected before they have caused problems. The newer crew members are learning, and the public discussions among the other crew serve as valuable training exercises, training, mind you, not in some artificial, abstract fashion, but in real, relevant situations where it really matters. And by not punishing people when they speak out, question, or even bring the operations to a halt, the system encourages continual learning and performance enhancement. It makes for an effective, well-tuned team.

New crew members don't have the experience of older ones. This means they are not efficient, don't always know what to do, and perform slowly. They need a lot of guidance. The system automatically provides this constant supervision and coaching, allowing people to learn on the job. At the same time, because the minds of the new crew members are not yet locked into the routines, their questioning can sometimes reveal errors: they challenge the conventional mindset, asking whether the simple explanation of events is correct. This is the best way to avoid errors of misdiagnosis.

The continual challenge to authority goes against conventional wisdom and is certainly a violation of the traditional hierarchical management style. But it is so important to safety that the aviation industry now has special training in crew management, where the junior officers in the cockpit are encouraged to question the actions of the captain. In turn, the captain, who used to be thought of as the person in command, with full authority, never to be questioned, has had to learn to encourage crew members to question their every act. The end result may look less regular, but it is far safer.

The Navy's way of working is the safest, most sensible procedure. Accidents are minimized. The system is extremely safe. Despite the fact that the Navy undertakes dangerous operations during periods of rushed pace and high stress, there are remarkably few mishaps. If the Navy were to follow formal procedures and a strict hierarchy of rank, the result would very likely be an increase in the accident rate. Other industries would do well to copy this behavior. Fred Taylor would turn over in his grave. (But he would be efficient about it, without any wasted motion.)

 

 

Human Error

 

Machines, including computers, don't err in the sense that they are fully deterministic, always returning the same value for the same inputs and operations. Someday we may have stochastic or quantum computation, but even then, we will expect machines to follow precise laws of operation. When computers do err, it is either because a part has failed or because of human error, whether in design specification, programming, or construction. People are not fully deterministic: ask a person to repeat an operation, and the repetition is subject to numerous variations.

People do err, but primarily because they are asked to perform unnatural acts: to do detailed arithmetic calculations, to remember details of some lengthy sequence or statement, or to perform precise repetitions of actions. In the natural world, no such acts would be required: all are a result of the artificial nature of manufactured and invented artifacts. Perhaps the best illustration of the arbitrary, inelegant fit between human cognition and artificial demands, as opposed to its natural fit with natural demands, is the contrast between people's ability to learn programming languages and their ability to learn human language.

Programming languages are difficult to learn, and a large proportion of the population is incapable of learning them. Moreover, even the most skilled programmers make errors, and error finding and correction occupy a significant amount of a programming team's time and effort. Worse, programming errors are serious. In the best circumstances, they lead to inoperable systems. In the worst, they lead to systems that appear to work but produce erroneous results.

A person's first human language is so natural to learn that it is done without any formal instruction: people must suffer severe brain impairment to be incapable of learning language. Note that "natural" does not mean "easy": it takes ten to fifteen years to master one's native language. Second language learning can be excruciatingly difficult.

Natural language, unlike a programming language, is flexible, ambiguous, and heavily dependent on shared understanding, a shared knowledge base, and shared cultural experiences. Errors in speech are seldom important: utterances can be interrupted, restarted, even contradicted, with little difficulty in understanding. The result is that natural language communication is extremely robust.

Human error matters primarily because we followed a technology-centered approach in which it matters. A human-centered approach would make the technology robust, compliant, and flexible. The technology should conform to the people, not people to the technology.

Today, when faced with human error, the traditional response is to blame the human and institute a new training procedure: blame and train. But when the vast majority of industrial accidents are attributed to human error, it indicates that something is wrong with the system, not the people. Consider how we would approach a system failure due to a noisy environment: we wouldn't blame the noise; we would instead design a system that was robust in the face of noise.

This is exactly the approach that should be taken in response to human error: redesign the system to fit the people who must use it. This means to avoid the incompatibilities between human and machine that generate error, to make it so that errors can be rapidly detected and corrected, and to be tolerant of error. To "blame and train" does not solve the problem.

 

 

HUMANS & COMPUTERS AS COOPERATING SYSTEMS

 

Because humans and computers are such different kinds of systems, it should be possible to develop a symbiotic, complementary strategy for cooperative interaction. Alas, today's approaches are wrong. One major theme is to make computers more like humans. This is the original dream behind classical Artificial Intelligence: to simulate human intelligence. Another theme is to make people more like computers. This is how technology is designed today: the designers determine the needs of the technology and then ask people to conform to those needs. The result is an ever-increasing difficulty in learning the technology, and an ever-increasing error rate. It is no wonder that society exhibits an ever-increasing frustration with technology.

Consider the following attributes of humans and machines presented from today's machine-centered point of view:

 

The Machine-Centered View

People            Machines
Vague             Precise
Disorganized      Orderly
Distractible      Undistractible
Emotional         Unemotional
Illogical         Logical

Note how the humans lose: all the attributes associated with people are negative, all the ones associated with machines are positive. But now consider the attributes of humans and machines presented from a human-centered point of view:

 

The Human-Centered View

People                 Machines
Creative               Unoriginal
Compliant              Rigid
Attentive to change    Insensitive to change
Resourceful            Unimaginative

Now note how machines lose: all the attributes associated with people are positive, all the ones associated with machines are negative.

The basic point is that the two different viewpoints are complementary. People excel at qualitative considerations, machines at quantitative ones. As a result, for people, decisions are flexible because they follow qualitative as well as quantitative assessment, modified by special circumstances and context. For the machine, decisions are consistent, based upon quantitative evaluation of numerically specified, context-free variables. Which is to be preferred? Neither: we need both.

It's good that computers don't work like the brain. The reason I like my electronic calculator is that it is accurate: it doesn't make errors. If it were like my brain, it wouldn't always get the right answer. The very difference is what makes the device so valuable. I think about the problems and the method of attack. It does the dull, dreary details of arithmetic, or, in more advanced machines, of algebraic manipulation and integration. Together, we are a more powerful team than either of us alone.

The same principle applies to all our machines: the difference is what is so powerful, for together, we complement one another. However, this is useful only if the machine adapts itself to human requirements. Alas, most of today's machines, especially the computer, force people to use them on their terms, terms that are antithetical to the way people work and think. The result is frustration, an increase in the rate of error (usually blamed on the user as "human error" instead of on faulty design), and a general turning away from technology.

Will the interactions between people and machines be done correctly in 50 years? Might schools of computer science start teaching the human-centered approach that is necessary to reverse the trend? I don't see why not.

 

CHAPTER 7: NOTES

Being analog. Omar Wason suggested this title at the conference "Jerry's Retreat," Aug. 19, 1996. The instant I heard it I knew I wanted to use it, so I sought his permission which he graciously gave.

 

Sections of this chapter originally appeared in Norman, D. A. (1997). Why it's good that computers don't work like the brain. In P. J. Denning & R. M. Metcalfe (Eds.), Beyond calculation: The next fifty years of computing. New York: Copernicus/Springer-Verlag. Many of the ideas made their original appearance in Norman, D. A. (1993). Things that make us smart. Reading, MA: Addison-Wesley.

I apologize to readers of my earlier books and papers for this repetition, but what can I say: the argument fits perfectly here, so in it goes.

It requires a biblical name to fool you. Erickson, T. A. & Mattson, M. E. (1981). From words to meaning: A semantic illusion. Journal of Verbal Learning and Verbal Behavior, 20, 540-552. The paper that started the quest for understanding why people have trouble discovering the problem with the question, "How many animals of each kind did Moses take on the Ark?"

Reder and Kusbit followed up on the work and present numerous other examples of sentences that show this effect. Reder, L. M. & Kusbit, G. W. (1991). Locus of the Moses illusion: Imperfect encoding, retrieval, or match? Journal of Memory and Language, 30, 385-406.

 

Humans versus computers. This section originally appeared as Norman, D. A. (1997). Why it's good that computers don't work like the brain. In P. J. Denning & R. M. Metcalfe (Eds.), Beyond calculation: The next fifty years of computing. New York: Copernicus/Springer-Verlag.

 

The one best way. Kanigel, R. (1997). The one best way: Frederick Winslow Taylor and the enigma of efficiency. New York: Viking.

The question is at what price. For an excellent, in-depth analysis of the price paid in the name of efficiency, see Rifkin, J. (1995). The end of work: The decline of the global labor force and the dawn of the post-market era. New York: G. P. Putnam's Sons.

His book, The principles of scientific management. Taylor, F. W. (1911). The principles of scientific management. New York: Harper & Brothers. (See note 7.)

Taylor stated that it was necessary to reduce all work to the routine. Taylor's work is described well in three books. First, there is Taylor's major work: Taylor, F. W. (1911). The principles of scientific management. New York: Harper & Brothers.

Second, there is the masterful and critical biography of Taylor, one that illustrates the paradox between what Taylor professed and how he himself lived and acted: Kanigel, R. (1997). The one best way: Frederick Winslow Taylor and the enigma of efficiency. New York: Viking.

Finally, there is Rabinbach's masterful treatment of the impact of changing views of human behavior, the rise of the scientific method (even when it wasn't very scientific), and the impact of Taylor not only on modern work, but on political ideologies as well, especially Marxism and Fascism: Rabinbach, A. (1990). The human motor: Energy, fatigue, and the origins of modernity. New York: Basic Books.

Also see "Taylorismus + Fordismus = Amerikanismus," Chapter 6 of Hughes, T. P. (1989). American genesis: A century of invention and technological enthusiasm, 1870&emdash;1970. New York: Viking Penguin.

 

A repair crew disconnects a pump from service in a nuclear power plant. This is an oversimplified account of some of the many factors in the Three Mile Island nuclear power accident. See Kemeny, J. G., et al. (1979). Report of the President's Commission on the Accident at Three Mile Island. New York: Pergamon. Rubenstein, E. (1979). The accident that shouldn't have happened. IEEE Spectrum, 16 (11, November), 33-42.

 

A hospital x-ray technician enters a dosage for an x-ray machine, then realizes it is wrong. See Appendix A, Medical devices: The Therac-25 story, in Leveson, N. G. (1995). Safeware: System safety and computers. Reading, MA: Addison-Wesley. This book also includes nice appendices on the Therac-25 story (x-ray overdosage), the Bhopal chemical disaster, the Apollo 13 incident, the DC-10, the NASA Challenger, and the various nuclear power industry problems, including Three Mile Island and Chernobyl.

 

There are better ways of developing software. See Nancy Leveson's book Safeware: System safety and computers. Reading, MA: Addison-Wesley (1995). For a scary discussion of the failures of system design, see Neumann, P. (1995). Computer-related risks. Reading, MA: Addison-Wesley.

 

If the Navy were to follow formal procedures and a strict hierarchy of rank, the result would very likely be an increase in the accident rate. See Hutchins's analysis of crew training and error management in ship navigation: Hutchins, E. (1995). Cognition in the wild. Cambridge, MA: MIT Press. See also La Porte, T. R. & Consolini, P. M. (1991). Working in practice but not in theory: Theoretical challenges of high-reliability organizations. Journal of Public Administration Research and Theory, 19-47.

These issues are well treated by Robert Pool, both in his book and in an excerpt in the journal Technology Review:

Pool, R. (1997). Beyond engineering: How society shapes technology. New York: Oxford University Press. Pool, R. (1997). When failure is not an option. Technology Review, 100 (5), 38-45. (Also see http://web.mit.edu/techreview/).

Attributes of humans and machines taken from today's machine-centered point of view and a human-centered point of view. From Norman, D. A. (1993). Things that make us smart. Reading, MA: Addison-Wesley.