
September-October 2010
Volume 98, Number 5
Page 428
DOI: 10.1511/2010.86.428
THE INVISIBLE GORILLA: And Other Ways Our Intuitions Deceive Us. Christopher Chabris and Daniel Simons. xiv + 306 pp. Crown, 2010. $27.
In 1999 Christopher Chabris and Daniel Simons carried out an experiment in which they instructed subjects to watch a one-minute video of a basketball game and to count the number of passes made by the team wearing white. In the middle of the video a woman wearing a gorilla costume walks into the scene, stops, faces the camera, thumps her chest, and walks off. Astonishingly, half the subjects watching the video did not see the gorilla. Moreover, when asked afterward, “Did you notice a gorilla?” they found it hard to believe that they had missed it, and they were stunned when they watched the video a second time and saw it. Some subjects accused the experimenters of switching the videos. Later experiments using a device called an eye tracker demonstrated that some of the subjects who didn’t see the gorilla were in fact looking straight at it for a second at a time. The findings were published in the journal Perception, and the experiment became famous; it was featured on Dateline NBC and discussed in an episode of CSI: Crime Scene Investigation. Of course, misdirection of this kind has been used in magic tricks for millennia, but I suspect that even Houdini would have been surprised that one can make a gorilla in center stage invisible to half of all observers simply by asking them to count basketball passes.
Now Chabris and Simons have written a book called The Invisible Gorilla, which takes as its theme two kinds of errors: gaps of various kinds in people’s cognitive abilities, and the inability of people to recognize those gaps or even to believe in their existence. Each of the six chapters analyzes a different category of “illusion”: the illusions of attention, memory, confidence, knowledge, cause and potential. The book, which is very entertaining and readable, is a pleasant mixture of experimental studies and anecdotal information. Many of the anecdotes are derived from criminal cases, in which the reliability of observation and of memory are issues of paramount importance.
The analyses in the first three chapters are of great practical and social significance. In chapter 1, the authors relate the illusion of attention to all kinds of situations in which someone should have seen something but didn’t, because his or her mind was elsewhere—for example, accidents involving car drivers using cell phones, cases in which car drivers failed to see motorcycles and drove into them, and a 2001 incident in which the captain of a U.S. submarine failed to see a Japanese fishing vessel through a periscope and surfaced directly beneath it. The gorilla experiment sheds light on these incidents in two ways: It shows how easily a person whose mind is occupied elsewhere can miss things. And it demonstrates that in court cases, when person A says of person B, “I didn’t see him,” and B says of A, “He was looking straight at me,” both statements may be entirely true.
Chapter 2, on the illusion of memory, reveals that people’s memories are much less reliable than they suppose, and that the vividness of a memory and one’s certainty of it are no guarantee of its accuracy. The reliability of memory—which is, of course, of critical importance both in historical research and in criminal cases—has been the subject of a large body of psychological research, beginning with Elizabeth Loftus’s pioneering work in the mid-1970s. (It is astonishing, incidentally, that Loftus is mentioned only once, in passing, in this chapter.) The work of Loftus and others, which shows the unreliability of memory and its susceptibility to suggestion and other distorting influences, played a large part in bringing an end to the “recovered memory” fad of the 1980s and has had a positive impact on court proceedings by promoting a more accurate view of the reliability of eyewitness testimony. Everyone should read this chapter and be aware of this research, particularly since we all face the possibility of becoming involved in the legal system as jurors, witnesses or litigants.
The illusion of confidence, covered in chapter 3, is the belief that people who speak or act with great confidence have greater skill or knowledge or a more accurate memory than those who are less confident. Again, this is an almost universal belief. Patients become nervous if their doctor consults a reference book or expresses any uncertainty; jurors give more credence to witnesses who sound sure of what they are saying. The experimental evidence shows, however, that confidence is informative mainly within a single person: any particular person’s statements are generally more reliable when expressed more confidently, but comparing one person’s level of confidence with another’s is a poor basis for deciding which of them is more reliable, particularly if you don’t know much about either one’s baseline level of confidence. Some people are just confident in general, even in situations in which they lack expertise.
Several of the experiments discussed in the first three chapters are every bit as vivid, surprising and ingeniously designed as the gorilla experiment. In one, the authors staged a real-life “continuity error”: A confederate of the researcher poses with a map, looking lost, and asks unwitting subjects for directions. While he is speaking, two more confederates barge through with a large wooden door, briefly blocking the subject’s view of the person seeking directions. While the subject’s view is blocked, the directions-seeker is replaced by someone of a different height and build who is wearing different clothes. Nearly 50 percent of the subjects failed to notice that anything strange had happened. Other research shows that if the sex or race of the replacement is different from that of the person who is being replaced, subjects do take notice.
Unfortunately, even in these early chapters the quality of the evidence presented is very uneven, and some of the arguments made are weak. The authors discuss at some length the Joshua Bell stunt in which the violinist played his Stradivarius at a subway entrance and was ignored by all but a few of the passersby. The authors analyze this in terms of the illusion of attention, but there is no reason whatever to suppose that Bell was invisible to the commuters in the sense that the gorilla was invisible to the subjects watching the basketball video. All the Bell “experiment” proves is that commuters hurrying home are generally not at leisure to stop and listen to a violin recital. At most, it sheds a sad light on the pressures of life in the 21st century.
The last three chapters, on the illusions of knowledge, cause and potential, are much weaker. The prevalence of the illusion of knowledge—the fact that people, including experts, think they know more than they do—will not come as news to anyone, and the examples given are not particularly compelling. For instance, in the year 2000, leading geneticists were invited to predict the number of genes that would be found in the human genome and to enter a betting pool, with the total amount collected going to the person who made the best estimate. The estimates varied widely, ranging from 25,747 to 153,478. The current estimate is about 20,500, so no one was right, and some estimates were very far off the mark. It is hard to see what is surprising about this story or what point it illustrates. The scientists were invited to guess; the evidence at the time was inadequate, so their guesses were wrong. What of it? There is nothing to suggest that any of them had an unjustified degree of confidence in their guesses.
In general, there is a tendency in this book to assume that any error is a foolish mistake due to an illusion. Chabris and Simons say, for example, that the analysis of military turning points made by Dominic Johnson in the book Overconfidence and War shows that “almost any country that voluntarily initiates a war and then loses must have suffered from the illusion of confidence, since negotiation is always an option.” But that isn’t true. A country could have had good justification for thinking that it would probably win (or, more precisely, for expecting that the outcome of going to war would be more favorable than the outcome of not going to war), even if that’s not how things actually turned out for one reason or another. Losing is not necessarily a sign of a foolish decision, any more than winning is a sign of a wise one.
For an example that is more easily quantified and less fraught, consider a bridge player who must choose between two lines of play: Plan A, which has a 66 percent chance of success, and plan B, which has only a 50 percent chance of success. The uncertainty about whether to follow plan A or plan B derives from the fact that the player does not know the distribution of cards in his opponents’ hands. The rational choice is to follow the plan most likely to succeed, even though doing so will not guarantee success. So if the player chooses plan A and loses, and a postmortem reveals that if he had chosen B he would have won, he has not been irrational—just unlucky. And a player who chooses plan B is more likely to have miscalculated the odds of its success than to have been acting under the spell of some deep illusion. In more complex situations of this kind, in which it is mathematically impossible to compute the optimal solution in a reasonable amount of time, choosing a suboptimal strategy may not even be irrational; the player’s strategy may be perfectly reasonable, given that he cannot afford to spend centuries thinking about it.
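For readers who like to see the numbers play out, here is a minimal simulation of the bridge example (my own sketch, not anything from the book): under these assumptions the 66 percent line wins far more often in the long run, yet on roughly one deal in six plan A fails even though plan B would have happened to succeed.

```python
import random

# A minimal Monte Carlo sketch of the bridge example above (an illustration,
# not from the book). Plan A succeeds 66 percent of the time and plan B
# 50 percent; for simplicity the two plans' outcomes are treated as
# independent draws, standing in for the unknown lie of the opponents' cards.

TRIALS = 100_000
P_A, P_B = 0.66, 0.50

a_wins = b_wins = unlucky = 0
for _ in range(TRIALS):
    a_succeeds = random.random() < P_A
    b_succeeds = random.random() < P_B
    a_wins += a_succeeds
    b_wins += b_succeeds
    # The "unlucky, not irrational" deals: plan A fails even though
    # plan B would have happened to work.
    unlucky += (not a_succeeds) and b_succeeds

print(f"Plan A win rate: {a_wins / TRIALS:.3f}")                  # about 0.66
print(f"Plan B win rate: {b_wins / TRIALS:.3f}")                  # about 0.50
print(f"A fails where B would have won: {unlucky / TRIALS:.3f}")  # about 0.17
```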
I found the chapter on the illusion of cause disappointing. Reversing cause and effect, confusing cause with correlation and the like are indeed common and dangerous errors, but the treatment of them here is slipshod. The authors, wearing their “doctrinaire experimentalist” uniforms, proclaim that “the only way—let us repeat, the only way—to definitively test whether an association is causal is to run an experiment.” So, tainted wells don’t cause cholera? Tectonic movements don’t cause earthquakes? The difficulties in deducing causality from observational studies are profound and well known, but it often must be done, and ignoring this fact does a serious disservice to readers. The authors do go on to acknowledge that “Epidemiological studies . . . in many cases . . . are the best way to determine whether two factors are associated, and therefore have at least a potential causal connection.” But the whole thrust of the remainder of the chapter is that if you haven’t done an experiment, you can’t make a causal claim.
Much of the chapter is given over to decrying parents who refuse to have their children vaccinated because of the supposed relation to autism. Chabris and Simons characterize this decision entirely as an instance of the illusion of cause. Certainly, cognitive illusions play a role, but many other factors are involved, including a well-founded public distrust of the pharmaceutical industry.
It is not clear how the illusion of potential—the belief that there are easy ways to make oneself smarter—relates to the other illusions, or why the authors consider it important, except that they happen to have done research in the area. After all, one can spin out categories of self-flattering human error indefinitely: Consider the illusion of innocence (the belief that any fight with your spouse is entirely his or her fault—hard to beat in terms of eliciting anger when the cognitive error is pointed out), the illusion of power (the belief that the sports team you were rooting for lost because you left the TV to go get a snack) and so on.
As a warning to all of us to be guarded in trusting our own memories and to recognize that we may not always see what is right in front of our faces, and as a plea for us to be patient and understanding when others fail in these ways, the first three chapters of this book are immensely valuable. As a popular exposition of research in cognitive psychology, the book is much less satisfactory. The line between what people get wrong and what they get right, between what they see and what they miss, is very fine, and Chabris and Simons do not adequately engage with the question of what makes the difference.
The night my dog died, my wife was awakened from a deep sleep by the absence of the sound of his breathing. Such experiences are common. Something wrong on the periphery of vision will suddenly grab our attention. This is the reverse of the gorilla effect, and probably no less important, although it may be harder to elicit in the laboratory. The formula that people do not see what they are not expecting to see, repeated several times in this book, is far too crude.
There is a tendency in cognitive psychology to focus on errors—they are easily measured, they are vivid, and it is easy to get experimental subjects to commit them. It seems to me, however, that to focus too heavily on errors is itself to miss the gorilla. For what is really most amazing in these experiments, and most in need of explanation, is what is most quotidian: that people can see and understand a basketball game; that they can converse with a stranger and give him directions. The strange errors they sometimes make while carrying out these immensely complicated cognitive tasks, and their misconceptions about these errors, although fascinating, are ultimately of secondary importance.
Ernest Davis is a professor of computer science at New York University. He is the author of Representations of Commonsense Knowledge (1990) and Representing and Acquiring Geographic Knowledge (1986), both published by Morgan Kaufmann.