
November–December 2009, Volume 97, Number 6, Page 510
DOI: 10.1511/2009.81.510
THE BOUNDS OF REASON: Game Theory and the Unification of the Behavioral Sciences. Herbert Gintis. xviii + 286 pp. Princeton University Press, 2009. $35.
Humans are social animals, and so were their ancestors, for millions of years before the first campfires lighted the night. But only recently have humans come to understand the mathematics of social interactions. The mathematician John von Neumann and the economist Oskar Morgenstern were the first to tackle the subject, in a book they were planning to call A General Theory of Rational Behavior. By the time it was published in 1944, they had changed the title to Theory of Games and Economic Behavior, an inspired move. The book postulated, as did all follow-up texts on game theory for generations, that players are rational—that they can figure out the payoff of all possible moves and always choose the most favorable one.
Three decades later, game theory got a new lease on life through the work of biologists William D. Hamilton and John Maynard Smith, who used it to analyze biological interactions, such as fights between members of the same species or parental investment in offspring. This new “evolutionary game theory” was no longer based on axioms of rationality. Anatol Rapoport, one of the pillars of classical game theory, characterized it as “game theory without rationality.” Herbert Gintis was among the first economists attracted by the new field, and when, 10 years ago, I wrote a review of his textbook Game Theory Evolving, I described it as “testimony of the conversion of an economist.” Gintis has not recanted in the meantime—indeed, a second edition of that book just appeared. But a new companion volume, titled The Bounds of Reason, shows that he certainly has not forgotten his upbringing in the orthodox vein.
There is, of course, no contradiction between the two game theories. As with all mathematical theories, their aim is to rigorously derive the consequences of well-defined assumptions that are taken for granted. Just as there are geometries that use the parallel axiom and others that do not, so there are game theories that employ rationality axioms and others that do not. Mathematically, they are equally respectable. It is only when they are applied to the real world that tensions arise.
In his preface, Gintis compares the history of physics with that of economics. Physical theories have regularly stumbled against experiments revealing “anomalies” that falsified the theories and led to their replacement. Compared with this intensive dialogue between theory and experiment, most textbooks of classical economics long remained singularly fact-free. This state of affairs has changed recently: Experimental economics, and in particular behavioral game theory, has flourished in the past few decades, and The Bounds of Reason contains an impressive catalog of empirical findings. In the light of conventional rationality assumptions, many of those findings appear to be “anomalies.” They are based on intriguing thought experiments exploring the many facets of decision making.
The most venerable is the Prisoner’s Dilemma. Imagine a game with two players, each of whom has to decide whether to send a gift to the other, and whose decisions have to be made simultaneously. Sending the gift costs the donor $5 and provides the recipient with $15. If both send a gift, they each become $10 richer. But whatever his or her coplayer does, a player profits more by not sending a gift. Two rational agents who want to maximize their income will end up getting nothing. The same happens even if one player is allowed to wait for the other’s decision before making his or her own.
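The arithmetic is easy to verify. Here is a minimal sketch in Python (my own illustration, not the book's; the function name is invented) encoding the payoffs just described:

```python
# Gift-exchange Prisoner's Dilemma as described above: sending costs
# the donor $5 and is worth $15 to the recipient.

def payoff(i_send: bool, other_sends: bool) -> int:
    """Dollar payoff to one player, given both players' decisions."""
    return (15 if other_sends else 0) - (5 if i_send else 0)

# Whatever the coplayer does, not sending pays $5 more ...
for other_sends in (True, False):
    assert payoff(False, other_sends) == payoff(True, other_sends) + 5

# ... yet mutual gifts leave each player $10 richer than mutual refusal.
print(payoff(True, True), payoff(False, False))  # 10 0
```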
The Trust Game is closely related. A coin toss decides which of the two players is the Investor and which the Trustee. The Investor can send a gift to the Trustee. The Trustee can then return part of the sum to the Investor. Thus both players can profit from the exchange. But rational players are stuck at a dead end: no gift, no return. Real players frequently overcome the stalemate. They trust, and honor trust, and mutually profit from their interaction.
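The review leaves the exchange rate unstated; in the standard laboratory version of the game the amount sent is tripled in transit, which is what lets both parties profit. A minimal sketch under that assumption, with an invented endowment of $10 per player:

```python
# Trust Game, assuming (as in the standard laboratory version) that the
# amount the Investor sends is tripled before it reaches the Trustee.
# The $10 endowment is an illustrative choice, not from the book.

def trust_game(sent: float, returned: float, endowment: float = 10.0):
    """Final holdings of (Investor, Trustee)."""
    assert 0 <= sent <= endowment and 0 <= returned <= 3 * sent
    return endowment - sent + returned, endowment + 3 * sent - returned

print(trust_game(0, 0))    # (10.0, 10.0): the rational dead end, no gift, no return
print(trust_game(10, 15))  # (15.0, 25.0): trust honored, both profit
```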
Another example of some entertainment value is the Traveler’s Dilemma. Two persons have to separately file a claim of between $10 and $100. The rules prescribe that if they both claim the same amount, both will get it; but if they claim different amounts, the lower amount will be judged to be valid and paid out. As an incentive, the more modest claimant receives $2 more, and the other player correspondingly $2 less. Two innocent players would immediately claim $100 and get it. But a smart player bent on optimizing might hope to get one dollar more, by merely claiming $99. Anyone who fears that his or her coplayer might think along these same lines should not claim $100, because this would yield only $97. But if both claim $99, that’s what they get. They might think it better to outsmart the other by claiming $98, and pocket $100. . . . And from then on, the argument, an instance of what is termed backward induction, leads inexorably all the way down to the minimum claim of $10. Both players are caught in a trap: Asking for the minimum is the only solution that is consistent in the sense that no player can gain by unilaterally deviating from it.
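The backward-induction conclusion can be checked by brute force. A short sketch (mine, not the book's) scans every pair of claims for the consistency property just described:

```python
# Traveler's Dilemma: claims run from $10 to $100. Equal claims are paid
# in full; otherwise the lower claim is paid, with a $2 bonus for the
# lower claimant and a $2 penalty for the higher one.

def payoff(mine: int, other: int) -> int:
    if mine == other:
        return mine
    return min(mine, other) + (2 if mine < other else -2)

def is_consistent(a: int, b: int) -> bool:
    """True if neither player can gain by unilaterally deviating."""
    return (payoff(a, b) == max(payoff(x, b) for x in range(10, 101)) and
            payoff(b, a) == max(payoff(x, a) for x in range(10, 101)))

pairs = [(a, b) for a in range(10, 101) for b in range(10, 101)
         if is_consistent(a, b)]
print(pairs)  # [(10, 10)]: only the minimum claim survives
```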
Needless to say, real people rarely reach this solution in experiments: They may take a few steps down the ladder, but they stop well above $90.
Another classical example is known as the Ellsberg Paradox. Consider an urn containing 30 balls that can be red, black or white. You know that 10 balls are red. You do not know how many balls are black or how many are white. You now are offered the opportunity to bet on a color: If it is drawn, you will receive $100; if a different color is drawn, you will receive nothing. Most people bet on red, and this seems hardly surprising. However, if asked to bet on any combination of two colors instead, most people prefer to bet on (black, white). But those who initially bet on red seemingly believed that red was more likely than either black or white; so shouldn’t they now be betting on (red, black) or (red, white)? Their behavior cannot consistently be interpreted as always opting for the higher odds; rather, it reflects a deep-seated aversion to uncertainty.
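The inconsistency is easy to exhibit numerically. The sketch below (my own, not the book's) runs through every possible composition of the urn and shows that no belief about the unknown balls justifies both observed choices under expected-utility reasoning:

```python
# Ellsberg urn: 30 balls, 10 red; black + white = 20, split unknown.
# Check every possible composition for beliefs consistent with both bets.

consistent = []
for black in range(21):                       # black balls: 0..20
    p_red, p_black, p_white = 10/30, black/30, (20 - black)/30
    favors_red = p_red >= p_black and p_red >= p_white
    # (black, white) beats (red, black) iff p_white > p_red,
    # and beats (red, white) iff p_black > p_red.
    favors_bw = p_white > p_red and p_black > p_red
    if favors_red and favors_bw:
        consistent.append(black)

print(consistent)  # []: no belief about the urn justifies both bets
```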
In another vein, most people prefer a price of $10 with a $1 discount to a price of $8 with a $1 surcharge, although both come to $9. Such framing effects are ubiquitous.
Under the relentless impact of these empirical findings, the assumption of rationality has been modified in various ways. From its glorious status as an unreachable idealization, it has been downgraded to “bounded rationality.” This view accepts that perfect optimization is, in general, beyond human reach. But bounded rationality means different things to different people, depending on which parts of the ideal they want to salvage. This has given rise to interesting theoretical investigations into the logical dependencies of diverse blends. Nevertheless, many of those who study human behavior doubt that bounded rationality provides the best account of the empirical findings. The situation is reminiscent of the quandaries of early astronomers, who discovered, to their dismay, that planets did not move in perfect circles as presumed. They came to terms with this “bounded circularity” by describing the orbits as epicycles—circles within circles, and so on—and ingeniously deriving sophisticated approximations, at the cost of an explosion in complexity.
The version of rational agents proposed by Gintis gets rid of the assumption that players enjoy “common knowledge of rationality.” Players are all rational, but they need not believe that all players are rational. This does help to avoid the trap of backward induction. On the other hand, it does not explain why people are so sensitive to framing or so inept at lotteries and bets. Gintis, who claims that “the bounds of reason are . . . not the irrational, but the social,” dismisses anomalies involving just one person as “performance errors.” Such a fix can explain everything and nothing, and it therefore threatens to weaken his cause.
For Gintis champions a cause, one that can be wholeheartedly subscribed to: namely, to promote game theory as an indispensable instrument in modeling human behavior. He rightly points out the wide discrepancies between the approaches in separate branches of the behavioral sciences. For example, economics, sociology, anthropology and social psychology use vastly different premises in studying social behavior and organization. Such academic traditions (based, in Gintis’s words, on “the virtually impassable feudal organization of the behavioral disciplines”) are not likely to vanish overnight, but behavioral game theory can offer a tool for them all. In fact, we see this happening already.
Here are some examples: The behavior of participants in the Trust Game and similar games of cooperation is a superb “microsocial” indicator for what sociologists call the Rule of Law (the general respect for rules and institutions in a society). Game-theoretic models help explain the adaptive value of humankind’s nearly ubiquitous belief in supernatural agents. Ingenious experiments uncover our often subliminal concern with being watched, which can be triggered by the mere image of an eye and greatly boosts cooperative behavior. The widely varying gift-giving traditions in small-scale societies, with all their attendant complexities concerning obligations and status, are dissected by means of simple economic games. And so on.
The academic tribes, however, will hesitate to accept the gift of game theory from economists if they are told that it comes with the rational actor model. Not everyone wants to shoulder the obligations that model entails. Humans have no doubt developed the faculty of reasoning to a unique degree; but our decisions are also guided by other factors, such as habits and customs, passions, emotions and “animal spirits” (to use the expression of economist John Maynard Keynes). Many actions do not fall under the heading of rational behavior as that term is commonly understood (although it must be admitted that modern economists’ definitions of rationality are as far removed from the everyday use of that word as modern theology’s concepts of divinity are from the average layperson’s idea of the Good Lord).
Psychologists, for instance, analyze decisions in terms of (often unconscious) cues and heuristics, and are not likely to switch to the paradigm of Beliefs, Preferences, Constraints and Expected Utilities that underlies the rational actor model. Why should they? In evolutionary game theory, they can enjoy the full panoply of behavioral experiments without the restraints imposed by the loitering presence of the rational actor. Strategies (that is, programs of behavior) need not be the product of rational decisions. They can be copied, for instance, through learning or inheritance.
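A minimal illustration of the difference (my own sketch, not from the book): return to the gift game above and let agents blindly copy whoever is currently earning more. No one reasons at all, yet imitation alone drives the population to the mutual-defection outcome that rational choice predicts:

```python
import random

# Payoffs from the gift game above: "C" sends the $5 gift, "D" does not.
PAY = {("C", "C"): 10, ("C", "D"): -5, ("D", "C"): 15, ("D", "D"): 0}

def average_payoff(s: str, pop: list) -> float:
    """Average payoff of strategy s when matched against the population."""
    return sum(PAY[(s, t)] for t in pop) / len(pop)

def imitation_step(pop: list) -> None:
    """A random agent copies a random model if the model earns more."""
    i, j = random.randrange(len(pop)), random.randrange(len(pop))
    if average_payoff(pop[j], pop) > average_payoff(pop[i], pop):
        pop[i] = pop[j]

pop = ["C"] * 50 + ["D"] * 50
for _ in range(5000):
    imitation_step(pop)
print(pop.count("C"), pop.count("D"))  # typically 0 100: defection spreads
```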
Interestingly, Gintis stresses right from the start that game theory is central to understanding the dynamics of all sorts of life forms, and he touts it as “the universal lexicon of life.” Because it applies to primates, birds, lizards, plants and even bacteria, game theory must be able to do without the rational actor model. Needless to say, in applications to humans our very special cognitive and communicative faculties come into play.
But in The Bounds of Reason, Gintis allows only a marginal role for evolutionary game theory. Surprisingly, he hardly mentions the pioneering work of Robert Trivers, William D. Hamilton or Robert M. Axelrod. This is a pity, because what these authors have to say about human nature has had a deep impact on most behavioral sciences. In the tradition of Adam Smith, evolutionary biologists explain cooperative behavior by long-term self-interest that is ultimately grounded in reproductive success. In particular, it can be advantageous to forfeit an immediate material gain if this increases one’s reputation and thus promises to confer a higher value in the market for collaborators. The corresponding cost-benefit calculation need not be conscious, let alone rational; it can be mediated by heuristics based on emotional responses such as shame, sympathy or anger and possibly rationalized after the act.
Gintis embraces another approach, explaining cooperation by a human preference for what he terms character virtues (such as honesty, trustworthiness or fairness). But every behavior can be interpreted as a preference for some virtue. Short of providing an ultimate reason for the preference (such as that it promotes long-term self-interest), this approach has as little explanatory power as Gintis’s statement that “the increased incidence of prosocial behaviors is precisely what permits humans to cooperate effectively in groups.” Likewise, Gintis postulates that humans have a genetic predisposition to follow social norms even when it is costly to do so. But neither the evolution of the predisposition nor the emergence of norms is explained. Many game theorists have attempted to describe collective phenomena, and in particular social norms or social institutions, by the actions of the individuals involved. Gintis, however, rejects methodological individualism—the basis for such an approach—on the grounds that it is incompatible with the characteristics of rational agents. But the incompatibility cuts both ways.
The Bounds of Reason appears as two books in one. One part develops an epistemic theory of the rational actor as an alternative to what is provided by classical game theory, and the other part is a spirited plea to use behavioral game theory as a unifying tool in all behavioral sciences. Both objectives are highly valuable, but combining them creates friction. Friction creates heat, and Gintis, who thrives gleefully on controversial issues, may be enjoying the prospect of heated discussions.
Karl Sigmund is a professor of mathematics at the University of Vienna. His interests include game dynamics (replicator dynamics, adaptive dynamics), game-theoretic models for the evolution of cooperation, experimental economics, and the history of the Vienna Circle. He is the author of, among other books, Games of Life: Explorations in Ecology, Evolution and Behaviour (Oxford University Press, 1993) and The Calculus of Selfishness (Princeton University Press, 2009).