got the greatest number of positives and the least number of negatives.”
This analytical, emotionless notion of ideal decision making was codified into the “rational agent” model of economic theory. The model consumer or investor, it reckoned, would somehow have access to all possible information about the market and would be able to instantly distill it all and make the perfect choice. Shockingly, real markets, and real investors and consumers, don’t work this way.
But even when recognition came that omniscient rationality was not the right model to use, it seemed that economists were more interested in talking about this as a shortfall than a boon. Consider 2008’s Predictably Irrational, in which behavioral economist Dan Ariely argues against the rational-agent model by highlighting the various human behaviors that don’t accord with it. A victory for re-assimilating the various neglected and denigrated capacities of the self? A glance at the jacket blurbs is enough to produce a resounding no, revealing the light in which we are meant to read these deviations from economic theory. “How we can prevent being fooled,” says Jerome Groopman, Recanati Professor of Medicine at Harvard Medical School. “The weird ways we act,” says business writer James Surowiecki. “Foibles, errors, and bloopers,” says Harvard psychologist Daniel Gilbert. “Foolish, and sometimes disastrous, mistakes,” says Nobel laureate in economics George Akerlof. “Managing your emotions … so challenging for all of us … can help you avoid common mistakes,” says financial icon Charles Schwab. 12
Now, some of what passes for “irrationality” in traditional “rational” economics is simply bad science, cautions Daniel Kahneman, Nobel laureate from Princeton. For instance, given a choice between a million dollars and a 50 percent chance of winning four million dollars, the “rational” choice is “obviously” the latter, whose “expected outcome” is two million dollars, double the first offer. Yet most people say they would choose the former—fools! Or are they? It turns out to depend on how wealthy you are: the richer you are, the more inclined you are toward the gamble. Is this because wealthier people are (as demonstrated by being rich) more logical? Is this because less wealthy people are blinded by an emotional reaction to money? Is it because the brain is, tragically, more averse to loss than excited by gain? Or perhaps the wealthy person who accepts the gamble and the less wealthy person who declines it are, in fact, choosing completely appropriately in both cases. Consider: a family deep into debt and about to default on their home could really use that first million; the added three million would be icing on the cake but wouldn’t change much. The “quadruple or nothing” offer just isn’t worth betting the farm—literally. Whereas for a billionaire like Donald Trump, a million bucks is chump change, and he’ll probably take his chances, knowing the odds favor him. The two choose differently—and both choose correctly.
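One standard way economists make this intuition precise is expected utility with diminishing marginal returns on wealth. The logarithmic utility function and the wealth levels in the sketch below are illustrative assumptions of mine, not figures from Kahneman or this chapter; it is a minimal sketch of why both choices can be rational, not anyone’s official model.

```python
import math

# Illustrative (assumed) model: an agent maximizing expected log-utility of
# final wealth, a common stand-in for diminishing marginal returns on money.

def expected_log_utility(wealth, outcomes):
    """Expected log-utility over (probability, payoff) pairs added to current wealth."""
    return sum(p * math.log(wealth + payoff) for p, payoff in outcomes)

def prefers_gamble(wealth):
    """Sure $1M versus a 50/50 shot at $4M (or nothing), for a given starting wealth."""
    sure_million = expected_log_utility(wealth, [(1.0, 1_000_000)])
    coin_flip = expected_log_utility(wealth, [(0.5, 4_000_000), (0.5, 0)])
    return coin_flip > sure_million

# Hypothetical wealth levels, chosen only to show where the preference flips.
for wealth in (10_000, 100_000, 1_000_000, 10_000_000, 1_000_000_000):
    choice = "coin flip" if prefers_gamble(wealth) else "sure million"
    print(f"starting wealth ${wealth:>13,}: takes the {choice}")
```

Under this assumed utility curve the indebted family takes the sure million and the billionaire takes the coin flip, and each is maximizing its own expected utility.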
At any rate, examples like this one aside, the prevailing attitude seems clear: economists who subscribe to rational-choice theory and those who critique it (in favor of what’s known as “bounded rationality”) both think that an emotionless, Spock-like approach to decision making is demonstrably superior. We should all aspire to throw off our ape ancestry to whatever extent we can—alas, we are fallible and will still make silly emotion-tinged “bloopers” here and there.
This has been for centuries, and by and large continues to be, the theoretical mainstream, and not just economics but Western intellectual history at large is full of examples of the creature needing the computer. But examples of the reverse, of the computer needing the creature, have been much rarer and more marginal—until lately.
Baba Shiv says that as early as the 1960s and ’70s,