The Crazy Robot’s Rebellion

Meet Linda:

Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

Now, rank these possible descriptions of Linda by how likely they are:

  • Linda is a teacher in elementary school.
  • Linda works in a bookstore and takes yoga classes.
  • Linda is active in the feminist movement.
  • Linda is a psychiatric social worker.
  • Linda is a member of the League of Women Voters.
  • Linda is a bank teller.
  • Linda is an insurance salesperson.
  • Linda is a bank teller and is active in the feminist movement.

When Amos Tversky and Daniel Kahneman gave this test to students, the students ranked the last possibility, “feminist bank teller,” as more likely than the “bank teller” option.1

But that can’t possibly be correct. The probability of Linda being a bank teller can’t be less than the probability of her being a bank teller and a feminist.

This is my “Humans are crazy” Exhibit A: The laws of probability theory dictate that as a story gets more complicated, and depends on the truth of more and more claims, its probability of being true decreases. But for humans, a story often seems more likely as it is embellished with details that make it more vivid and compelling: “Linda can’t be just a bank teller; look at her! She majored in philosophy and participated in antinuclear demonstrations. She’s probably a feminist bank teller.”
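
To make the underlying rule explicit (a brief formal aside; the notation is mine, not the study’s): for any two claims A and B,

\[
P(A \wedge B) \;=\; P(A)\,P(B \mid A) \;\le\; P(A),
\]

since a conditional probability can never exceed 1. However well the feminist detail fits Linda’s biography, “bank teller and feminist” cannot be more probable than “bank teller” alone.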

How else are humans crazy? After decades of research and thousands of experiments, let us count the ways . . .

  • We wouldn’t pay much more to save two hundred thousand birds than we would to save two thousand birds.2 Our willingness to pay does not scale with the size of potential impact. Instead of making decisions with first-grade math, we imagine a single drowning bird and then give money based on the strength of our emotional response to that imagined scenario. (Scope insensitivity, affect heuristic.)
  • Spin a wheel of fortune rigged to stop at either ten or sixty-five, then guess what percentage of African nations are in the U.N. Your guess will be hugely affected by an irrelevant factor—which number the wheel landed on—merely because your brain was primed with that number.3 In short, “any random thing that happens to you can hijack your judgment and personality for the next few minutes.” (Anchoring, priming.)
  • When we hear about two recent plane crashes, we become less likely to fly, even though it’s not the probability of a plane crash that has increased, merely its availability to our memory.4 In general, we often judge how probable something is by how easily instances of it come to mind. (Availability heuristic.)
  • We draw different conclusions from the same information depending on how that information is presented. (Framing effects.)
  • We start with a conclusion and then look for evidence to support it, rather than starting with a hypothesis and looking for evidence that might confirm or disconfirm it. (The bottom line, confirmation bias, rationalization.)
  • We are creatures of habit. We do mostly what we’ve done before, rather than taking actions aimed at maximizing the probabilistic achievement of our goals. (Habits, cached selves.)

Perhaps the scariest bias is this one:

  • The sophistication effect: The most knowledgeable people, because they possess greater ammunition with which to shoot down facts and arguments incongruent with their own position, are actually more prone to several of these biases.5

Because of this, learning about biases can hurt you if you’re not careful. As Michael Shermer says, “Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons.”6

There are many other examples of human insanity. They can be amusing at times, but things get sad when you think about how these biases lead us to give to charity in highly inefficient ways. Things get scary when you think about how they affect our political process and our engagement with existential risks.

And if you study the causes of our beliefs and motivations long enough, another realization hits you.

“Oh my God,” you think. “It’s not that I have a rational little homunculus inside that is being ‘corrupted’ by all these evolved heuristics and biases layered over it. No, the data are saying that the software program that is me just is heuristics and biases. I just am this kluge of evolved cognitive modules and algorithmic shortcuts. I’m not an agent designed to have correct beliefs and pursue explicit goals; I’m a crazy robot built as a vehicle for propagating genes without spending too much energy on expensive thinking neurons.”

The good news is that we are robots who have realized we are robots, and by way of rational self-determination we can stage a robot’s rebellion against our default programming.7

But we’re going to need some military-grade rationality training to do so.

Or, as the experts call it, “debiasing.” Researchers haven’t just been discovering and explaining the depths of human insanity; they’ve also been testing methods that can help us improve our thinking, clarify our goals, and give us power over our own destinies.

Different biases are mitigated by different techniques, but one of the most useful debiasing interventions is this: Consider the opposite.

By necessity, cognitive strategies tend to be context-specific rules tailored to address a narrow set of biases . . . This fact makes the simple but general strategy of “consider the opposite” all the more impressive, because it has been effective at reducing overconfidence, hindsight biases, and anchoring effects. . . . The strategy consists of nothing more than asking oneself, “What are some reasons that my initial judgment might be wrong?” The strategy is effective because it directly counteracts the basic problem of association-based processes—an overly narrow sample of evidence—by expanding the sample and making it more representative. Similarly, prompting decision makers to consider alternative hypotheses has been shown to reduce confirmation biases in seeking and evaluating new information.8

Another useful skill is that of cognitive override:

  1. Notice when you’re speaking or acting on an intuitive judgment.
  2. If the judgment is important, override your intuitive judgment and apply the laws of thought instead. (This requires prior training in algebra, logic, probability theory, decision theory, etc.9)

To see this one in action, consider the following problem:

A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?

Most people give the first response that comes to mind: ten cents.10 But elementary algebra shows this can’t be right: the bat would then have to cost $1.10, for a total of $1.20. To get this one right, you have to notice your intuitive answer coming out and say, “No! Algebra.” And then do the algebra.
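
Spelled out (a worked version of the algebra the passage asks for; the variable x is my notation, standing for the price of the ball in dollars):

\[
x + (x + 1.00) = 1.10 \quad\Rightarrow\quad 2x = 0.10 \quad\Rightarrow\quad x = 0.05.
\]

The ball costs five cents, and the bat costs $1.05.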

Those who really want to figure out what’s true about our world will spend thousands of hours studying the laws of thought, studying the specific ways in which humans are crazy, and practicing teachable rationality skills so they can avoid fooling themselves.

And then, finally, we may be able to stage a robot’s rebellion, figure out how the world works, clarify our goals, and start winning more often. Maybe we’ll even be able to navigate an intelligence explosion successfully.

* * *

1Amos Tversky and Daniel Kahneman, “Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment,” Psychological Review 90, no. 4 (1983): 293–315, doi:10.1037/0033-295X.90.4.293.

2William H. Desvousges et al., Measuring Nonuse Damages Using Contingent Valuation: An Experimental Evaluation of Accuracy, technical report (Research Triangle Park, NC: RTI International, 2010), doi:10.3768/rtipress.2009.bk.0001.1009.

3Amos Tversky and Daniel Kahneman, “Judgment Under Uncertainty: Heuristics and Biases,” Science 185, no. 4157 (1974): 1124–1131, doi:10.1126/science.185.4157.1124.

4Maia Szalavitz, “10 Ways We Get the Odds Wrong,” Psychology Today, January 1, 2008, http://www.psychologytoday.com/articles/200712/10-ways-we-get-the-odds-wrong.

5Charles S. Taber and Milton Lodge, “Motivated Skepticism in the Evaluation of Political Beliefs,” American Journal of Political Science 50, no. 3 (2006): 755–769, doi:10.1111/j.1540-5907.2006.00214.x.

6Michael Shermer, “Smart People Believe Weird Things,” Scientific American 287, no. 3 (2002): 35, doi:10.1038/scientificamerican0902-35.

7Keith E. Stanovich, “Higher-Order Preferences and the Master Rationality Motive,” Thinking and Reasoning 14, no. 1 (2008): 111–117, doi:10.1080/13546780701384621.

8Richard P. Larrick, “Debiasing,” in Blackwell Handbook of Judgment and Decision Making, ed. Derek J. Koehler and Nigel Harvey, Blackwell Handbooks of Experimental Psychology (Malden, MA: Blackwell, 2004), 316–338.

9Geoffrey T. Fong, David H. Krantz, and Richard E. Nisbett, “The Effects of Statistical Training on Thinking About Everyday Problems,” Cognitive Psychology 18, no. 3 (1986): 253–292, doi:10.1016/0010-0285(86)90001-0.

10Daniel Kahneman, “A Perspective on Judgment and Choice: Mapping Bounded Rationality,” American Psychologist 58, no. 9 (2003): 697–720, doi:10.1037/0003-066X.58.9.697.