
Don't Flinch Away


Perhaps you’ve heard of the Japanese holdouts who refused to believe the reports of Japan’s surrender in 1945. One of them was Lt. Hiroo Onoda, who was in charge of three other soldiers on the island of Lubang in the Philippines. For more than a decade they lived off coconuts and bananas in the jungle, refusing to believe the war was over:

Leaflet after leaflet was dropped. Newspapers were left. Photographs and letters from relatives were dropped. Friends and relatives spoke out over loudspeakers. There was always something suspicious, so they never believed that the war had really ended.1

One by one, Onoda’s soldiers died or surrendered, but Onoda himself wasn’t convinced of Japan’s surrender until 1974, nearly thirty years after the war had ended. Later, he recalled:

Suddenly everything went black. A storm raged inside me. I felt like a fool. . . . What had I been doing for all these years?2

The student of rationality wants true beliefs, so that she can better achieve her goals. She will respond to evidence very differently than Onoda did. She will change her mind as soon as there is enough evidence to justify doing so (according to what she knows already and the laws of probability theory). Holding on to false beliefs can have serious consequences: say, thirty years of pooping in coconut shells.
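To make "enough evidence to justify" a bit more concrete, here is a minimal sketch of the kind of updating the laws of probability theory prescribe, written in Python. The hypothesis, the prior, the likelihoods, and the count of ten leaflets are all illustrative assumptions chosen for the Onoda example, not historical estimates; the point is only to show how repeated evidence should move a belief.

```python
# Minimal sketch of Bayesian belief updating (illustrative numbers only).
# Hypothesis H: "the war is over." Evidence E: another leaflet announcing surrender.

def update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) via Bayes' theorem."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence

# Assumed numbers: the believer starts out very confident the war is NOT over,
# and treats each leaflet as only modest evidence (it "could be a trick").
belief = 0.01                 # prior P(war is over)
p_leaflet_if_over = 0.8       # leaflets are likely if the war really ended
p_leaflet_if_not_over = 0.2   # a trick is possible, but less likely

for i in range(1, 11):        # ten leaflets, treated as independent (an idealization)
    belief = update(belief, p_leaflet_if_over, p_leaflet_if_not_over)
    print(f"after leaflet {i}: P(war is over) = {belief:.3f}")

# Even with a skeptical prior and modest evidence, repeated observations push
# the probability toward 1 -- unless one refuses to update at all.
```

Under these made-up numbers the belief climbs past 0.99 well before the tenth leaflet; refusing to move at all, as Onoda did, is not an option the probability calculus offers.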

Onoda had been trained under the kind of militaristic dogmatism that creates kamikaze pilots, and he had been made to believe that Japanese defeat was the worst outcome imaginable. Thus, he naturally flinched away from evidence that Japan had lost the war, for he was trained to believe this outcome was emotionally and mentally unthinkable.

One of the skills in the toolkit of rational thought is to notice when this “flinching away” happens and counteract it. We will need this skill as we begin to look at the implications of the ideas we’ve just discussed—that AI is inevitable if scientific progress continues, and that AI can be much more intelligent (and therefore more powerful) than humans are. When we examine the implications of these ideas, it will be helpful to understand what happens in human brains when they consider ideas with unwelcome implications. As we’ll see, it doesn’t take military indoctrination for the human brain to flinch away. In fact, flinching away is a standard feature of human psychology and goes by names like “motivated cognition” and “rationalization.”

As an extreme example, consider the creationist. He will accept dubious evidence for his own position, and be overly skeptical of evidence against it. He’ll seek out evidence that might confirm his position, but won’t seek out the strongest evidence against it. (Thus I encounter creationists who have never heard of endogenous retroviruses.)

Most of us do things like this every week in (hopefully) less obvious and damaging ways. We flinch away from uncomfortable truths and choices, saying, “It’s better to suspend judgment.” We avoid our beliefs’ real weak points. We start with a preconceived opinion and come up with arguments for it later. We unconsciously filter our available evidence so as to favor our current beliefs. We rebut weak arguments against our positions, and don’t try to consider the strongest possible arguments against our positions.

And usually these processes are automatic and subconscious. You don’t have to be an especially irrational person to flinch away like this. No, you’ll tend to flinch away by default without even noticing it, and you’ll have to exert conscious mental effort in order to not flinch away from uncomfortable facts. And I don’t mean chanting, “Do not commit confirmation bias. Do not commit confirmation bias. . . .” I mean something more effective.

What kinds of conscious mental effort can you exert to counteract the “flinching away” response?

One piece of advice is to leave yourself a line of retreat:

Last night I happened to be conversing with [someone who] had just declared (a) her belief in souls and (b) that she didn’t believe in cryonics because she believed the soul wouldn’t stay with the frozen body. I asked, “But how do you know that?” From the confusion that flashed on her face, it was pretty clear that this question had never occurred to her. . . .

“Make sure,” I suggested to her, “that you visualize what the world would be like if there are no souls, and what you would do about that. Don’t think about all the reasons that it can’t be that way, just accept it as a premise and then visualize the consequences. So that you’ll think, ‘Well, if there are no souls, I can just sign up for cryonics,’ or ‘If there is no God, I can just go on being moral anyway,’ rather than it being too horrifying to face. As a matter of self-respect you should try to believe the truth no matter how uncomfortable it is . . . [and] as a matter of human nature, it helps to make a belief less uncomfortable before you try to evaluate the evidence for it.”

Of course, you still need to weigh the evidence fairly. There are beliefs that scare me which I still reject, and beliefs that attract me which I still accept. But it’s important to visualize a scenario clearly and make it less scary so that your human brain can more fairly assess the evidence on the matter.

Leaving a line of retreat is a tool to use before the battle. An anti-flinching technique for use during the battle is to verbally call out the flinching reaction. I catch myself thinking things like “I think I read that sugar isn’t actually all that bad for us,” and I mentally add, “but that could just be motivated cognition, because I really want to eat this cookie right now.”

This is like pressing pause on my decision-making module, giving me time to engage my "curiosity modules," which are trained to want to know what's actually true rather than to justify eating cookies. For example, I envision what could go badly if I came to false beliefs on the matter—due to motivated cognition or some other thinking failure. In this case, I might imagine gaining weight or having lower energy in the long run as possible consequences of being wrong about the effects of sugar intake. If I were considering whether to buy fire insurance, I would imagine what might happen if my intuitive judgment about buying it turned out to be wrong.

These tools will be important when we consider the implications of AI. We’re about to talk about some heavy shit, but remember: Don’t flinch away. Look reality in the eye and don’t back down.

* * *

1. Jennifer Rosenberg, "The War is Over . . . Please Come Out," About.com, accessed November 10, 2012, http://history1900s.about.com/od/worldwarii/a/soldiersurr.htm.

2. Hiroo Onoda, No Surrender: My Thirty-Year War, 1st ed., trans. Charles S. Terry (New York: Kodansha International, 1974).