From Skepticism to Technical Rationality
Before I talk about machine superintelligence, I need to talk about rationality. My understanding of rationality shapes the way I see everything, and it is the main reason I take the problems of machine superintelligence seriously.
If I could say only one thing to the “atheist” and “skeptic” communities, it would be this:
Skepticism and critical thinking teach us important lessons: Extraordinary claims require extraordinary evidence. Correlation does not imply causation. Don’t take authority too seriously. Claims should be specific and falsifiable. Remember to apply Occam’s razor. Beware logical fallacies. Be open-minded, but not gullible. Et cetera.
But this is only the beginning. In writings on skepticism and critical thinking, these guidelines are only loosely specified, and they are not mathematically grounded in a well-justified normative theory. Instead, they are a grab bag of vague but generally useful rules of thumb. They provide a great entry point to rational thought, but they are no more than a beginning. For forty years there has been a mainstream cognitive science of rationality, with detailed models of how our thinking goes wrong and well-justified mathematical theories of what it means for a thinking process to be “wrong.” This is what we might call the science and mathematics of technical rationality. It takes more effort to learn and practice than entry-level skepticism does, but it is powerful. It can improve your life and help you to think more clearly about the world’s toughest problems.
You will find the cognitive science of rationality described in every university textbook on thinking and decision making. For example:
- Baron, Thinking and Deciding
- Hastie and Dawes, Rational Choice in an Uncertain World
- Bazerman and Moore, Judgment in Managerial Decision Making
- Plous, The Psychology of Judgment and Decision Making
- Gilboa, Making Better Decisions
You will also find pieces of it in the recent popular-level books on human irrationality. For example:
- Ariely, Predictably Irrational
- Kahneman, Thinking, Fast and Slow
- Thaler and Sunstein, Nudge
- Tavris and Aronson, Mistakes Were Made (But Not by Me)
And you will, of course, find it in the academic journals. Searching Google Scholar for just a few of the field’s common terms turns up the relevant literature:
- “heuristics and biases”
- “affect heuristic”
- “myside bias”
- “base rate fallacy”
- “framing effects”
- “availability bias”
- “conjunction fallacy”
So what is this mainstream cognitive science of rationality—or, as I will call it, technical rationality?
There are two parts to technical rationality: normative and descriptive.
The normative part describes the laws of thought and action—logic, probability theory, and decision theory. Logic and probability theory describe how you should reason if you want to maximize your chances of acquiring true beliefs. Decision theory describes how you should act if you want to maximize your chances of acquiring what you want. Of course, these are not physical laws but normative laws. You can break these laws if you choose, and people often do. But if you break the laws of logic or probability theory you decrease your chances of arriving at true beliefs; if you break the laws of decision theory you decrease your chances of achieving your goals.
The descriptive part describes not how we should reason and act, but how we usually do reason and act. The descriptive program includes research on how humans think and decide. It also includes a catalog of common ways in which we violate the laws of thought and action from logic, probability theory, and decision theory. A cognitive bias is a particular way of violating logic, probability theory, or decision theory. That’s how “bias” is defined (see, e.g., Thinking and Deciding or Rationality and the Reflective Mind, each of which includes a table of common biases along with the part of logic, probability theory, or decision theory that each bias violates).
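To make this definition concrete, here is a minimal sketch of one bias from the catalog, the conjunction fallacy. Probability theory requires that P(A and B) ≤ P(A), yet in the classic “Linda” experiments many subjects judge a conjunction to be more probable than one of its conjuncts. The numbers below are hypothetical illustrations, not data from the studies:

```python
def violates_conjunction_rule(p_a: float, p_a_and_b: float) -> bool:
    """Return True if the judged probabilities break P(A and B) <= P(A)."""
    return p_a_and_b > p_a

# Hypothetical probabilities as judged by one subject:
p_bank_teller = 0.05          # "Linda is a bank teller"
p_teller_and_feminist = 0.20  # "Linda is a bank teller AND a feminist"

print(violates_conjunction_rule(p_bank_teller, p_teller_and_feminist))  # True
```

The “bias” here is not a vague failure of common sense; it is a precise violation of a specific law of probability theory, which is what makes the descriptive catalog mathematically grounded.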
Cognitive scientists also distinguish two domains of rationality: epistemic and instrumental.
Epistemic rationality concerns forming true beliefs, or having in your head an accurate map of the territory out there in the world. Epistemic rationality is governed by the laws of logic and probability theory.
Instrumental rationality concerns achieving your goals, or maximizing your chances of getting what you want. Or, more formally, maximizing your “expected utility.” This is also known as “winning.” Instrumental rationality is governed by the laws of decision theory.
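The core rule of decision theory can be sketched in a few lines: assign each action a probability distribution over outcomes and a utility for each outcome, then choose the action with the highest expected utility. The actions, probabilities, and utilities below are hypothetical illustrations, not anything from the studies cited above:

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Two hypothetical actions an agent might choose between:
actions = {
    "safe_bet":  [(0.9, 100), (0.1, 0)],    # EU = 0.9*100 + 0.1*0  = 90
    "long_shot": [(0.3, 500), (0.7, -50)],  # EU = 0.3*500 - 0.7*50 = 115
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # long_shot
```

Note that the “winning” action here is the riskier one: maximizing expected utility is not the same as avoiding loss, it is weighing every outcome by how likely it is and how much you care about it.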
In a sense, instrumental rationality takes priority, because the point of forming true beliefs is to help you achieve your goals, and sometimes spending too much time on epistemic rationality is not instrumentally rational. For example, I know some people who would be more likely to achieve their goals if they spent less time studying rationality and more time, say, developing their social skills.
Still, it can be useful to talk about epistemic and instrumental rationality separately. Just know that, when I talk about epistemic rationality, I’m talking about following the laws of logic and probability theory, and that, when I talk about instrumental rationality, I’m talking about following the laws of decision theory.
And from now on, when I talk about “rationality,” I mean technical rationality.
Before I say more about rationality, though, I need to be sure we’re clear on what rationality is not. I want to explain why Spock is not rational.