The Laws of Thought

If someone doesn’t agree with me on the laws of logic, probability theory, and decision theory, then I won’t get very far with them in discussing the intelligence explosion because they’ll end up arguing that human intelligence runs on magic, or that a machine will only become more benevolent as it becomes more intelligent, or that it’s simple to specify what humans want, or some other bit of foolishness. So, let’s make sure we agree on the basics before we try to agree about more complex matters.

Logic

Luckily, not many people disagree about logic. As with math, we might make mistakes out of ignorance, but once someone shows us the proof for the Pythagorean theorem or for the invalidity of affirming the consequent, we agree. Math and logic are deductive systems, where the conclusion of a successful argument follows necessarily from its premises, given the axioms of the system you’re using: number theory, geometry, predicate logic, etc. (Of course, one cannot fully escape uncertainty: Andrew Wiles’ famous proof of Fermat’s Last Theorem is over one hundred pages long, so even if I worked through the whole thing myself I wouldn’t be certain I hadn’t made a mistake somewhere.)
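
To make the second example concrete, "affirming the consequent" is the argument form

$$P \rightarrow Q, \quad Q, \quad \therefore P$$

and it is invalid because the premises can be true while the conclusion is false: "If it rains, the street gets wet; the street is wet; therefore it rained" fails whenever the street is wet for some other reason.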

Why should we let the laws of logic dictate our thinking? There needn’t be anything spooky about this. The laws of logic are baked into how we’ve agreed to talk to each other. If you tell me the car in front of you is 100% red and at the same time and in the same way 100% blue, then the problem isn’t so much that you’re “operating under different laws of logic,” but rather that we’re speaking different languages. Part of what I mean when I say that the car in front of me is “100% red” is that it isn’t also 100% blue in the same way at the same time. If you disagree, then we’re not speaking the same language. You’re speaking a language that uses many of the same sounds and spellings as mine but doesn’t mean the same things.

But logic is a system of certainty, and our world is one of uncertainty. In our world, we need to talk not about certainties but about probabilities.

Probability Theory

Give a child religion first, and she may find it hard to shake even when she encounters science. Give a child science first, and when she discovers religion it will look silly.

For this reason, I will explain the correct theory of probability first, and only later mention the incorrect theory.

What is probability? It’s a measure of how likely a proposition is to be true, given what else you believe. And whatever our theory of probability is, it should be consistent with common sense (for example, consistent with logic), and it should be consistent with itself (if you can calculate a probability with two methods, both methods should give the same answer).

Several authors have shown that the axioms of probability theory can be derived from these assumptions plus logic.1,2 In other words, probability theory is just an extension of logic. If you accept logic, and you accept the above (very minimal) assumptions about what probability is, then whether you know it or not you have accepted probability theory.
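
Concretely, the rules these derivations arrive at are the sum and product rules (here in the form Jaynes gives them, with X standing for your background knowledge):

$$P(A \mid X) + P(\lnot A \mid X) = 1$$

$$P(A, B \mid X) = P(A \mid B, X)\, P(B \mid X)$$

The rest of probability theory, Bayes' Theorem included, can be built up from these two rules.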

Another reason to accept probability theory is this (roughly speaking): If you don’t, and you are willing to take bets on your beliefs being true, then someone who is using probability theory can take all your money. (For the proof, look into Dutch Book arguments.3)
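
To get the flavor of the argument, here is a minimal sketch in Python; the outcomes and dollar amounts are invented for illustration. An agent whose beliefs about two mutually exclusive, exhaustive outcomes sum to more than 1 will accept a pair of bets that loses money no matter which outcome occurs:

```python
# A minimal Dutch Book sketch. The agent's degrees of belief in two
# mutually exclusive, exhaustive outcomes sum to 1.2 rather than 1.
belief = {"rain": 0.6, "no rain": 0.6}

STAKE = 100  # each bet pays $100 if its outcome occurs

# The agent regards belief * STAKE as a fair price for each bet,
# so a bookie sells it both bets, collecting $120 up front.
total_price = sum(p * STAKE for p in belief.values())

# Exactly one of the two outcomes occurs, so exactly one bet pays out.
for outcome in belief:
    net = STAKE - total_price
    print(f"If '{outcome}' occurs, the agent nets ${net:.0f}")
# Either way the agent is out $20: a guaranteed loss.
```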

Perhaps the most useful rule to be derived from the axioms of probability theory is Bayes’ Theorem, which tells you exactly how your probability for a statement should change as you encounter new information. (In the cognitive science of rationality, many cognitive biases are defined in terms of how they violate either basic logic or Bayes’ Theorem.) If you’re not using Bayes’ Theorem to update your beliefs, then you’re violating probability theory, which is an extension of logic.
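
In its simplest form, Bayes' Theorem says that the probability of a hypothesis H after seeing evidence E is

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

that is, your prior probability P(H), reweighted by how strongly H predicted the evidence. In the classic medical-test illustration: if 1% of patients have a disease, the test detects 80% of real cases, and it false-alarms on 9.6% of healthy patients, then a positive result gives P(H | E) = (0.8 × 0.01) / (0.8 × 0.01 + 0.096 × 0.99) ≈ 0.078, much lower than most people guess.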

Of course, the human brain is too slow to make explicit Bayesian calculations all day. But you can develop mental heuristics that do a better job of approximating Bayesian calculations than many of our default evolved heuristics do.
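
One such heuristic is Bayes' Theorem in odds form, where an update is a single multiplication: posterior odds equal prior odds times the likelihood ratio. A sketch in Python, reusing the medical-test numbers from above:

```python
# Bayes' Theorem in odds form: posterior odds = prior odds * likelihood ratio.
# A single multiplication is far easier to approximate in your head.
prior = 0.01             # P(disease)
p_e_given_h = 0.80       # P(positive test | disease)
p_e_given_not_h = 0.096  # P(positive test | no disease)

prior_odds = prior / (1 - prior)                  # 1:99
likelihood_ratio = p_e_given_h / p_e_given_not_h  # about 8.3

posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)
print(f"P(disease | positive test) = {posterior:.3f}")  # 0.078
```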

This is not the place for a full tutorial on logic or probability theory or rationality training. I just want to introduce the core tools we’ll be using so I can later explain why I came to one conclusion instead of another concerning the intelligence explosion. Still, you might want to at least read this short tutorial on Bayes’ Theorem before continuing.

Finally, I owe you a quick explanation of why frequentism, the theory of probability you probably learned in school like I did, is wrong. Whereas the Bayesian view sees probability as a measure of uncertainty about the world, frequentism sees probability as “the proportion of times the event would occur in a long run of repeated experiments.” I’ll mention just two problems with this, out of at least fifteen:4

  • Frequentism is not derived from the laws of logic, and is not self-consistent. Under frequentism, calculating a probability with two methods can often lead to two different answers.
  • Frequentism judges probability based not exclusively on what we know but also on a long series of hypothetical “experiments” that we may never observe, and which are only vaguely defined. That is, frequentism abandons empiricism.

If frequentism is wrong, why is it so popular? There are many reasons, reviewed in this book about the history of Bayes’ Theorem.5 Anyway, when I talk about probability theory, I’ll be referring to Bayesianism.

Decision Theory

I explained why there are laws of thought when it comes to epistemic rationality (acquiring true beliefs), and I pointed you to some detailed tutorials. But how can there be laws of thought concerning instrumental rationality (maximally achieving one’s goals)? Isn’t what we want subjective, and therefore not subject to rules?

Yes, you may have any number of goals. But when it comes to maximally achieving those goals, there are indeed rules. If you think about it, this should be obvious. Whatever goals you have, there are always really stupid ways to go about trying to achieve them. If you want to know what exists, you shouldn’t bury your head in the sand and refuse to look at what exists. And if you want to achieve goals in the world, you probably shouldn’t paralyze your entire body, unless paralysis is your only goal.

Let’s be more specific. Decision theory is about choosing among possible actions based on how much you desire the possible outcomes of those actions.

How does this work? We can describe what you want with something called a utility function, which assigns a number that expresses how much you desire each possible outcome (or “description of an entire possible future”). Perhaps a single scoop of ice cream has forty “utils” for you, the death of your daughter has -274,000 utils for you, and so on. This numerical representation of everything you care about is your utility function.

We can combine your probabilistic beliefs and your utility function to calculate the expected utility for any action under consideration. The expected utility of an action is the average utility of the action’s possible outcomes, weighted by the probability that each outcome occurs.
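
Written out: if an action a can lead to outcomes o, each with probability P(o | a) and utility U(o), then

$$EU(a) = \sum_{o} P(o \mid a)\, U(o)$$

and the decision rule is to choose whichever available action has the highest expected utility.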

Suppose you’re walking along a freeway with your young daughter. You see an ice cream stand across the freeway, but you’ve recently injured your leg and wouldn’t be able to move quickly across the freeway. Given what you know, if you send your daughter across the freeway to get you some ice cream, there’s a 60% chance you’ll get some ice cream, a 5% chance your child will be killed by speeding cars, and other probabilities for other outcomes.

To calculate the expected utility of sending your daughter across the freeway for ice cream, we multiply the utility of the first outcome by its probability: 0.6 × 40 = 24. Then, we add to this the product of the next outcome’s utility and its probability: 24 + (-274,000 × 0.05) = -13,676. And suppose the sum of the products of the utilities and probabilities for other possible outcomes was zero. The expected utility of sending your daughter across the freeway for ice cream is thus very low (as we would expect from common sense). You should probably take one of the other actions available to you, for example the action of not sending your daughter across the freeway for ice cream, or some action with even higher expected utility.
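
Here is the same calculation as a sketch in Python (the outcome labels and the single catch-all bucket for "other outcomes" are mine; the numbers come from the example above):

```python
# Expected utility of "send daughter across the freeway for ice cream."
outcomes = [
    # (probability, utility)
    (0.60,       40),  # you get some ice cream
    (0.05, -274_000),  # your daughter is killed
    (0.35,        0),  # all other outcomes; their terms sum to zero here
]

expected_utility = sum(p * u for p, u in outcomes)
print(expected_utility)  # -13676.0
```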

A rational agent aims to maximize its expected utility, because an agent that does so will on average get the most possible of what it wants, given its beliefs and desires.

It seems intuitive that a rational agent should maximize its expected utility, but why is that the only rational way to do things? Why not try to minimize the worst possible loss? Why not try to maximize the weighted sum of the cubes of the possible utilities?

The justification for the “maximize expected utility” principle was discovered in the 1940s by von Neumann and Morgenstern. In short, they proved that if a few axioms about preferences are accepted, then an agent can only act consistently with its own preferences by choosing the action that maximizes expected utility.6

What are these axioms? Like the axioms of probability theory, they are simple and intuitive. For example, one of them is the transitivity axiom, which says that if an agent prefers A to B, and it prefers B to C, then it must prefer A to C. This axiom is motivated by the fact that someone with nontransitive preferences can be tricked out of all her money even while only making trades that she prefers.
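
Here is a sketch of that money pump, with invented items and prices: the agent below prefers A to B, B to C, and, intransitively, C to A, and will pay one dollar any time it can trade its current item for one it prefers.

```python
# A money pump against intransitive preferences: A > B, B > C, yet C > A.
# The agent pays $1 per trade up to a preferred item, so cycling through
# the offers returns it to its starting item, strictly poorer each lap.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (preferred, over)

item, money = "C", 100
for offered in ["B", "A", "C", "B", "A", "C"]:  # two laps around the cycle
    if (offered, item) in prefers:              # prefers the offered item,
        item, money = offered, money - 1        # so pays $1 to trade
print(item, money)  # "C 94": same item as at the start, $6 poorer
```

This is the same lesson as the Dutch Book argument for probability theory: violate the axioms and you can be exploited indefinitely.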

I won’t go into the details here because this result has been so widely accepted: a rational agent maximizes expected utility.

Unfortunately, humans are not rational agents. As we’ll see in the next chapter, humans are crazy.

* * *

1. E. T. Jaynes, Probability Theory: The Logic of Science, ed. G. Larry Bretthorst (New York: Cambridge University Press, 2003), doi:10.2277/0521592712.

2. Stefan Arnborg and Gunnar Sjödin, “On the Foundations of Bayesianism,” AIP Conference Proceedings 568, no. 1 (2001): 61–71, http://connection.ebscohost.com/c/articles/5665715/foundations-bayesianism.

3. Kenny Easwaran, “Bayesianism I: Introduction and Arguments in Favor,” Philosophy Compass 6, no. 5 (2011): 312–320, doi:10.1111/j.1747-9991.2011.00399.x.

4. Alan Hájek, “‘Mises Redux’—Redux: Fifteen Arguments Against Finite Frequentism,” Erkenntnis 45, no. 2 (November 1996): 209–227, doi:10.1007/BF00276791.

5. Sharon Bertsch McGrayne, The Theory That Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy (New Haven, CT: Yale University Press, 2011).

6. John von Neumann and Oskar Morgenstern, Theory of Games and Economic Behavior, 2nd ed. (Princeton, NJ: Princeton University Press, 1947).