My Own Story
Sometime this century, machines will surpass human levels of intelligence and ability. This event—the “intelligence explosion”—will be the most important event in our history, and navigating it wisely will be the most important thing we can ever do.
Facing the Intelligence Explosion is my attempt to explain why I believe this and how I think we should respond.
I’ll begin with my personal background. It will help to know who I am and where I’m coming from. That information is some evidence about how you should respond to the other things I say.
When my religious beliefs finally succumbed to reality, I deconverted and started a blog to explain atheism and naturalism to others. Common Sense Atheism became one of the most popular atheism blogs on the internet. I enjoyed translating the papers of professional philosophers into understandable English, and I enjoyed speaking with experts in the field for my podcast Conversations from the Pale Blue Dot. But losing my religion didn’t tell me what I should believe or what I should be doing with my life, and I used my blog to search for answers.
I’ve also been interested in rationality, at least since my deconversion, during which I discovered that I could easily be strongly confident of things that I had no evidence for, things that had been shown false, and even total nonsense. How could the human brain be so incredibly misled? Obviously, I wasn’t Aristotle’s “rational animal.” Instead, I was Gazzaniga’s rationalizing animal. Critical thinking was a major focus of Common Sense Atheism, and I spent as much time criticizing poor thinking in atheists as I did criticizing poor thinking in theists.
My interest in rationality inevitably led me (in mid-2010, I think) to a treasure trove of articles on the mainstream cognitive science of rationality: the website Less Wrong. It was here that I first encountered the idea of intelligence explosion:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. . . . Thus the first ultraintelligent machine is the last invention that man need ever make.2
I tell the story of my first encounter with this famous paragraph here. In short:
Good’s paragraph ran me over like a train. Not because it was absurd, but because it was clearly true. Intelligence explosion was a direct consequence of things I already believed; I just hadn’t noticed! Humans do not automatically propagate their beliefs, so I hadn’t noticed that my worldview already implied intelligence explosion.
I spent a week looking for counterarguments, to check whether I was missing something, and then accepted that an intelligence explosion was likely (so long as scientific progress continued). And though I hadn’t read Eliezer on the complexity of value, I had read David Hume and Joshua Greene. So I already understood that an arbitrary artificial intelligence would almost certainly not share our values.
My response to this discovery was immediate and transforming:
I put my other projects on hold and spent the next month reading almost everything Eliezer had written. I also found articles by Nick Bostrom and Steve Omohundro. I began writing articles for Less Wrong and learning from the community. I applied to [the Machine Intelligence Research Institute's] Visiting Fellows program and was accepted. I quit my job in L.A., moved to Berkeley, worked my ass off, got hired, and started collecting research related to rationality and intelligence explosion.
As my friend Will Newsome once said, “Luke seems to have two copies of the ‘Take Ideas Seriously’ gene.”
Of course, what some people laud as “taking ideas seriously,” others see as an innate tendency toward fanaticism. Here’s a comment I could imagine someone making:
I’m not surprised. Luke grew up believing that he was on a cosmic mission to save humanity before the world ended with the arrival of a superpowerful being (the return of Christ). He lost his faith and, with it, his sense of epic purpose. His fear of nihilism made him susceptible to seduction by something that felt like moral realism, and his need for an epic purpose made him susceptible to seduction by existential risk reduction.
One response I could make to this would be to say that this is just “psychologizing” and doesn’t address the state of the evidence for the claims I now defend concerning intelligence explosion. That’s true, but again: Plausible facts about my psychology do provide some Bayesian evidence about how you should respond to the words I’m writing in this book.
Another response I could make would be to explain why I don’t think this is quite what happened, though elements of it are certainly true. (For example, I don’t recall feeling that the return of Christ was imminent or that I was on a cosmic mission to save every last soul, though as an evangelical Christian I was theologically committed to those positions. But it’s certainly the case that I am drawn to “epic” things, like the rock band Muse and the movie Avatar.) But I don’t want to make this chapter even more about my personal psychology.
A third response would be to appeal to social proof. Some Common Sense Atheism readers had followed my writing closely enough to develop a strong respect for my commitment to intellectual honesty and to changing my mind when I’m wrong. When I started writing about intelligence explosion issues, they thought, “Well, I used to think this intelligence explosion stuff was pretty kooky, but if Luke is taking it seriously then maybe there’s more to it than I’m realizing,” and they followed me to Less Wrong (where I was now posting regularly). I’ll also mention that a significant causal factor in my being made Executive Director of the Machine Intelligence Research Institute after so little time with the organization was that the staff could see I was seriously devoted to rationality and debiasing: to saying “oops” and changing my mind in response to argument, and to acting on decision theory as often as I could, rather than on habit and emotion as I would otherwise be inclined to do.
In surveying my possible responses to the “fanaticism” criticism above, I’ve already put up something of a defense. But that’s about as far as I’ll take it. I want people to take what I say with a solid serving of salt. I am, after all, only human. Hopefully my readers will take into account not only my humanity but also the force of the arguments and evidence I will later supply concerning the arrival of machine superintelligence.
* * *
1A. M. Turing, “Intelligent Machinery, A Heretical Theory” (lecture given to the ’51 Society, Manchester, 1951).
2Irving John Good, “Speculations Concerning the First Ultraintelligent Machine,” in Advances in Computers, ed. Franz L. Alt and Morris Rubinoff, vol. 6 (New York: Academic Press, 1965), 31–88, doi:10.1016/S0065-2458(08)60418-0.