Playing Taboo with “Intelligence”

Eliezer Yudkowsky recounts:

Years ago when I was on a panel with Jaron Lanier, he had offered some elaborate argument that no machine could be intelligent, because it was just a machine and to call it “intelligent” was therefore bad poetry, or something along those lines. Fed up, I finally snapped: “Do you mean to say that if I write a computer program and that computer program rewrites itself and rewrites itself and builds its own nanotechnology and zips off to Alpha Centauri and builds its own Dyson Sphere, that computer program is not intelligent?”

Much of the confusion about AI comes from disagreements about the meaning of “intelligence.”

Let me clear things up with a parable:

If a tree falls in the forest, and no one hears it, does it make a sound?

Albert: “Of course it does. What kind of silly question is that? Every time I’ve listened to a tree fall, it made a sound, so I’ll guess that other trees falling also make sounds. I don’t believe the world changes around when I’m not looking.”

Barry: “Wait a minute. If no one hears it, how can it be a sound?”

Albert and Barry are not arguing about facts, but about definitions:

The first person is speaking as if “sound” means acoustic vibrations in the air; the second person is speaking as if “sound” means an auditory experience in a brain. If you ask “Are there acoustic vibrations?” or “Are there auditory experiences?”, the answer is at once obvious. And so the argument is really about the definition of the word “sound.”

We need not argue about definitions. Whenever we might be using different meanings for a word, we can cut to the chase by replacing the confusing symbol (the word) with the substance (whatever we actually mean by it).

This is like playing the game Taboo (by Hasbro). In Taboo, you have to describe something to your partner without using a certain list of words:

For example, you might have to get your partner to say “baseball” without using the words “sport,” “bat,” “hit,” “pitch,” “base,” or, of course, “baseball.”

This game is good practice for a discussion about AI. If two people notice they’re using different definitions of “intelligence,” they don’t need to argue about whose definition is “right.” They can taboo the word “intelligence” and talk about “analytic ability” or “problem-solving ability” or whatever it is they mean by “intelligence.” Now they are closer to arguing about facts instead of definitions.

Shane Legg once collected seventy-one definitions of intelligence.1 Scanning through the definitions for commonly occurring features, he notes that people seem to think intelligence is:

  • A property that an individual agent has as it interacts with its environment or environments.
  • . . . Related to the agent’s ability to succeed or profit with respect to some goal or objective.
  • Depend[ent] on how able [the] agent is to adapt to different objectives and environments.

If we combine these features, we get something like this:

Intelligence measures an agent’s ability to achieve goals in a wide range of environments.2
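
Legg and Hutter go on to turn this sentence into a formal measure in the second paper cited below. What follows is only my rough sketch of their idea, with the technical machinery glossed over: an agent π is scored by the expected reward V it earns in each computable environment μ, with simpler environments (those of lower Kolmogorov complexity K) weighted more heavily:

\[
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^{\pi}
\]

Here E is the set of all computable environments. You don’t need the math to follow the rest of the discussion; the plain-English sentence above is what matters.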

This is, after all, the kind of intelligence that lets humans dominate all other species on the planet, and the kind of intelligence that leaves us superior to machines (at least for now). Termites build bigger cities (relative to their body size) and whales have bigger brains, but humans have the intelligence to adapt to almost every terrestrial environment and make tools and spears and boats and farms. Watson can beat us at Jeopardy! and WolframAlpha can process computable knowledge better than we can, but drop either of them in a lake, or expect them to find their own electricity, and they are helpless. Unlike Watson and WolframAlpha, humans have cross-domain goal-optimizing ability. Or:

A bee builds hives, and a beaver builds dams; but a bee doesn’t build dams and a beaver doesn’t build hives. A human, watching, thinks, “Oh, I see how to do it” and goes on to build a dam using a honeycomb structure for extra strength.

But wait a minute. Suppose Bill Gates gives me ten billion dollars. I now have much greater ability to “achieve goals in a wide range of environments,” but would we say my “intelligence” has gone up? I doubt it. If we want to measure an agent’s “intelligence,” we should take that agent’s ability to optimize for its goals in a wide range of environments—its “optimization power,” we might say—and divide it by the resources it used to do so:

intelligence = optimization power / resources used

This definition sees intelligence as efficient cross-domain optimization. Intelligence is what allows an agent to steer the future, a power that is amplified by the resources at its disposal.

This may or may not match your own intuitive definition of “intelligence.” But it doesn’t matter. I’ve tabooed “intelligence.” I’ve replaced the symbol with the substance. When discussing AI, I could speak only of “efficient cross-domain optimization,” and nothing about your preferred definition of “intelligence” would matter for anything I say.

Now, “intelligence” is shorter, so I’d prefer to just say that. But it’s best if you always read “intelligence” (when I use it) as “efficient cross-domain optimization.”

And, now that we understand what I mean by “intelligence,” we’re ready to talk about AI.

* * *

1Shane Legg and Marcus Hutter, A Collection of Definitions of Intelligence (Manno-Lugano, Switzerland: IDSIA, July 15, 2007), http://www.idsia.ch/idsiareport/IDSIA-07-07.pdf.

2Shane Legg and Marcus Hutter, A Formal Measure of Machine Intelligence (Manno-Lugano, Switzerland: IDSIA, April 12, 2006), http://www.idsia.ch/idsiareport/IDSIA-10-06.pdf.