
Intelligence Explosion


Suppose you’re a disembodied spirit watching the universe evolve. For the first nine billion years, almost nothing happens.

“God, this is so boring!” you complain.

“How so?” asks your partner.

“There’s no depth or complexity to anything because nothing aims at anything. Nothing optimizes for anything. There’s no plot. It’s just a bunch of random crap that happens. Worse than Seinfeld.”

“Really? What’s that over there?”

You follow your partner’s gaze and notice a tiny molecule in a pool of water on a rocky planet. Before your eyes, it makes a copy of itself. And then another. And then the copies make copies of themselves.

“A replicator!” you exclaim. “Within months there could be millions of those things.”

“I wonder if this will lead to squirrels.”

“What are squirrels?”

Your partner explains the functional complexity of squirrels, which they encountered in Universe 217.

“That’s absurd! At anything like our current rate of optimization, we wouldn’t see any squirrels come about by pure accident until long after the heat death of the universe.”

But soon you notice something even more important: some of the copies are errors. Those copying errors explore the neighboring regions of the search space. Some of these regions contain better replicators, and those superior replicators end up with more copies of themselves than the originals and go on to explore their own neighborhoods.
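That variation-and-selection loop is the whole trick, and it is simple enough to sketch in a few lines. The sketch below is a toy, not a model of chemistry: the bit-string "genome," the fitness function, and every parameter are invented purely for illustration.

```python
import random

# A toy replicator: a bit string whose "fitness" is just the number of 1s.
# The fitness function, mutation rate, and population size are all made up;
# the point is only the loop of copying, erring, and out-copying.
GENOME_LENGTH = 40
MUTATION_RATE = 0.02

def fitness(genome):
    return sum(genome)

def copy_with_errors(genome):
    """Copy a genome, occasionally flipping a bit (a 'copying error')."""
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

# Start with crude replicators that are bad at everything.
population = [[0] * GENOME_LENGTH for _ in range(50)]

for generation in range(200):
    # Each replicator makes copies; the errors explore neighboring designs.
    offspring = [copy_with_errors(g) for g in population for _ in range(2)]
    # Better replicators out-copy the rest and take over the pool.
    population = sorted(offspring, key=fitness, reverse=True)[:50]

print("best fitness after 200 generations:", fitness(population[0]))
```

Nothing in the loop knows what a good replicator looks like; copying errors supply the exploration, and differential copying does the selecting.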

The next few billion years are by far the most exciting you’ve seen. Simple replicators lead to simple organisms, which lead to complex life, which leads to brains, which lead to the Homo line of apes.

At first, Homo looks much like any other brained animal. It shares 99% of its coding DNA with chimpanzees. You might be forgiven for thinking the human brain wasn’t that big a deal—maybe it would enable a 50% increase in optimization speed, or something like that. After all, animal brains have been around for millions of years, and have gradually evolved without any dramatic increase in function.

But then one thing leads to another. Before your eyes, humans become smart enough to domesticate crops, which leads to a sedentary lifestyle and repeatable trade, which leads to writing for keeping track of debts. Farming also generates food surpluses, and that enables professional specialization, which gives people the ability to focus on solving problems other than finding food and fucking. Professional specialization leads to science and technology and the industrial revolution, which lead to space travel and iPhones.

The difference between chimpanzees and humans illustrates how powerful it can be to rewrite an agent’s cognitive algorithms. But, of course, the algorithm’s origin in this case was merely evolution, blind and stupid. An intelligent process with a bit of foresight can leap through the search space more efficiently. A human computer programmer can make innovations in a day that evolution couldn’t have discovered in billions of years.

But for the most part, humans still haven’t figured out how their own cognitive algorithms work, or how to rewrite them. And the computers we program don’t understand their own cognitive algorithms, either (for the most part). But one day they will.

Which means the future contains a feedback loop that the past does not:

If you’re Eurisko, you manage to modify some of your metaheuristics, and the metaheuristics work noticeably better, and they even manage to make a few further modifications to themselves, but then the whole process runs out of steam and flatlines.

It was human intelligence that produced these artifacts to begin with. Their own optimization power is far short of human—so incredibly weak that, after they push themselves along a little, they can’t push any further. Worse, their optimization at any given level is characterized by a limited number of opportunities, which once used up are gone—extremely sharp diminishing returns. . . .

When you first build an AI, it’s a baby—if it had to improve itself, it would almost immediately flatline. So you push it along using your own cognition . . . and knowledge—not getting any benefit of recursion in doing so, just the usual human idiom of knowledge feeding upon itself and insights cascading into insights. Eventually the AI becomes sophisticated enough to start improving itself—not just small improvements, but improvements large enough to cascade into other improvements. . . . And then you get what I. J. Good called an “intelligence explosion.”

. . . and the AI leaves our human abilities far behind.
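The contrast in the passage above can be made concrete with a toy model. Everything in it is invented: the scalar "capability," the two gain curves, the parameters. The only point is the difference in shape between sharply diminishing returns, where the opportunities at each level run out, and compounding returns, where each improvement makes the next one easier.

```python
# Toy model of a self-improving system. All numbers are made up; only the
# shapes of the two curves matter.

CEILING = 5.0  # pretend cap on how far weak self-modification can go

def diminishing(capability):
    # Limited opportunities at each level; once used up, they're gone.
    return 0.3 * max(0.0, CEILING - capability)

def compounding(capability):
    # Each improvement makes the next improvement easier.
    return 0.05 * capability

def run(gain, capability=1.0, rounds=60):
    for _ in range(rounds):
        capability += gain(capability)
    return capability

print("diminishing returns:", round(run(diminishing), 2))  # flatlines near the ceiling
print("compounding returns:", round(run(compounding), 2))  # keeps growing past it
```

Under the first curve the system pushes itself along a little and stops, the way Eurisko did; under the second, improvement feeds on improvement, and there is no obvious place for the curve to stop.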

At that point, we might as well be dumb chimpanzees watching as those newfangled “humans” invent fire and farming and writing and science and guns and planes and take over the whole world. And like the chimpanzees, at that point we won’t be in a position to negotiate with our superiors. Our future will depend on what they want.