<h1>Not Built to Think About AI</h1>
<p><small><em>Facing the Intelligence Explosion</em>, December 15, 2011. <a class="url" href="https://intelligenceexplosion.com/sv/2011/not-built-to-think-about-ai/">https://intelligenceexplosion.com/sv/2011/not-built-to-think-about-ai/</a></small></p>
<p><span class="dropcap">I</span>n the <em>Terminator</em> movies, the <a href="http://en.wikipedia.org/wiki/Skynet_(Terminator)">Skynet</a> AI becomes self-aware, kills billions of people, and dispatches killer robots to wipe out the remaining bands of human resistance fighters. That sounds pretty bad, but in <em>NPR</em>’s <a href="http://www.npr.org/2011/01/11/132840775/The-Singularity-Humanitys-Last-Invention">piece</a> on the intelligence explosion, eBay programmer Keefe Roedersheimer explained that the creation of an <em>actual</em> machine superintelligence would be much worse than that.</p>
<blockquote><p>M<span class="small-caps">artin</span> K<span class="small-caps">aste</span> (NPR): Much worse than <em>Terminator</em>?</p>
<p>K<span class="small-caps">eefe</span> R<span class="small-caps">oedersheimer</span>: Much, much worse.</p>
<p>K<span class="small-caps">aste</span>: . . . That’s a moonscape with people hiding under burnt out buildings being shot by lasers. I mean, what could be worse than <em>that</em>?</p>
<p>R<span class="small-caps">oedersheimer</span>: All the people are dead.<a id="fn1x9-bk" href="#fn1x9"><sup>1</sup></a></p></blockquote>
<p>Why did he say that? For most goals an AI could have—whether it be proving the <a href="http://wiki.lesswrong.com/wiki/Riemann_Hypothesis_Catastrophe">Riemann hypothesis</a> or maximizing oil production—the simple reason <a href="http://intelligence.org/files/AIPosNegFactor.pdf">is that</a> “the AI does not love you, nor does it hate you, but you are made of atoms which it can use for something else.”<a id="fn2x9-bk" href="#fn2x9"><sup>2</sup></a> And when a superhuman AI notices that we humans are likely to <em>resist</em> having our atoms used for “something else,” and therefore pose a threat to the AI and its goals, it will be motivated to wipe us out as quickly as possible—not in a way that exposes its critical weakness to an intrepid team of heroes who can save the world if only they can set aside their differences . . . No. For most goals an AI could have, wiping out the human threat to its goals, as efficiently as possible, will maximize the AI’s <a href="http://facingthesing.wpengine.com/2011/the-laws-of-thought/">expected utility</a>.</p>
<p><a href="http://facingthesing.wpengine.com/wp-content/uploads/2011/12/mushroomcloud.jpg"><img loading="lazy" decoding="async" class="alignright size-medium wp-image-134" title="mushroomcloud" src="http://facingthesing.wpengine.com/wp-content/uploads/2011/12/mushroomcloud-300x223.jpg" alt="Mushroom cloud" width="300" height="223" srcset="https://intelligenceexplosion.com/wp-content/uploads/2011/12/mushroomcloud-300x223.jpg 300w, https://intelligenceexplosion.com/wp-content/uploads/2011/12/mushroomcloud.jpg 545w" sizes="(max-width: 300px) 100vw, 300px" /></a></p>
<p>And let’s face it: there are easier ways to kill us humans than to send out <em>human-shaped</em> robots that walk on the ground and seem to enjoy tossing humans into walls instead of just snapping their necks. Much better if the attack on humans is sudden, has simultaneous effects all over the world, is rapidly lethal, and defies available countermeasures. For example: Why not just use all that superintelligence to engineer an airborne, highly contagious, and fatal supervirus? A <a href="http://en.wikipedia.org/wiki/1918_flu_pandemic">1918 mutation of the flu</a> killed 3% of Earth’s population, and that was before air travel (which spreads diseases across the globe) was common, and without an ounce of intelligence going into the design of the virus. Apply a bit of intelligence, and you get a team from the Netherlands creating a variant of bird flu that “<a href="http://rt.com/news/bird-flu-killer-strain-119/">could kill half of humanity</a>.”<a id="fn3x9-bk" href="#fn3x9"><sup>3</sup></a> A superintelligence could create something far worse. Or the AI could hide underground or blast itself into space and kill us all with <em>existing</em> technology: <a href="http://lesswrong.com/lw/8f0/existential_risk/">a few thousand nuclear weapons</a>.</p>
<p>The point is not that either of these <em>particular</em> scenarios is likely. I’m just trying to point out that the reality of a situation can be quite different from what makes for good storytelling. As Oxford philosopher Nick Bostrom <a href="http://www.nickbostrom.com/existential/risks.html">put it</a>:</p>
<blockquote><p>When was the last time you saw a movie about humankind suddenly going extinct (without warning and without being replaced by some other civilization)?<a id="fn4x9-bk" href="#fn4x9"><sup>4</sup></a></p></blockquote>
<p>It makes a better story if the fight is plausibly winnable by either side. <em>The Lord of the Rings</em> wouldn’t have sold as many copies if Frodo had done the sensible thing and dropped the ring into the volcano <a href="http://www.youtube.com/watch?v=1yqVD0swvWU">from the back of a giant eagle</a>. And it’s not an interesting story if humans suddenly <em>lose</em>, full stop.</p>
<p>When thinking about AI, we must not <a href="http://lesswrong.com/lw/k9/the_logical_fallacy_of_generalization_from/">generalize from fictional evidence</a>. Unfortunately, our brains do that <em>automatically</em>.</p>
<p>A famous 1978 study asked subjects to judge which of two dangers occurred more often. Subjects thought accidents caused about as many deaths as disease, and that homicide was more frequent than suicide.<a id="fn5x9-bk" href="#fn5x9"><sup>5</sup></a> Actually, disease causes sixteen times as many deaths as accidents, and suicide is twice as common as homicide. What happened?</p>
<p>Dozens of studies on the <a href="http://wiki.lesswrong.com/wiki/Availability_heuristic">availability heuristic</a> suggest that we judge the frequency or probability of events by how easily they come to mind. That’s not too bad a heuristic to have evolved in our ancestral environment, when we couldn’t check <em>actual</em> frequencies on <em>Wikipedia</em> or determine <em>actual</em> probabilities with <a href="http://yudkowsky.net/rational/bayes">Bayes’ Theorem</a>. The brain’s heuristic is quick, cheap, and often right.</p>
<p>But like so many of our evolved cognitive heuristics, the availability heuristic often gets the wrong results. Accidents are more vivid than diseases, and thus come to mind more easily, causing us to overestimate their frequency relative to diseases. The same goes for homicide and suicide.</p>
<p>The availability heuristic also <a href="http://www.psychologytoday.com/articles/200712/10-ways-we-get-the-odds-wrong">explains</a> why people think flying is more dangerous than driving when the opposite is true: a plane crash is more vivid and more widely reported when it happens, so it’s more available to one’s memory, and the brain tricks itself into thinking the event’s <em>availability</em> indicates its <em>probability</em>.</p>
<p>What does your brain do when it thinks about superhuman AI? Most likely, it checks all the instances of superhuman AI you’ve encountered—which, to make things worse, are all <em>fictional</em>—and judges the probability of certain scenarios by how well they match the scenarios that come to mind most easily (because your mind encountered them in fiction). In <a href="http://lesswrong.com/lw/k9/the_logical_fallacy_of_generalization_from/">other words</a>, “Remembered fictions rush in and do your thinking for you.”</p>
<p>So if you’ve got intuitions about what superhuman AIs will be like, they’re probably based on <em>fiction</em>, without you even realizing it.</p>
<p>This is why I began <em>Facing the Intelligence Explosion</em> by talking about <a href="http://facingthesing.wpengine.com/2011/from-skepticism-to-technical-rationality/">rationality</a>. The moment we start thinking about AI is the moment we run straight into a thicket of common human failure modes. Generalizing from fictional evidence is one of them. Here are a few <a href="http://intelligence.org/files/CognitiveBiases.pdf">others</a>:</p>
<ul>
<li>Due to the availability heuristic, your brain will tell you that an AI wiping out mankind is incredibly unlikely because you’ve never encountered this before. Moreover, even when things go as badly as they do in, say, <a href="http://tvtropes.org/pmwiki/pmwiki.php/Main/AlienInvasion">alien invasion</a> movies, the plucky human heroes always end up finding a way to win at the last minute.</li>
<li>Because we overestimate the probability of conjunctive events but underestimate the probability of disjunctive events,<a id="fn6x9-bk" href="#fn6x9"><sup>6</sup></a> we’re likely to <em>overestimate</em> the probability that superhuman AI will turn out okay because X, Y, and Z will all happen, and we’re likely to <em>underestimate</em> the probability that superhuman AI will turn out badly, because there are many ways superhuman AI can turn out badly that don’t depend on many other events occurring.</li>
<li>Due to your brain’s <a href="http://lesswrong.com/lw/j7/anchoring_and_adjustment/">anchoring and adjustment</a> heuristic, your judgment of a situation will anchor on noticeably irrelevant information. For example, the number that comes up on a spin of a wheel of fortune will affect your guess at how many countries there are in Africa. Even though I’ve just talked about how <em>The Terminator</em> is irrelevant fictional evidence about how superhuman AIs will turn out, your brain will be tempted to anchor on <em>The Terminator</em> and then adjust away from it—but not far enough away.</li>
<li>Due to the <a href="http://lesswrong.com/lw/lg/the_affect_heuristic/">affect heuristic</a>, we judge things based on how we feel about them. We feel good about beautiful people, so we assume beautiful people are also smart and hard-working. We feel good about intelligence, so we might expect an intelligent machine to be benevolent. (But <a href="http://intelligence.org/files/SuperintelligenceBenevolence.pdf">no</a>.)</li>
<li>Due to <a href="http://lesswrong.com/lw/hw/scope_insensitivity/">scope insensitivity</a>, we do not feel any more motivated to prevent one hundred thousand deaths than we do to prevent one hundred deaths.</li>
</ul>
<p>Clearly, we were not built to think about AI.</p>
<p>To think wisely about AI, we’ll have to keep watch for—and actively resist—many kinds of common thinking errors. We’ll have to use the <a href="http://facingthesing.wpengine.com/2011/the-laws-of-thought/">laws of thought</a> rather than <a href="http://facingthesing.wpengine.com/2011/the-crazy-robots-rebellion/">normal human insanity</a> to think about AI.</p>
<p>Or, to put it another way, superhuman AI will have a huge impact on our world, so we want very badly to “win” instead of “lose” with superhuman AI. <a href="http://facingthesing.wpengine.com/2011/from-skepticism-to-technical-rationality/">Technical rationality</a> <a href="http://facingthesing.wpengine.com/2011/why-spock-is-not-rational/">done right</a> is a <a href="http://lesswrong.com/lw/7i/rationality_is_systematized_winning/">system for optimal winning</a>—indeed, it’s <a href="http://selfawaresystems.files.wordpress.com/2012/03/rational_ai_greater_good.pdf">the system a flawless AI would use to win as much as possible</a>.<a id="fn7x9-bk" href="#fn7x9"><sup>7</sup></a> So if we want to win with superhuman AI, we should use rationality to do so.</p>
<p>And that’s just what we’ll begin to do in the next chapter.</p>
<p class="footnotes">* * *</p>
<p><small><a id="fn1x9" href="#fn1x9-bk"><sup>1</sup></a>Martin Kaste, “The Singularity: Humanity’s Last Invention?,” <em>NPR</em>, All Things Considered (January 11, 2011), accessed November 4, 2012, <a class="url" href="http://www.npr.org/2011/01/11/132840775/The-Singularity-Humanitys-Last-Invention">http://www.npr.org/2011/01/11/132840775/The-Singularity-Humanitys-Last-Invention</a>.</small></p>
<p><small><a id="fn2x9" href="#fn2x9-bk"><sup>2</sup></a>Eliezer Yudkowsky, “Artificial Intelligence as a Positive and Negative Factor in Global Risk,” in <em>Global Catastrophic Risks</em>, ed. Nick Bostrom and Milan M. Ćirković (New York: Oxford University Press, 2008), 308–345.</small></p>
<p><small><a id="fn3x9" href="#fn3x9-bk"><sup>3</sup></a>RT, “Man-Made Super-Flu Could Kill Half Humanity,” <em>RT</em>, November 24, 2011, <a class="url" href="http://www.rt.com/news/bird-flu-killer-strain-119/">http://www.rt.com/news/bird-flu-killer-strain-119/</a>.</small></p>
<p><small><a id="fn4x9" href="#fn4x9-bk"><sup>4</sup></a>Nick Bostrom, “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards,” <em>Journal of Evolution and Technology</em> 9 (2002), <a class="url" href="http://www.jetpress.org/volume9/risks.html">http://www.jetpress.org/volume9/risks.html</a>.</small></p>
<p><small><a id="fn5x9" href="#fn5x9-bk"><sup>5</sup></a>Sarah Lichtenstein et al., “Judged Frequency of Lethal Events,” <em>Journal of Experimental Psychology: Human Learning and Memory</em> 4, no. 6 (1978): 551–578, <span class="textrm">doi</span>:<a href="http://dx.doi.org/10.1037/0278-7393.4.6.551">10.1037/0278-7393.4.6.551</a>.</small></p>
<p><small><a id="fn6x9" href="#fn6x9-bk"><sup>6</sup></a>Tversky and Kahneman, “Judgment Under Uncertainty.”</small></p>
<p><small><a id="fn7x9" href="#fn7x9-bk"><sup>7</sup></a>Stephen M. Omohundro, “Rational Artificial Intelligence for the Greater Good,” in <em>The Singularity Hypothesis: A Scientific and Philosophical Assessment</em>, ed. Amnon Eden et al. (Berlin: Springer, 2012), preprint at <a class="url" href="http://selfawaresystems.files.wordpress.com/2012/03/rational_ai_greater_good.pdf">http://selfawaresystems.files.wordpress.com/2012/03/rational_ai_greater_good.pdf</a>.</small></p>