{"id":217,"date":"2012-01-01T08:38:19","date_gmt":"2012-01-01T16:38:19","guid":{"rendered":"http:\/\/facingthesing.wpengine.com\/?page_id=217"},"modified":"2022-10-10T17:42:55","modified_gmt":"2022-10-11T00:42:55","slug":"glossary","status":"publish","type":"page","link":"https:\/\/intelligenceexplosion.com\/sv\/glossary\/","title":{"rendered":"Ordlista"},"content":{"rendered":"<p class=\"qtranxs-available-languages-message qtranxs-available-languages-message-sv\">Tyv\u00e4rr \u00e4r denna artikel enbart tillg\u00e4nglig p\u00e5 <a href=\"https:\/\/intelligenceexplosion.com\/en\/wp-json\/wp\/v2\/pages\/217\" class=\"qtranxs-available-language-link qtranxs-available-language-link-en\" title=\"English\">English<\/a>, <a href=\"https:\/\/intelligenceexplosion.com\/fr\/wp-json\/wp\/v2\/pages\/217\" class=\"qtranxs-available-language-link qtranxs-available-language-link-fr\" title=\"Fran\u00e7ais\">Fran\u00e7ais<\/a> och <a href=\"https:\/\/intelligenceexplosion.com\/zh\/wp-json\/wp\/v2\/pages\/217\" class=\"qtranxs-available-language-link qtranxs-available-language-link-zh\" title=\"\u4e2d\u6587\">\u4e2d\u6587<\/a>.<\/p><p><strong>artificial intelligence (AI).<\/strong>\u00a0A machine intelligence, or the field which studies intelligent machines. &#8221;Narrow AI&#8221; displays intelligence only in a narrow domain, like chess or arithmetic. &#8221;<a href=\"http:\/\/en.wikipedia.org\/wiki\/Strong_ai\">Strong AI<\/a>&#8221; or &#8221;<strong>artificial general intelligence (AGI)<\/strong>&#8221; matches or exceeds human intelligence, including the ability to solve problems in a wide variety of environments. On this website, &#8221;AI&#8221; means &#8221;AGI&#8221; unless otherwise specified.<\/p>\n<p><strong>cognitive bias.<\/strong>\u00a0An obstacle to truth <a href=\"http:\/\/lesswrong.com\/lw\/gp\/whats_a_bias_again\/\">produced by one&#8217;s mental machinery<\/a>. (Other obstacles to truth include the cost of information and computation.) 
Many such obstacles are common and predictable enough that they have <a href=\"http:\/\/wiki.lesswrong.com\/wiki\/Bias#Blog_posts_about_known_cognitive_biases\">names<\/a>, like the <a href=\"http:\/\/lesswrong.com\/lw\/ji\/conjunction_fallacy\/\">conjunction fallacy<\/a> and the <a href=\"http:\/\/lesswrong.com\/lw\/lg\/the_affect_heuristic\/\">affect heuristic<\/a>.<\/p>\n<p><strong>decision theory.<\/strong>\u00a0The study of correct decisions. A core idea is <em>expected utility maximization<\/em>: an agent should choose the action that maximizes its expected utility. Open problems in decision theory arise from <a href=\"http:\/\/wiki.lesswrong.com\/wiki\/Decision_theory#Thought_experiments\">thought experiments<\/a> in which idealized situations challenge the applicability of current decision theories.<\/p>\n<p><strong>expected utility.<\/strong> <a href=\"http:\/\/en.wikipedia.org\/wiki\/Expected_value\">Expected value<\/a> (in utility).<\/p>\n<p><strong>expected value.<\/strong>\u00a0The average value of all the possible outcomes of an event, each outcome weighted by its probability. Suppose you&#8217;re about to roll a die, and you&#8217;ll win (in dollars) twice the number you roll, unless you roll a 6, in which case you&#8217;ll win 4 times the number you roll. The expected value of rolling the die is [2($1) + 2($2) + 2($3) + 2($4) + 2($5) + 4($6)]\/6 = $9.<\/p>\n<p><strong>intelligence.<\/strong>\u00a0<a href=\"http:\/\/facingthesing.wpengine.com\/2011\/playing-taboo-with-intelligence\/\">Efficient cross-domain optimization<\/a>. Intelligence is one&#8217;s ability to efficiently use available resources to shape the world in accordance with one&#8217;s preferences, in a wide variety of environments.<\/p>\n<p><strong>probability theory.<\/strong>\u00a0The study of probability. 
On this website, &#8220;probability&#8221; refers to <em>degrees of belief or uncertainty<\/em>, so &#8220;probability theory&#8221; means <em><a href=\"http:\/\/wiki.lesswrong.com\/wiki\/Bayesian_probability\">Bayesian<\/a><\/em>\u00a0probability theory, which <a href=\"http:\/\/facingthesing.wpengine.com\/2011\/the-laws-of-thought\/\">follows from the laws of logic<\/a>. A core rule of probability theory is\u00a0<a href=\"http:\/\/yudkowsky.net\/rational\/bayes\">Bayes&#8217; Theorem<\/a>.<\/p>\n<p><strong>rationality.<\/strong>\u00a0<a href=\"http:\/\/lesswrong.com\/lw\/7i\/rationality_is_systematized_winning\/\">Systematized winning<\/a>. Sometimes called &#8220;<a href=\"http:\/\/facingthesing.wpengine.com\/2011\/from-skepticism-to-technical-rationality\/\">technical rationality<\/a>&#8221; to distinguish it from &#8220;<a href=\"http:\/\/facingthesing.wpengine.com\/2011\/why-spock-is-not-rational\/\">Hollywood rationality<\/a>.&#8221; &#8220;<strong>Epistemic rationality<\/strong>&#8221; is the <a href=\"http:\/\/lesswrong.com\/lw\/31\/what_do_we_mean_by_rationality\/\">craft<\/a> of obtaining true beliefs, <em>i.e.<\/em> making optimal belief updates according to\u00a0<a href=\"http:\/\/facingthesing.wpengine.com\/2011\/the-laws-of-thought\/\">the laws of logic and probability theory<\/a>. &#8220;<strong>Instrumental rationality<\/strong>&#8221; is the <a href=\"http:\/\/lesswrong.com\/lw\/31\/what_do_we_mean_by_rationality\/\">craft<\/a> of achieving one&#8217;s goals, <em>i.e.<\/em> making optimal choices in accord with <a href=\"http:\/\/facingthesing.wpengine.com\/2011\/the-laws-of-thought\/\">the laws of decision theory<\/a>.<\/p>\n<p><strong>utility.<\/strong>\u00a0A numerical measure of preference or value. In a utility function, outcomes with higher utilities are preferred to outcomes with lower utilities.<\/p>\n<p><strong>utility function. 
<\/strong>A\u00a0<a href=\"http:\/\/en.wikipedia.org\/wiki\/Function_(mathematics)\">function<\/a> that assigns utilities to outcomes. Outcomes with higher utilities are preferred to outcomes with lower utilities. Humans <a href=\"http:\/\/lesswrong.com\/lw\/8a7\/review_of_sharot_dolan_neuroscience_of_preference\/\">do not have coherent utility functions<\/a>, <a href=\"http:\/\/lesswrong.com\/lw\/l3\/thou_art_godshatter\/\">nor would we expect them to<\/a>, given how they evolved, which is why they have <a href=\"http:\/\/lesswrong.com\/lw\/5sk\/inferring_our_desires\/\">so much trouble<\/a> <a href=\"http:\/\/lesswrong.com\/lw\/zv\/post_your_utility_function\/\">guessing<\/a> their own utility functions.<\/p>","protected":false},"excerpt":{"rendered":"<p>Unfortunately, this article is only available in English, Fran\u00e7ais and \u4e2d\u6587. artificial intelligence (AI).\u00a0A machine intelligence, or the field that studies intelligent machines. &#8220;Narrow AI&#8221; displays intelligence only in a narrow domain, like chess or arithmetic. 
&#8220;Strong AI&#8221; or &#8220;artificial general&hellip;  <a href=\"https:\/\/intelligenceexplosion.com\/sv\/glossary\/\">continue reading<\/a> &raquo;<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"class_list":["post-217","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/intelligenceexplosion.com\/sv\/wp-json\/wp\/v2\/pages\/217","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/intelligenceexplosion.com\/sv\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/intelligenceexplosion.com\/sv\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/intelligenceexplosion.com\/sv\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/intelligenceexplosion.com\/sv\/wp-json\/wp\/v2\/comments?post=217"}],"version-history":[{"count":0,"href":"https:\/\/intelligenceexplosion.com\/sv\/wp-json\/wp\/v2\/pages\/217\/revisions"}],"wp:attachment":[{"href":"https:\/\/intelligenceexplosion.com\/sv\/wp-json\/wp\/v2\/media?parent=217"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}