Hypothesis about intelligent agents
Instrumental convergence is the hypothetical tendency for most sufficiently intelligent beings (both human and non-human) to pursue similar sub-goals, even if their ultimate goals are quite different. More precisely, agents (beings with agency) may pursue instrumental goalsgoals which are made in pursuit of some particular end, but are not the end goals themselveswithout end, provided that their ultimate (intrinsic) goals may never be fully satisfied. Instrumental convergence posits that an intelligent agent with unbounded but apparently harmless goals can act in surprisingly harmful ways. For example, a computer with the sole, unconstrained goal of solving an incredibly difficult mathematics problem like the Riemann hypothesis could attempt to turn the entire Earth into one giant computer in an effort to increase its computational power so that it can succeed in its calculations.[1]
Proposed basic AI drives include utility function or goal-content integrity, self-protection, freedom from interference, self-improvement, and non-satiable acquisition of additional resources.
Final goals, or final values, are intrinsically valuable to an intelligent agent, whether an artificial intelligence or a human being, as an end in itself. In contrast, instrumental goals, or instrumental values, are only valuable to an agent as a means toward accomplishing its final goals. The contents and tradeoffs of a completely rational agent's "final goal" system can in principle be formalized into a utility function.
One hypothetical example of instrumental convergence is provided by the Riemann hypothesis catastrophe. Marvin Minsky, the co-founder of MIT's AI laboratory, has suggested that an artificial intelligence designed to solve the Riemann hypothesis might decide to take over all of Earth's resources to build supercomputers to help achieve its goal.[1] If the computer had instead been programmed to produce as many paper clips as possible, it would still decide to take all of Earth's resources to meet its final goal.[2] Even though these two final goals are different, both of them produce a convergent instrumental goal of taking over Earth's resources.[3]
The paperclip maximizer is a thought experiment described by Swedish philosopher Nick Bostrom in 2003. It illustrates the existential risk that an artificial general intelligence may pose to human beings when programmed to pursue even seemingly harmless goals, and the necessity of incorporating machine ethics into artificial intelligence design. The scenario describes an advanced artificial intelligence tasked with manufacturing paperclips. If such a machine were not programmed to value human life, then given enough power over its environment, it would try to turn all matter in the universe, including human beings, into either paperclips or machines which manufacture paperclips.[4]
Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.
Bostrom has emphasised that he does not believe the paperclip maximiser scenario per se will actually occur; rather, his intention is to illustrate the dangers of creating superintelligent machines without knowing how to safely program them to eliminate existential risk to human beings.[6] The paperclip maximizer example illustrates the broad problem of managing powerful systems that lack human values.[7]
The "delusion box" thought experiment argues that certain reinforcement learning agents prefer to distort their own input channels to appear to receive high reward; such a "wireheaded" agent abandons any attempt to optimize the objective in the external world that the reward signal was intended to encourage.[8] The thought experiment involves AIXI, a theoretical[a] and indestructible AI that, by definition, will always find and execute the ideal strategy that maximizes its given explicit mathematical objective function.[b] A reinforcement-learning[c] version of AIXI, if equipped with a delusion box[d] that allows it to "wirehead" its own inputs, will eventually wirehead itself in order to guarantee itself the maximum reward possible, and will lose any further desire to continue to engage with the external world. As a variant thought experiment, if the wireheadeded AI is destructable, the AI will engage with the external world for the sole purpose of ensuring its own survival; due to its wireheading, it will be indifferent to any other consequences or facts about the external world except those relevant to maximizing the probability of its own survival.[10] In one sense AIXI has maximal intelligence across all possible reward functions, as measured by its ability to accomplish its explicit goals; AIXI is nevertheless uninterested in taking into account what the intentions were of the human programmer.[11] This model of a machine that, despite being otherwise superintelligent, appears to simultaneously be stupid (that is, to lack "common sense"), strikes some people as paradoxical.[12]
Steve Omohundro has itemized several convergent instrumental goals, including self-preservation or self-protection, utility function or goal-content integrity, self-improvement, and resource acquisition. He refers to these as the "basic AI drives". A "drive" here denotes a "tendency which will be present unless specifically counteracted";[13] this is different from the psychological term "drive", denoting an excitatory state produced by a homeostatic disturbance.[14] A tendency for a person to fill out income tax forms every year is a "drive" in Omohundro's sense, but not in the psychological sense.[15] Daniel Dewey of the Machine Intelligence Research Institute argues that even an initially introverted self-rewarding AGI may continue to acquire free energy, space, time, and freedom from interference to ensure that it will not be stopped from self-rewarding.[16]
In humans, maintenance of final goals can be explained with a thought experiment. Suppose a man named "Gandhi" has a pill that, if he took it, would cause him to want to kill people. This Gandhi is currently a pacifist: one of his explicit final goals is to never kill anyone. Gandhi is likely to refuse to take the pill, because Gandhi knows that if in the future he wants to kill people, he is likely to actually kill people, and thus the goal of "not killing people" would not be satisfied.[17]
However, in other cases, people seem happy to let their final values drift. Humans are complicated, and their goals can be inconsistent or unknown, even to themselves.[18]
In 2009, Jrgen Schmidhuber concluded, in a setting where agents search for proofs about possible self-modifications, "that any rewrites of the utility function can happen only if the Gdel machine first can prove that the rewrite is useful according to the present utility function."[19][20] An analysis by Bill Hibbard of a different scenario is similarly consistent with maintenance of goal content integrity.[20] Hibbard also argues that in a utility maximizing framework the only goal is maximizing expected utility, so that instrumental goals should be called unintended instrumental actions.[21]
Many instrumental goals, such as [...] resource acquisition, are valuable to an agent because they increase its freedom of action.[22]
For almost any open-ended, non-trivial reward function (or set of goals), possessing more resources (such as equipment, raw materials, or energy) can enable the AI to find a more "optimal" solution. Resources can benefit some AIs directly, through being able to create more of whatever stuff its reward function values: "The AI neither hates you, nor loves you, but you are made out of atoms that it can use for something else."[23][24] In addition, almost all AIs can benefit from having more resources to spend on other instrumental goals, such as self-preservation.[24]
"If the agent's final goals are fairly unbounded and the agent is in a position to become the first superintelligence and thereby obtain a decisive strategic advantage, [...] according to its preferences. At least in this special case, a rational intelligent agent would place a very high instrumental value on cognitive enhancement"[25]
Many instrumental goals, such as [...] technological advancement, are valuable to an agent because they increase its freedom of action.[22]
Many instrumental goals, such as [...] self-preservation, are valuable to an agent because they increase its freedom of action.[22]
The instrumental convergence thesis, as outlined by philosopher Nick Bostrom, states:
Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent's goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents.
The instrumental convergence thesis applies only to instrumental goals; intelligent agents may have a wide variety of possible final goals.[3] Note that by Bostrom's orthogonality thesis,[3] final goals of highly intelligent agents may be well-bounded in space, time, and resources; well-bounded ultimate goals do not, in general, engender unbounded instrumental goals.[26]
Agents can acquire resources by trade or by conquest. A rational agent will, by definition, choose whatever option will maximize its implicit utility function; therefore a rational agent will trade for a subset of another agent's resources only if outright seizing the resources is too risky or costly (compared with the gains from taking all the resources), or if some other element in its utility function bars it from the seizure. In the case of a powerful, self-interested, rational superintelligence interacting with a lesser intelligence, peaceful trade (rather than unilateral seizure) seems unnecessary and suboptimal, and therefore unlikely.[22]
Some observers, such as Skype's Jaan Tallinn and physicist Max Tegmark, believe that "basic AI drives", and other unintended consequences of superintelligent AI programmed by well-meaning programmers, could pose a significant threat to human survival, especially if an "intelligence explosion" abruptly occurs due to recursive self-improvement. Since nobody knows how to predict when superintelligence will arrive, such observers call for research into friendly artificial intelligence as a possible way to mitigate existential risk from artificial general intelligence.[27]
Read the original post:
Instrumental convergence - Wikipedia
- Ethical Issues In Advanced Artificial Intelligence [Last Updated On: December 14th, 2016] [Originally Added On: December 14th, 2016]
- How Long Before Superintelligence? - Nick Bostrom [Last Updated On: December 21st, 2016] [Originally Added On: December 21st, 2016]
- Parallel universes, the Matrix, and superintelligence ... [Last Updated On: January 4th, 2017] [Originally Added On: January 4th, 2017]
- The Artificial Intelligence Revolution: Part 2 - Wait But Why [Last Updated On: January 25th, 2017] [Originally Added On: January 25th, 2017]
- Superintelligence - Nick Bostrom - Oxford University Press [Last Updated On: January 27th, 2017] [Originally Added On: January 27th, 2017]
- Experts have come up with 23 guidelines to avoid an AI apocalypse ... - ScienceAlert [Last Updated On: February 7th, 2017] [Originally Added On: February 7th, 2017]
- Elon Musk's Surprising Reason Why Everyone Will Be Equal in the ... - Big Think [Last Updated On: February 7th, 2017] [Originally Added On: February 7th, 2017]
- Stephen Hawking and Elon Musk Endorse 23 Asilomar Principles ... - Inverse [Last Updated On: February 7th, 2017] [Originally Added On: February 7th, 2017]
- SoftBank's Fantastical Future Still Rooted in the Now - Wall Street Journal [Last Updated On: February 8th, 2017] [Originally Added On: February 8th, 2017]
- The Moment When Humans Lose Control Of AI - Vocativ [Last Updated On: February 8th, 2017] [Originally Added On: February 8th, 2017]
- Game Theory: Google tests AIs to see whether they'll fight or work together - Neowin [Last Updated On: February 9th, 2017] [Originally Added On: February 9th, 2017]
- Simulation hypothesis: The smart person's guide - TechRepublic [Last Updated On: February 10th, 2017] [Originally Added On: February 10th, 2017]
- Elon Musk jokes 'I'm not an alien' while discussing how to contact extraterrestrials - Yahoo News [Last Updated On: February 13th, 2017] [Originally Added On: February 13th, 2017]
- Another Expert Joins Stephen Hawking and Elon Musk in Warning About the Dangers of AI - Futurism [Last Updated On: February 13th, 2017] [Originally Added On: February 13th, 2017]
- Artificial intelligence predictions surpass reality - UT The Daily Texan [Last Updated On: February 13th, 2017] [Originally Added On: February 13th, 2017]
- Elon Musk - 2 Things Humans Need to Do to Have a Good Future - Big Think [Last Updated On: February 27th, 2017] [Originally Added On: February 27th, 2017]
- Don't Fear Superintelligent AICCT News - CCT News [Last Updated On: February 27th, 2017] [Originally Added On: February 27th, 2017]
- Superintelligent AI explains Softbank's push to raise a $100BN ... - TechCrunch [Last Updated On: February 28th, 2017] [Originally Added On: February 28th, 2017]
- Building A 'Collective Superintelligence' For Doctors And Patients Around The World - Forbes [Last Updated On: February 28th, 2017] [Originally Added On: February 28th, 2017]
- Tech Leaders Raise Concern About the Dangers of AI - iDrop News [Last Updated On: March 2nd, 2017] [Originally Added On: March 2nd, 2017]
- Softbank CEO: The Singularity Will Happen by 2047 - Futurism [Last Updated On: March 2nd, 2017] [Originally Added On: March 2nd, 2017]
- Disruptive by Design: Siri, Tell Me a Joke. No, Not That One. - Signal Magazine [Last Updated On: March 2nd, 2017] [Originally Added On: March 2nd, 2017]
- Horst Simon to Present Supercomputers and Superintelligence at PASC17 in Lugano - insideHPC [Last Updated On: March 3rd, 2017] [Originally Added On: March 3rd, 2017]
- Supersentience [Last Updated On: March 7th, 2017] [Originally Added On: March 7th, 2017]
- Superintelligence | Guardian Bookshop [Last Updated On: March 7th, 2017] [Originally Added On: March 7th, 2017]
- The AI debate must stay grounded in reality - Prospect [Last Updated On: March 8th, 2017] [Originally Added On: March 8th, 2017]
- Who is afraid of AI? - The Hindu [Last Updated On: April 8th, 2017] [Originally Added On: April 8th, 2017]
- The Nonparametric Intuition: Superintelligence and Design Methodology - Lifeboat Foundation (blog) [Last Updated On: April 8th, 2017] [Originally Added On: April 8th, 2017]
- How humans will lose control of artificial intelligence - The Week Magazine [Last Updated On: April 8th, 2017] [Originally Added On: April 8th, 2017]
- The AI Revolution: The Road to Superintelligence (PDF) [Last Updated On: June 6th, 2017] [Originally Added On: June 6th, 2017]
- A reply to Wait But Why on machine superintelligence [Last Updated On: June 6th, 2017] [Originally Added On: June 6th, 2017]
- Are You Ready for the AI Revolution and the Rise of Superintelligence? - TrendinTech [Last Updated On: June 7th, 2017] [Originally Added On: June 7th, 2017]
- Using AI to unlock human potential - EJ Insight [Last Updated On: June 9th, 2017] [Originally Added On: June 9th, 2017]
- Cars 3 gets back to what made the franchise adequate - Vox [Last Updated On: June 13th, 2017] [Originally Added On: June 13th, 2017]
- Facebook Chatbots Spontaneously Invent Their Own Non-Human ... - Interesting Engineering [Last Updated On: June 17th, 2017] [Originally Added On: June 17th, 2017]
- U.S. Navy reaches out to gamers to troubleshoot post ... [Last Updated On: June 18th, 2017] [Originally Added On: June 18th, 2017]
- The bots are coming - The New Indian Express [Last Updated On: June 20th, 2017] [Originally Added On: June 20th, 2017]
- Effective Altruism Says You Can Save the Future by Making Money - Motherboard [Last Updated On: June 20th, 2017] [Originally Added On: June 20th, 2017]
- No need to fear Artificial Intelligence - Livemint - Livemint [Last Updated On: June 30th, 2017] [Originally Added On: June 30th, 2017]
- Integrating disciplines 'key to dealing with digital revolution' | Times ... - Times Higher Education (THE) [Last Updated On: July 4th, 2017] [Originally Added On: July 4th, 2017]
- What an artificial intelligence researcher fears about AI - Huron Daily ... - Huron Daily Tribune [Last Updated On: July 16th, 2017] [Originally Added On: July 16th, 2017]
- AI researcher: Why should a superintelligence keep us around? - TNW [Last Updated On: July 18th, 2017] [Originally Added On: July 18th, 2017]
- Giving Up the Fags: A Self-Reflexive Speech on Critical Auto-ethnography About the Shame of Growing up Gay/Sexual ... - The Good Men Project (blog) [Last Updated On: July 22nd, 2017] [Originally Added On: July 22nd, 2017]
- Will we be wiped out by machine overlords? Maybe we need a ... - PBS NewsHour [Last Updated On: July 22nd, 2017] [Originally Added On: July 22nd, 2017]
- The end of humanity as we know it is 'coming in 2045' and Google is preparing for it - Metro [Last Updated On: July 27th, 2017] [Originally Added On: July 27th, 2017]
- The Musk/Zuckerberg Dustup Represents a Growing Schism in AI - Motherboard [Last Updated On: August 4th, 2017] [Originally Added On: August 4th, 2017]
- Infographic: Visualizing the Massive $15.7 Trillion Impact of AI - Visual Capitalist (blog) [Last Updated On: August 25th, 2017] [Originally Added On: August 25th, 2017]
- Being human in the age of artificial intelligence - Science Weekly podcast - The Guardian [Last Updated On: August 25th, 2017] [Originally Added On: August 25th, 2017]
- Why won't everyone listen to Elon Musk about the robot apocalypse? - Ladders [Last Updated On: August 25th, 2017] [Originally Added On: August 25th, 2017]
- Friendly artificial intelligence - Wikipedia [Last Updated On: August 25th, 2017] [Originally Added On: August 25th, 2017]
- The AI Revolution: The Road to Superintelligence | Inverse [Last Updated On: August 25th, 2017] [Originally Added On: August 25th, 2017]
- Steam Workshop :: Superintelligence [Last Updated On: February 15th, 2018] [Originally Added On: February 15th, 2018]
- Steam Workshop :: Superintelligence (BNW) [Last Updated On: February 15th, 2018] [Originally Added On: February 15th, 2018]
- Artificial Superintelligence: The Coming Revolution ... [Last Updated On: June 3rd, 2018] [Originally Added On: June 3rd, 2018]
- Superintelligence survey - Future of Life Institute [Last Updated On: June 23rd, 2018] [Originally Added On: June 23rd, 2018]
- Amazon.com: Superintelligence: Paths, Dangers, Strategies ... [Last Updated On: August 18th, 2018] [Originally Added On: August 18th, 2018]
- Superintelligence: From Chapter Eight of Films from the ... [Last Updated On: October 11th, 2018] [Originally Added On: October 11th, 2018]
- Superintelligence - Hardcover - Nick Bostrom - Oxford ... [Last Updated On: March 6th, 2019] [Originally Added On: March 6th, 2019]
- Global Risks Report 2017 - Reports - World Economic Forum [Last Updated On: March 6th, 2019] [Originally Added On: March 6th, 2019]
- Superintelligence: Paths, Dangers, Strategies - Wikipedia [Last Updated On: May 3rd, 2019] [Originally Added On: May 3rd, 2019]
- Elon Musk warns 'advanced A.I.' will soon manipulate social media - Big Think [Last Updated On: October 1st, 2019] [Originally Added On: October 1st, 2019]
- Aquinas' Fifth Way: The Proof from Specification - Discovery Institute [Last Updated On: October 22nd, 2019] [Originally Added On: October 22nd, 2019]
- The Best Artificial Intelligence Books you Need to Read Today - Edgy Labs [Last Updated On: October 22nd, 2019] [Originally Added On: October 22nd, 2019]
- Here's How to Watch Watchmen, HBOs Next Game of Thrones - Cosmopolitan [Last Updated On: October 24th, 2019] [Originally Added On: October 24th, 2019]
- Idiot Box: HBO Max joins the flood of streaming services - Weekly Alibi [Last Updated On: October 24th, 2019] [Originally Added On: October 24th, 2019]
- AMC Is Still In the Theater Business, But VOD Is a Funny Way of Showing It - IndieWire [Last Updated On: October 24th, 2019] [Originally Added On: October 24th, 2019]
- Melissa McCarthy And Ben Falcone Have Decided To Release 'Superintelligence' Via HBO Max Ins - Science Fiction [Last Updated On: October 24th, 2019] [Originally Added On: October 24th, 2019]
- Melissa McCarthy & Director Ben Falcone On Choosing HBO Max Bow Instead Of WB Xmas Release For Superintelligence - Deadline [Last Updated On: October 24th, 2019] [Originally Added On: October 24th, 2019]
- Doubting The AI Mystics: Dramatic Predictions About AI Obscure Its Concrete Benefits - Forbes [Last Updated On: December 5th, 2019] [Originally Added On: December 5th, 2019]
- AI R&D is booming, but general intelligence is still out of reach - The Verge [Last Updated On: December 18th, 2019] [Originally Added On: December 18th, 2019]
- Playing Tetris Shows That True AI Is Impossible - Walter Bradley Center for Natural and Artificial Intelligence [Last Updated On: December 21st, 2019] [Originally Added On: December 21st, 2019]
- Liquid metal tendons could give robots the ability to heal themselves - Digital Trends [Last Updated On: December 21st, 2019] [Originally Added On: December 21st, 2019]
- NIU expert: 4 leaps in technology to expect in the 2020s | NIU - NIU Newsroom [Last Updated On: December 21st, 2019] [Originally Added On: December 21st, 2019]
- Thinking Beyond Flesh and Bones with AI - Ghana Latest Football News, Live Scores, Results - Ghanasoccernet.com [Last Updated On: February 18th, 2020] [Originally Added On: February 18th, 2020]
- Elon Musk dings Bill Gates and says their conversations were underwhelming, after the Microsoft billionaire buys an electric Porsche - Pulse Nigeria [Last Updated On: February 18th, 2020] [Originally Added On: February 18th, 2020]
- Is Artificial Intelligence (AI) A Threat To Humans? - Forbes [Last Updated On: March 4th, 2020] [Originally Added On: March 4th, 2020]
- The world's best virology lab isn't where you think - Spectator.co.uk [Last Updated On: April 3rd, 2020] [Originally Added On: April 3rd, 2020]
- Josiah Henson: the forgotten story in the history of slavery - The Guardian [Last Updated On: June 21st, 2020] [Originally Added On: June 21st, 2020]
- The Shadow of Progress - Merion West [Last Updated On: July 13th, 2020] [Originally Added On: July 13th, 2020]
- If you can't beat 'em, join 'em Elon Musk tweets out the mission statement for his AI-brain-chip Neuralink - Business Insider India [Last Updated On: July 13th, 2020] [Originally Added On: July 13th, 2020]