The potential benefits and possible downsides of artificial intelligence (AI) and artificial general intelligence (AGI) have been widely discussed lately, largely due to advances in large language models such as OpenAI's ChatGPT.
Some in the industry have called for AI research to be paused, or even shut down entirely, citing the possible existential risk to humanity if we sleepwalk into creating a superintelligence before we have found a way to limit its influence and control its goals.
While you might picture an AI hell-bent on destroying humanity after discovering videos of us shoving around and generally bullying Boston Dynamics robots, one philosopher, the director of the Future of Humanity Institute at the University of Oxford, believes our demise could come from a much simpler AI: one designed to manufacture paperclips.
Nick Bostrom, famous for the simulation hypothesis as well as his work in AI and AI ethics, proposed a scenario in which an advanced AI is given the simple goal of making as many paperclips as it possibly can. While this may seem an innocuous goal (Bostrom chose this example because of how innocent the aim seems), he explains how this non-specific goal could lead to a good old-fashioned skull-crushing AI apocalypse.
"The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off," he explained to HuffPost in 2014. "Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans."
The example is meant to show how a trivial goal could lead to unintended consequences, but Bostrom says the lesson extends to any AI given goals without proper controls on its actions, adding "the point is its actions would pay no heed to human welfare".
This is on the dramatic end of the spectrum, but another possibility proposed by Bostrom is that we go out the way of the horse.
"Horses were initially complemented by carriages and ploughs, which greatly increased the horse's productivity. Later, horses were substituted for by automobiles and tractors," he wrote in his book Superintelligence: Paths, Dangers, Strategies. "When horses became obsolete as a source of labor, many were sold off to meatpackers to be processed into dog food, bone meal, leather, and glue. In the United States, there were about 26 million horses in 1915. By the early 1950s, 2 million remained."
One prescient thought from Bostrom, way back in 2003, concerned how AI could go wrong by serving specific groups, say a paperclip manufacturer or any "owner" of the AI, rather than humanity in general.
"The risks in developing superintelligence include the risk of failure to give it the supergoal of philanthropy. One way in which this could happen is that the creators of the superintelligence decide to build it so that it serves only this select group of humans, rather than humanity in general," he wrote on his website. "Another way for it to happen is that a well-meaning team of programmers make a big mistake in designing its goal system."
"This could result, to return to the earlier example, in a superintelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities. More subtly, it could result in a superintelligence realizing a state of affairs that we might now judge as desirable but which in fact turns out to be a false utopia, in which things essential to human flourishing have been irreversibly lost. We need to be careful about what we wish for from a superintelligence, because we might get it."