It's the stuff of many a sci-fi book or movie - could robots one day become smart enough to overthrow us? Well, a group of the world's most eminent artificial intelligence experts have worked together to try and make sure that doesn't happen.
They've put together a set of 23 principles to guide future research into AI, which have since been endorsed by hundreds more professionals, including Stephen Hawking and SpaceX CEO Elon Musk.
Called the Asilomar AI Principles (after the beach in California, where they were thought up), the guidelines cover research issues, ethics and values, and longer-term issues - everything from how scientists should work with governments to how lethal weapons should be handled.
On that point: "An arms race in lethal autonomous weapons should be avoided," says principle 18. You can read the full list below.
"We hope that these principles will provide material for vigorous discussion and also aspirational goals for how the power of AI can be used to improve everyone's lives in coming years," write the organisers of the Beneficial AI 2017 conference, where the principles were worked out.
For a principle to be included, at least 90 percent of the 100+ conference attendees had to agree to it. Experts at the event included academics, engineers, and representatives from tech companies, including Google co-founder Larry Page.
Perhaps the most telling guideline is principle 23, entitled 'Common Good': "Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organisation."
Other principles in the list suggest that any AI allowed to self-improve must be strictly monitored, and that developments in the tech should be "shared broadly" and "benefit all of humanity".
"To think AI merely automates human decisions is like thinking electricity is just a replacement for candles," conference attendee Patrick Lin, from California Polytechnic State University, told George Dvorsky at Gizmodo.
"Given the massive potential for disruption and harm, as well as benefits, it makes sense to establish some norms early on to help steer research in a good direction, as opposed to letting a market economy that's fixated mostly on efficiency and profit... shape AI."
Meanwhile the principles also call for scientists to work closely with governments and lawmakers to make sure our society keeps pace with the development of AI.
The guidelines also rely on a certain amount of consensus about specific terms - such as what's beneficial to humankind and what isn't - but for the experts behind the list it's a question of getting something recorded at this early stage of AI research.
With artificial intelligence systems now beating us at poker and getting smart enough to spot skin cancers, there's a definite need to have guidelines and limits in place that researchers can work to.
And then we also need to decide what rights super-smart robots have when they're living among us.
For now the guidelines should give us some helpful pointers for the future.
"No current AI system is going to 'go rogue' and be dangerous, and it's important that people know that," conference attendee Anthony Aguirre, from the University of California, Santa Cruz, told Gizmodo.
"At the same time, if we envision a time when AIs exist that are as capable or more so than the smartest humans, it would be utterly naive to believe that the world will not fundamentally change."
"So how seriously we take AI's opportunities and risks has to scale with how capable it is, and having clear assessments and forecasts - without the press, industry or research hype that often accompanies advances - would be a good starting point."
The principles have been published by the Future of Life Institute.
You can see them in full and add your support over on their site.
All of which sounds very good to us - let's just hope the robots are listening.
Research issues
1. Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
2. Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:
- How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
- How can we grow our prosperity through automation while maintaining people's resources and purpose?
- How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
- What set of values should AI be aligned with, and what legal and ethical status should it have?
3. Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.
4. Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.
5. Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.
Ethics and values
6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviours can be assured to align with human values throughout their operation.
11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems' power to analyse and utilise that data.
13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people's real or perceived liberty.
14. Shared Benefit: AI technologies should benefit and empower as many people as possible.
15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
16. Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
17. Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
18. AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
Longer-term issues
19. Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
20. Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
21. Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
22. Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
23. Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organisation.
Original post: Experts have come up with 23 guidelines to avoid an AI apocalypse ... - ScienceAlert