University of Oxford
Oli Scarff/Getty Images
Oxford and Cambridge, the oldest universities in Britain and two of the oldest in the world, are keeping a watchful eye on the buzzy field of artificial intelligence (AI), which has been hailed as a technology that will bring about a new industrial revolution and change the world as we know it.
Over the last few years, each of the centuries-old institutions has pumped millions of pounds into researching the possible risks associated with machines of the future.
Clever algorithms can already outperform humans at certain tasks. For example, they can beat the best human players in the world at incredibly complex games like chess and Go, and they're able to spot cancerous tumors in a mammogram far quicker than a human clinician can. Machines can also tell the difference between a cat and a dog, or determine a random person's identity just by looking at a photo of their face. They can also translate languages, drive cars, and keep your home at the right temperature. But generally speaking, they're still nowhere near as smart as the average 7-year-old.
The main issue is that AI can't multitask. For example, a game-playing AI can't yet paint a picture. In other words, AI today is very "narrow" in its intelligence. However, computer scientists at the likes of Google and Facebook are aiming to make AI more "general" in the years ahead, and that's got some big thinkers deeply concerned.
Nick Bostrom, a 47-year-old Swedish-born philosopher and polymath, founded the Future of Humanity Institute (FHI) at the University of Oxford in 2005 to assess how dangerous AI and other potential threats might be to the human species.
In the main foyer of the institute, complex equations beyond most people's comprehension are scribbled on whiteboards next to words like "AI safety" and "AI governance." Pensive students from other departments pop in and out as they go about their daily routines.
It's rare to get an interview with Bostrom, a transhumanist who believes that we can and should augment our bodies with technology to help eliminate ageing as a cause of death.
"I'm quite protective about research and thinking time so I'm kind of semi-allergic to scheduling too many meetings," he says.
Tall, skinny and clean-shaven, Bostrom has riled some AI researchers with his willingness to entertain the idea that one day in the not-so-distant future, machines will be the top dog on Earth. He doesn't go as far as to say when that day will be, but he thinks that it's potentially close enough for us to be worrying about it.
Swedish philosopher Nick Bostrom is a polymath and the author of "Superintelligence."
The Future of Humanity Institute
If and when machines possess human-level artificial general intelligence, Bostrom thinks they could quickly go on to make themselves even smarter and become superintelligent. At this point, it's anyone's guess what happens next.
The optimist says the superintelligent machines will free up humans from work and allow them to live in some sort of utopia where there's an abundance of everything they could ever desire. The pessimist says they'll decide humans are no longer necessary and wipe them all out.

Billionaire Elon Musk, who has a complex relationship with AI researchers, recommended Bostrom's book "Superintelligence" on Twitter.
Bostrom's institute has been backed with roughly $20 million since its inception. Around $14 million of that has come from the Open Philanthropy Project, a San Francisco-headquartered research and grant-making foundation. The rest of the money has come from the likes of Musk and the European Research Council.
Located in an unassuming building down a winding road off Oxford's main shopping street, the institute is full of mathematicians, computer scientists, physicians, neuroscientists, philosophers, engineers and political scientists.
Eccentric thinkers from all over the world come here to have conversations over cups of tea about what might lie ahead. "A lot of people have some kind of polymath profile and they are often interested in more than one field," says Bostrom.
The FHI team has scaled from four people to about 60 people over the years. "In a year, or a year and a half, we will be approaching 100 (people)," says Bostrom. The culture at the institute is a blend of academia, start-up and NGO, according to Bostrom, who says it results in an "interesting creative space of possibilities" where there is "a sense of mission and urgency."
If AI somehow became much more powerful, there are three main ways in which it could end up causing harm, according to Bostrom: the AI itself could do something bad to humans; humans could do something bad to one another using AI; or humans could do bad things to the AI (a scenario in which the AI would have some sort of moral status).
"Each of these categories is a plausible place where things could go wrong," says Bostrom.
With regards to machines turning against humans, Bostrom says that if AI becomes really powerful then "there's a potential risk from the AI itself that it does something different than anybody intended that could then be detrimental."
In terms of humans doing bad things to other humans with AI, there's already a precedent: humans have used other technological discoveries for the purpose of war or oppression, the atomic bombings of Hiroshima and Nagasaki being a stark example. Figuring out how to reduce the risk of this happening with AI is worthwhile, Bostrom says, adding that it's easier said than done.
Asked if he is more or less worried about the arrival of superintelligent machines than he was when his book was published in 2014, Bostrom says the timelines have contracted.
"I think progress has been faster than expected over the last six years with the whole deep learning revolution and everything," he says.
When Bostrom wrote the book, there weren't many people in the world seriously researching the potential dangers of AI. "Now there is this small, but thriving field of AI safety work with a number of groups," he says.
While there's potential for things to go wrong, Bostrom says it's important to remember that there are exciting upsides to AI and he doesn't want to be viewed as the person predicting the end of the world.
"I think there is now less need to emphasize primarily the downsides of AI," he says, stressing that his views on AI are complex and multifaceted.
Bostrom says the aim of FHI is "to apply careful thinking to big picture questions for humanity." The institute is not just looking at the next year or the next 10 years, it's looking at everything in perpetuity.
"AI has been an interest since the beginning and for me, I mean, all the way back to the 90s," says Bostrom. "It is a big focus, you could say obsession almost."
In Bostrom's view, the rise of technology is one of several plausible forces that could change the "human condition." AI is one of those technologies, but there are groups at the FHI looking at biosecurity (viruses, etc.), molecular nanotechnology, surveillance tech, genetics, and biotech (human enhancement).
A scene from 'Ex Machina.'
Source: Universal Pictures | YouTube
When it comes to AI, the FHI has two groups: one does technical work on the AI alignment problem, and the other looks at governance issues that will arise as machine intelligence becomes increasingly powerful.
The AI alignment group is developing algorithms and trying to figure out how to ensure complex intelligent systems behave as we intend them to behave. That involves aligning them with "human preferences," says Bostrom.
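What might that work look like in practice? One well-known technique in this area is learning a reward model from pairwise human feedback. The short Python sketch below illustrates the general idea with synthetic toy data; it is a minimal illustration of the approach, not FHI's actual research code, and every name and number in it is invented for the example.

```python
# A minimal sketch of learning a reward model from pairwise human
# preferences (a Bradley-Terry style model). All data here is synthetic
# and hypothetical; this illustrates the idea, nothing more.
import numpy as np

rng = np.random.default_rng(0)

# Each candidate behaviour is a feature vector; the "true" human
# preference weights are hidden from the learner.
true_w = np.array([2.0, -1.0, 0.5])
options = rng.normal(size=(200, 3))

def reward(w, x):
    # Linear reward model: a higher score means more preferred.
    return x @ w

# Simulate pairwise feedback: for each pair (a, b), record whether the
# human preferred option a over option b.
pairs = rng.integers(0, len(options), size=(500, 2))
prefs = (reward(true_w, options[pairs[:, 0]]) >
         reward(true_w, options[pairs[:, 1]])).astype(float)

# Fit the weights by gradient ascent on the log-likelihood of the
# observed choices, where P(a preferred over b) = sigmoid(r(a) - r(b)).
w = np.zeros(3)
for _ in range(2000):
    diff = reward(w, options[pairs[:, 0]]) - reward(w, options[pairs[:, 1]])
    p = 1.0 / (1.0 + np.exp(-diff))
    grad = ((prefs - p)[:, None] *
            (options[pairs[:, 0]] - options[pairs[:, 1]])).sum(axis=0)
    w += 0.01 * grad / len(pairs)

print("recovered direction:", w / np.linalg.norm(w))
print("true direction:     ", true_w / np.linalg.norm(true_w))
```

Because each judgment only says which of two options a human preferred, the learner can recover the direction of the hidden preference weights, not their scale; that is one small, concrete piece of the much larger alignment puzzle.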
Roughly 66 miles away at the University of Cambridge, academics are also looking at threats to human existence, albeit through a slightly different lens.
Researchers at the Centre for the Study of Existential Risk (CSER) are assessing biological weapons, pandemics, and, of course, AI.
We are dedicated to the study and mitigation of risks that could lead to human extinction or civilisational collapse.
Centre for the Study of Existential Risk (CSER)
"One of the most active areas of activities has been on AI," said CSER co-founder Lord Martin Rees from his sizable quarters at Trinity College in an earlier interview.
Rees, a renowned cosmologist and astrophysicist who was the president of the prestigious Royal Society from 2005 to 2010, is retired so his CSER role is voluntary, but he remains highly involved.
It's important that any algorithm deciding the fate of human beings can be explained to human beings, according to Rees. "If you are put in prison or deprived of your credit by some algorithm then you are entitled to have an explanation so you can understand. Of course, that's the problem at the moment because the remarkable thing about these algorithms like AlphaGo (Google DeepMind's Go-playing algorithm) is that the creators of the program don't understand how it actually operates. This is a genuine dilemma and they're aware of this."
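To see the contrast Rees is drawing, consider a deliberately simple model. The hypothetical Python sketch below (the credit data and approval rule are invented for illustration) trains a shallow decision tree whose entire decision logic can be printed as readable if/else rules, exactly the kind of explanation a rejected applicant could be given; a deep network like AlphaGo's offers no such readout.

```python
# A toy illustration of the explainability point Rees makes: a shallow
# decision tree can state the exact rule behind each decision. The
# "credit" data and the approval rule below are entirely made up.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
features = ["income", "debt_ratio", "missed_payments"]

# Hypothetical applicants and a simple hidden rule for approval.
X = np.column_stack([
    rng.uniform(15_000, 90_000, 300),   # income
    rng.uniform(0.0, 1.0, 300),         # debt_ratio
    rng.integers(0, 6, 300),            # missed_payments
])
y = ((X[:, 0] > 35_000) & (X[:, 1] < 0.5) & (X[:, 2] < 3)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The whole model prints as human-readable if/else rules: the kind of
# explanation a rejected applicant could actually be given.
print(export_text(tree, feature_names=features))

applicant = np.array([[28_000.0, 0.6, 1.0]])
print("decision:", "approved" if tree.predict(applicant)[0] == 1 else "rejected")
```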
The idea for CSER was conceived in the summer of 2011 during a conversation in the back of a Copenhagen cab between Cambridge academic Huw Price and Skype co-founder Jaan Tallinn, whose donations account for 7-8% of the center's overall funding and equate to hundreds of thousands of pounds.
"I shared a taxi with a man who thought his chance of dying in an artificial intelligence-related accident was as high as that of heart disease or cancer," Price wrote of his taxi ride with Tallinn. "I'd never met anyone who regarded it as such a pressing cause for concern let alone anyone with their feet so firmly on the ground in the software business."
University of Cambridge
Geography Photos/UIG via Getty Images
CSER is studying how AI could be used in warfare, as well as analyzing some of the longer term concerns that people like Bostrom have written about. It is also looking at how AI can turbocharge climate science and agricultural food supply chains.
"We try to look at both the positives and negatives of the technology because our real aim is making the world more secure," says Sen higeartaigh, executive director at CSER and a former colleague of Bostrom's. higeartaigh, who holds a PhD in genomics from Trinity College Dublin, says CSER currently has three joint projects on the go with FHI.
External advisors include Bostrom and Musk, as well as other AI experts like Stuart Russell and DeepMind's Murray Shanahan. The late Stephen Hawking was also an advisor.
The Leverhulme Centre for the Future of Intelligence (CFI) was opened at Cambridge in 2016 and today it sits in the same building as CSER, a stone's throw from the punting boats on the River Cam. The building isn't the only thing the centers share: staff overlap too, and there's a lot of research that spans both departments.
Backed with over £10 million from the grant-making Leverhulme Trust, the center is designed to support "innovative blue skies thinking," according to Ó hÉigeartaigh, its co-developer.
Was there really a need for another one of these research centers? Ó hÉigeartaigh thinks so. "It was becoming clear that there would be, as well as the technical opportunities and challenges, legal topics to explore, economic topics, social science topics," he says.
"How do we make sure that artificial intelligence benefits everyone in a global society? You look at issues like who's involved in the development process? Who is consulted? How does the governance work? How do we make sure that marginalized communities have a voice?"
The aim of CFI is to get computer scientists and machine-learning experts working hand in hand with people from policy, social science, risk and governance, ethics, culture, critical theory and so on. As a result, the center should be able to take a broad view of the range of opportunities and challenges that AI poses to societies.
"By bringing together people who think about these things from different angles, we're able to figure out what might be properly plausible scenarios that are worth trying to mitigate against," said higeartaigh.
Original post:
How Britain's oldest universities are trying to protect humanity from risky A.I. - CNBC