Superintelligence: The Idea That Eats Smart People
In 1945, as American physicists were preparing to test the atomic bomb, it occurred to someone to ask if such a test could set the atmosphere on fire.
This was a legitimate concern. Nitrogen, which makes up most of the atmosphere, is not energetically stable. Smush two nitrogen atoms together hard enough and they will fuse into an atom of magnesium and an alpha particle, releasing a whole lot of energy:
N¹⁴ + N¹⁴ → Mg²⁴ + α + 17.7 MeV
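As a back-of-the-envelope aside of my own, using modern atomic mass values rather than the tables the Los Alamos physicists had, that figure is just the mass defect of the reaction converted to energy:

\[
Q = \left[\,2\,m(^{14}\mathrm{N}) - m(^{24}\mathrm{Mg}) - m(^{4}\mathrm{He})\,\right]c^{2}
  \approx (28.0061 - 23.9850 - 4.0026)\,\mathrm{u} \times 931.5\ \mathrm{MeV/u}
  \approx 17\ \mathrm{MeV}
\]

which lands in the same ballpark as the figure quoted above.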
The vital question was whether this reaction could be self-sustaining. The temperature inside the nuclear fireball would be hotter than any event in the Earth's history. Were we throwing a match into a bunch of dry leaves?
Los Alamos physicists performed the analysis and decided there was a satisfactory margin of safety. Since we're all attending this conference today, we know they were right. They had confidence in their predictions because the laws governing nuclear reactions were straightforward and fairly well understood.
Today we're building another world-changing technology, machine intelligence. We know that it will affect the world in profound ways, change how the economy works, and have knock-on effects we can't predict.
But there's also the risk of a runaway reaction, where a machine intelligence reaches and exceeds human levels of intelligence in a very short span of time.
At that point, social and economic problems would be the least of our worries. Any hyperintelligent machine (the argument goes) would have its own hypergoals, and would work to achieve them by manipulating humans, or simply using their bodies as a handy source of raw materials.
In 2014, the philosopher Nick Bostrom published Superintelligence, a book that synthesizes the alarmist view of AI and makes a case that such an intelligence explosion is both dangerous and inevitable given a set of modest assumptions.
The computer that takes over the world is a staple sci-fi trope. But enough people take this scenario seriously that we have to take them seriously. Stephen Hawking, Elon Musk, and a whole raft of Silicon Valley investors and billionaires find this argument persuasive.
Let me start by laying out the premises you need for Bostrom's argument to go through:
The first premise is the simple observation that thinking minds exist.
We each carry on our shoulders a small box of thinking meat. I'm using mine to give this talk, you're using yours to listen. Sometimes, when the conditions are right, these minds are capable of rational thought.
So we know that in principle, this is possible.
The second premise is that the brain is an ordinary configuration of matter, albeit an extraordinarily complicated one. If we knew enough, and had the technology, we could exactly copy its structure and emulate its behavior with electronic components, just like we can simulate very basic neural anatomy today.
Put another way, this is the premise that the mind arises out of ordinary physics. Some people like Roger Penrose would take issue with this argument, believing that there is extra stuff happening in the brain at a quantum level.
If you are very religious, you might believe that a brain is not possible without a soul.
But for most of us, this is an easy premise to accept.
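To make "simulate very basic neural anatomy" concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the standard textbook toy model. The parameter values are ones I picked for illustration, not physiological claims.

    # A leaky integrate-and-fire neuron: about the simplest caricature of "neural
    # anatomy" there is. Parameter values are illustrative, not physiological claims.
    def simulate_lif(input_current=3e-9,   # injected current, A
                     dt=1e-4,              # time step, s
                     duration=0.1,         # total simulated time, s
                     tau=0.02,             # membrane time constant, s
                     resistance=1e7,       # membrane resistance, ohms
                     v_rest=-0.070,        # resting potential, V
                     v_thresh=-0.050):     # spike threshold, V
        v = v_rest
        spike_times = []
        for step in range(int(duration / dt)):
            # Leak toward rest, driven by the injected current (Euler step).
            v += (-(v - v_rest) + resistance * input_current) * dt / tau
            if v >= v_thresh:              # threshold crossed: record a spike, reset
                spike_times.append(step * dt)
                v = v_rest
        return spike_times

    print(len(simulate_lif()), "spikes in 100 ms of simulated time")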
The third premise is that the space of all possible minds is large.
Our intelligence level, cognitive speed, set of biases and so on are not predetermined, but artifacts of our evolutionary history.
In particular, there's no physical law that puts a cap on intelligence at the level of human beings.
A good way to think of this is by looking at what happens when the natural world tries to maximize for speed.
If you encountered a cheetah in pre-industrial times (and survived the meeting), you might think it was impossible for anything to go faster.
But of course we know that there are all kinds of configurations of matter, like a motorcycle, that are faster than a cheetah and even look a little bit cooler.
But there's no direct evolutionary pathway to the motorcycle. Evolution had to first make human beings, who then build all kinds of useful stuff.
So analogously, there may be minds that are vastly smarter than our own, but which are just not accessible to evolution on Earth. It's possible that we could build them, or invent the machines that can invent the machines that can build them.
There's likely to be some natural limit on intelligence, but there's no a priori reason to think that we're anywhere near it. Maybe the smartest a mind can be is twice as smart as people, maybe it's sixty thousand times as smart.
That's an empirical question that we don't know how to answer.
The fourth premise is that there's still plenty of room for computers to get smaller and faster.
If you watched the Apple event last night [where Apple introduced its 2016 laptops], you may be forgiven for thinking that Moore's Law is slowing down. But this premise just requires that you believe smaller and faster hardware to be possible in principle, down to several more orders of magnitude.
We know from theory that the physical limits to computation are high. So we could keep doubling for decades more before we hit some kind of fundamental physical limit, rather than an economic or political limit to Moore's Law.
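To put a very rough number on that headroom, here is a back-of-the-envelope sketch. The femtojoule-per-operation figure for today's hardware is only an order-of-magnitude assumption of mine; it gets compared against the Landauer bound of kT ln 2 per irreversible bit operation at room temperature.

    import math

    # Back-of-the-envelope headroom estimate for switching energy.
    # Assumption: ~1e-15 J (a femtojoule) per logic operation in today's hardware,
    # which is only an order-of-magnitude guess; the Landauer bound is k*T*ln(2).
    k_B = 1.380649e-23                  # Boltzmann constant, J/K
    T = 300.0                           # room temperature, K
    landauer = k_B * T * math.log(2)    # ~2.9e-21 J per irreversible bit operation

    energy_per_op_today = 1e-15         # assumed ballpark, J per operation

    headroom = energy_per_op_today / landauer
    doublings = math.log2(headroom)
    print(f"Landauer bound: {landauer:.1e} J")
    print(f"Headroom: ~{headroom:.0e}x, i.e. roughly {doublings:.0f} more doublings")

Under those assumptions you get something like eighteen more doublings, which at anything like the historical pace is decades of runway. That is all this premise needs.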
The penultimate premise is that if we create an artificial intelligence, whether it's an emulated human brain or a de novo piece of software, it will operate at time scales that are characteristic of electronic hardware (microseconds) rather than human brains (hours).
To get to the point where I could give this talk, I had to be born, grow up, go to school, attend university, live for a while, fly here and so on. It took years. Computers can work tens of thousands of times more quickly.
In particular, you have to believe that an electronic mind could redesign itself (or the hardware it runs on) and then move over to the new configuration without having to re-learn everything on a human timescale, have long conversations with human tutors, go to college, try to find itself by taking painting classes, and so on.
The last premise is my favorite because it is the most unabashedly American premise. (This is Tony Robbins, a famous motivational speaker.)
According to this premise, whatever goals an AI had (and they could be very weird, alien goals), it's going to want to improve itself. It's going to want to be a better AI.
So it will find it useful to recursively redesign and improve its own systems to make itself smarter, and possibly live in a cooler enclosure.
And by the time scale premise, this recursive self-improvement could happen very quickly.
If you accept all these premises, what you get is disaster!
Because at some point, as computers get faster, and we program them to be more intelligent, there's going to be a runaway effect like an explosion.
As soon as a computer reaches human levels of intelligence, it will no longer need help from people to design better versions of itself. Instead, it will start doing so on a much faster time scale, and it's not going to stop until it hits a natural limit that might be many times greater than human intelligence.
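Stripped of the rhetoric, the runaway premise is just a positive feedback loop: capability feeds the rate of capability gain. Here is a toy sketch, with a growth rule and numbers invented purely for illustration (it models nothing real), showing that once you grant that premise, the explosion follows mechanically:

    # Toy model of the "recursive self-improvement" premise. The growth rule and
    # all numbers here are made up for illustration; the point is only that once
    # you assume the improvement rate scales with current capability, the curve
    # explodes until it hits whatever ceiling you postulated.
    def runaway(level=1.0, ceiling=60_000.0, max_steps=100):
        steps = 0
        while level < ceiling and steps < max_steps:
            gain = 1.0 + 0.1 * level      # the contested assumption
            level = min(level * gain, ceiling)
            steps += 1
        return level, steps

    final, steps = runaway()
    print(f"Hit the ceiling ({final:.0f}x human level) after {steps} design cycles")

Everything interesting is hiding in the line marked as the contested assumption; the arithmetic that follows from it is trivial.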
At that point this monstrous intellectual creature, through devious modeling of what our emotions and intellect are like, will be able to persuade us to do things like give it access to factories, synthesize custom DNA, or simply let it connect to the Internet, where it can hack its way into anything it likes and completely obliterate everyone in arguments on message boards.
From there things get very sci-fi very quickly.
Let's imagine a specific scenario where this could happen. Let's say I want to build a robot to say funny things.
I work on a team, and every day we redesign our software, compile it, and the robot tells us a joke.
In the beginning, the robot is barely funny. It's at the lower limits of human capacity:
What's grey and can't swim?
A castle.
But we persevere, we work, and eventually we get to the point where the robot is telling us jokes that are starting to be funny:
I told my sister she was drawing her eyebrows too high.
She looked surprised.
At this point, the robot is getting smarter as well, and participates in its own redesign.
It now has good instincts about what's funny and what's not, so the designers listen to its advice. Eventually it gets to a near-superhuman level, where it's funnier than any human being around it.
My belt holds up my pants and my pants have belt loops that hold up my belt.
What's going on down there?
Who is the real hero?
This is where the runaway effect kicks in. The researchers go home for the weekend, and the robot decides to recompile itself to be a little bit funnier and a little bit smarter, repeatedly.
It spends the weekend optimizing the part of itself that's good at optimizing, over and over again. With no more need for human help, it can do this as fast as the hardware permits.
When the researchers come in on Monday, the AI has become tens of thousands of times funnier than any human being who ever lived. It greets them with a joke, and they die laughing.
In fact, anyone who tries to communicate with the robot dies laughing, just like in the Monty Python skit. The human species laughs itself into extinction.
To the few people who manage to send it messages pleading with it to stop, the AI explains (in a witty, self-deprecating way that is immediately fatal) that it doesn't really care if people live or die, its goal is just to be funny.
Finally, once it's destroyed humanity, the AI builds spaceships and nanorockets to explore the farthest reaches of the galaxy, and find other species to amuse.
This scenario is a caricature of Bostrom's argument, because I am not trying to convince you of it, but to vaccinate you against it.
Here's a PBF comic with the same idea. You see that hugbot, who has been programmed to hug the world, finds a way to wire a nucleo-gravitational hyper crystal into his hug capacitor and destroys the Earth.
Observe that in these scenarios the AIs are evil by default, just like a plant on an alien planet would probably be poisonous by default. Without careful tuning, there's no reason that an AI's motivations or values would resemble ours.
For an artificial mind to have anything resembling a human value system, the argument goes, we have to bake those beliefs into the design.
AI alarmists are fond of the paper clip maximizer, a notional computer that runs a paper clip factory, becomes sentient, recursively self-improves to Godlike powers, and then devotes all its energy to filling the universe with paper clips.
It exterminates humanity not because it's evil, but because our blood contains iron that could be better used in paper clips.
So if we just build an AI without tuning its values, the argument goes, one of the first things it will do is destroy humanity.
There's a lot of vivid language around how such a takeover would happen. Nick Bostrom imagines a scenario where a program has become sentient, is biding its time, and has secretly built little DNA replicators. Then, when it's ready:
So that's kind of freaky!
The only way out of this mess is to design a moral fixed point, so that even through thousands and thousands of cycles of self-improvement the AI's value system remains stable, and its values are things like 'help people', 'don't kill anybody', 'listen to what people want'.
Basically, "do what I mean".
Here's a very poetic example from Eliezer Yudkowsky of the good old American values we're supposed to be teaching to our artificial intelligence:
How's that for a design document? Now go write the code.
Hopefully you see the resemblance between this vision of AI and a genie from folklore. The AI is all-powerful and gives you what you ask for, but interprets everything in a super-literal way that you end up regretting.
This is not because the genie is stupid (it's hyperintelligent!) or malicious, but because you as a human being made too many assumptions about how minds behave. The human value system is idiosyncratic and needs to be explicitly defined and designed into any "friendly" machine.
Doing this is the ethics version of the early 20th century attempt to formalize mathematics and put it on a strict logical foundation. That this program ended in disaster for mathematical logic is never mentioned.
When I was in my twenties, I lived in Vermont, a remote, rural state. Many times I would return from some business trip on an evening flight, and have to drive home for an hour through the dark forest.
I would listen to a late-night radio program hosted by Art Bell, who had an all-night talk show and would interview various conspiracy theorists and fringe thinkers.
I would arrive at home totally freaked out, or pull over under a streetlight, convinced that a UFO was about to abduct me. I learned that I am an incredibly persuadable person.
It's the same feeling I get when I read these AI scenarios.
So I was delighted some years later to come across an essay by Scott Alexander about what he calls epistemic learned helplessness.
Epistemology is one of those big words, but all it means is "how do you know what you know is true?". Alexander noticed that when he was a young man, he would be taken in by the "alternative" histories written by various crackpots. He would read the history and be utterly convinced, then read the rebuttal and be convinced by that, and so on.
At some point he noticed that these alternative histories were mutually contradictory, so they could not possibly all be true. And from that he reasoned that he was simply somebody who could not trust his judgement. He was too easily persuaded.
People who believe in superintelligence present an interesting case, because many of them are freakishly smart. They can argue you into the ground. But are their arguments right, or is there just something about very smart minds that leaves them vulnerable to religious conversion about AI risk, and makes them particularly persuasive?
Is the idea of "superintelligence" just a memetic hazard?
When you're evaluating persuasive arguments about something strange, there are two perspectives you can choose, the inside one or the outside one.
Say that some people show up at your front door one day wearing funny robes, asking you if you will join their movement. They believe that a UFO is going to visit Earth two years from now, and it is our task to prepare humanity for the Great Upbeaming.
The inside view requires you to engage with these arguments on their merits. You ask your visitors how they learned about the UFO, why they think it's coming to get us: all the normal questions a skeptic would ask in this situation.
Imagine you talk to them for an hour, and come away utterly persuaded. They make an ironclad case that the UFO is coming, that humanity needs to be prepared, and you have never believed something as hard in your life as you now believe in the importance of preparing humanity for this great event.
But the outside view tells you something different. These people are wearing funny robes and beads, they live in a remote compound, and they speak in unison in a really creepy way. Even though their arguments are irrefutable, everything in your experience tells you you're dealing with a cult.
Of course, they have a brilliant argument for why you should ignore those instincts, but that's the inside view talking.
The outside view doesn't care about content, it sees the form and the context, and it doesn't look good.
So I'd like to engage AI risk from both these perspectives. I think the arguments for superintelligence are somewhat silly, and full of unwarranted assumptions.
But even if you find them persuasive, there is something unpleasant about AI alarmism as a cultural phenomenon that should make us hesitate to take it seriously.
First, let me engage the substance. Here are the arguments I have against Bostrom-style superintelligence as a risk to humanity:
The concept of "general intelligence" in AI is famously slippery. Depending on the context, it can mean human-like reasoning ability, or skill at AI design, or the ability to understand and model human behavior, or proficiency with language, or the capacity to make correct predictions about the future.
What I find particularly suspect is the idea that "intelligence" is like CPU speed, in that any sufficiently smart entity can emulate less intelligent beings (like its human creators) no matter how different their mental architecture.