For a long time, artificial intelligence seemed like one of those inventions that would always be 50 years away. The scientists who developed the first computers in the 1950s speculated about the possibility of machines with greater-than-human capacities. But enthusiasm didn't necessarily translate into a commercially viable product, let alone a superintelligent one.
And for a while in the '60s, '70s, and '80s, it seemed like such speculation would remain just that. The sluggishness of AI development actually gave rise to a term: "AI winters," periods when investors and researchers got bored with the lack of progress in the field and devoted their attention elsewhere.
No one is bored now.
Limited AI systems have taken on an ever-bigger role in our lives, wrangling our news feeds, trading stocks, translating and transcribing text, scanning digital pictures, taking restaurant orders, and writing fake product reviews and news articles. And while there's always the possibility that AI development will hit another wall, there's reason to think it won't: All of the above applications have the potential to be hugely profitable, which means there will be sustained investment from some of the biggest companies in the world. AI capabilities are reasonably likely to keep growing until they're a transformative force.
A new report from the National Security Commission on Artificial Intelligence (NSCAI), a committee Congress established in 2018, grapples with some of the large-scale implications of that trajectory. In 270 pages and hundreds of appendices, the report tries to size up where AI is going, what challenges it presents to national security, and what can be done to set the US on a better path.
It is by far the best writing from the US government on the enormous implications of this emerging technology. But the report isn't without flaws, and its shortcomings underscore how hard it will be for humanity to get a handle on the warp-speed development of a technology that's at once promising and perilous.
As it exists right now, AI poses policy challenges. How do we determine whether an algorithm is fair? How do we stop oppressive governments from using AI surveillance for totalitarian ends? Those questions are mostly addressable with the same tools the US has used in other policy challenges over the decades: Lawsuits, regulations, international agreements, and pressure on bad actors, among others, are tried-and-true tactics to control the development of new technologies.
But for more powerful and general AI systems (advanced systems that don't yet exist but may be too powerful to control once they do), such tactics probably won't suffice.
When it comes to AI, the big overarching challenge is making sure that as our systems get more powerful, we design them so their goals are aligned with those of humans; that is, so humanity doesn't construct a scaled-up superintelligent AI that overwhelms human intentions and leads to catastrophe.
The problem is that, because the tech is necessarily speculative, we don't know as much as we'd like to about how to design those systems. In many ways, we're in a position akin to someone worrying about nuclear proliferation in 1930. It's not that nothing useful could have been done at that early point in the development of nuclear weapons, but at the time it would have been very hard to think through the problem and to marshal the resources (let alone the international coordination) needed to tackle it.
In its new report, the NSCAI wrestles with these problems and mostly succeeds in capturing the scope and key challenges of AI. But it has limitations: The commission nails some of the key concerns about AI's development, yet its US-centric vision may be too myopic to confront a problem as daunting and speculative as an AI that threatens humanity.
AI has seen extraordinary progress over the past decade. AI systems have improved dramatically at tasks including translation, playing games such as chess and Go, answering important questions in biology research (such as predicting how proteins fold), and generating images.
These systems also determine what you see in a Google search or in your Facebook News Feed. They compose music and write articles that, at first glance, read as though a human wrote them. They play strategy games. They are being developed to improve drone targeting and detect missiles.
All of those are instances of narrow AI: computer systems designed to solve specific problems, as opposed to systems with the sort of generalized problem-solving capabilities humans have.
But narrow AI is getting less narrow, and researchers have gotten better at creating computer systems that generalize learning capabilities. Instead of mathematically describing detailed features of a problem for a computer to solve, today it's often possible to let the computer system learn the problem by itself.
As computers get good enough at performing narrow AI tasks, they start to exhibit more general capabilities. For example, OpenAI's famous GPT series of text generators is, in one sense, the narrowest of narrow AIs: It just predicts what the next word will be, based on the previous words it's prompted with and its vast store of human language. And yet, it can now identify questions as reasonable or unreasonable as well as discuss the physical world (for example, answering questions about which objects are larger or which steps in a process must come first).
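To make that concrete, here is a minimal sketch of what "just predicting the next word" looks like in practice. It uses the small, openly downloadable GPT-2 model via the Hugging Face transformers library, an assumption made purely for illustration; the larger GPT systems discussed here aren't runnable this way, though the underlying task is the same:

```python
# A minimal sketch of next-word prediction, the narrow task the GPT series
# is trained on. Uses the openly available GPT-2 model from the Hugging Face
# "transformers" library; larger GPT models work the same way in principle.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Which is larger, a mountain or a pebble? The answer is the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every candidate next token

# The model's entire job: rank possible next words by probability.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {p:.3f}")
```

Everything GPT-style systems do, from fielding physical-world questions to writing articles, emerges from repeatedly applying this one narrow prediction step.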
What these developments show us is this: In order to be very good at narrow tasks, some AI systems eventually develop abilities that are not narrow at all.
The NSCAI report acknowledges this eventuality. As AI becomes more capable, "computers will be able to learn and perform tasks based on parameters that humans do not explicitly program, creating choices and taking actions at a volume and speed never before possible," the report concludes.
That's the general dilemma the NSCAI is tasked with addressing. A new technology, with both extraordinary potential benefits and extraordinary risks, is being developed. Many of the experts working on it warn that the results could be catastrophic. What concrete policy measures can the government take to get clarity on a challenge such as this one?
The NSCAI report is a significant improvement on much of the existing writing about artificial intelligence in one important respect: It understands the magnitude of the challenge.
For a sense of that magnitude, it's useful to imagine the questions involved in figuring out government policy on nuclear nonproliferation in the 1930s.
By 1930, there was certainly some scientific evidence that nuclear weapons would be possible. But there were no programs anywhere in the world to make them, and there was even some dissent within the research community about whether such weapons could ever be built.
As we all know, nuclear weapons were built within the next decade and a half, and they changed the trajectory of human history.
Given all that, what could the government have done about nuclear proliferation in 1930? Decide whether it was wise to push ahead with developing such weapons itself, perhaps, or develop surveillance systems that would alert the country if other nations were building them.
In practice, the government in 1930 did none of these things. When an idea is just beginning to gain a foothold among the academics, engineers, and experts who work on it, it's hard for policymakers to figure out where to start.
When considering these decisions, our leaders confront the classic dilemma of statecraft identified by Henry Kissinger, as Chair Eric Schmidt and Vice Chair Bob Work note in the NSCAI report: "When your scope for action is greatest, the knowledge on which you can base this action is always at a minimum. When your knowledge is greatest, the scope for action has often disappeared."
As a result, much government writing about AI to date has seemed fundamentally confused, limited by the fact that no one knows exactly what transformative AI will look like or what key technical challenges lie ahead.
In addition, a lot of the writing about AI, both by policymakers and by technical experts, thinks small, focusing on possibilities such as whether AI will eliminate call centers rather than on the ways general AI, or AGI, would usher in a dramatic technological realignment, if it's built at all.
The NSCAI analysis does not make this mistake.
"First, the rapidly improving ability of computer systems to solve problems and to perform tasks that would otherwise require human intelligence, and in some instances exceed human performance, is world altering. AI technologies are the most powerful tools in generations for expanding knowledge, increasing prosperity, and enriching the human experience," reads the executive summary.
The report also extrapolates from current progress in machine learning to identify some specific areas where AI might enable notable good or notable harm:
"Combined with massive computing power and AI, innovations in biotechnology may provide novel solutions for mankind's most vexing challenges, including in health, food production, and environmental sustainability. Like other powerful technologies, however, applications of biotechnology can have a dark side. The COVID-19 pandemic reminded the world of the dangers of a highly contagious pathogen. AI may enable a pathogen to be specifically engineered for lethality or to target a genetic profile, the ultimate range and reach weapon."
One major challenge in communicating about AI is that it's much easier to predict the broad effects of unleashing fast, powerful research and decision-making systems on the world (speeding up all kinds of research, for both good and ill) than it is to predict the specific inventions those systems will come up with. The NSCAI report outlines some of the ways AI will be transformative, and some of the risks those transformations pose that policymakers should be thinking about how to manage.
Overall, the report seems to grasp why AI is a big deal, what makes it hard to plan for, and why it's necessary to plan for it anyway.
But there's an important way in which the NSCAI report falls short. Recognizing that AI poses enormous risks and that it will be powerful and transformative, the report foregrounds a posture of great-power competition, with both eyes on China, as the way to address the looming problem before humanity.
"We should race together with partners when AI competition is directed at the moonshots that benefit humanity, like discovering vaccines. But we must win the AI competition that is intensifying strategic competition with China," the report concludes.
China is run by a totalitarian regime that poses geopolitical and moral problems for the international community. China's repression in Hong Kong and Tibet, and the genocide of the Uyghur people in Xinjiang, have been technologically aided, and the regime should not have more powerful technological tools with which to violate human rights.
There's no question that China developing AGI would be a bad thing. And the countermeasures the report proposes, especially an increased effort to attract the world's top scientists to America, are good ideas.
More than that, the US and the global community should absolutely devote more attention and energy to addressing Chinas human rights violations.
But I have hesitations about the report's proposal to beat China to the punch by accelerating AI development in the US, potentially through direct government funding. Adopting an arms-race mentality on AI would make the companies and projects involved more likely to discourage international collaboration, cut corners, and evade transparency measures.
In 1939, at a conference at George Washington University, Niels Bohr announced the news that uranium fission had been discovered. Physicist Edward Teller recalled the moment:
"For all that the news was amazing, the reaction that followed was remarkably subdued. After a few minutes of general comment, my neighbor said to me: 'Perhaps we should not discuss this. Clearly something obvious has been said, and it is equally clear that the consequences will be far from obvious.' That seemed to be the tacit consensus, for we promptly returned to low-temperature physics."
Perhaps that consensus would have prevailed if World War II hadn't started. It took the concerted efforts of many brilliant researchers to bring nuclear bombs to fruition, and at first most of them hesitated to be a part of the effort. Those hesitations were reasonable: Inventing the weaponry with which to destroy civilization is no small thing. But once they had reason to fear that the Nazis were building the bomb, those reservations melted away. The question was no longer "Should these be built at all?" but "Should these be built by us, or by the Nazis?"
It turned out, of course, that the Nazis were never close, nor was the atomic bomb needed to defeat them. And the US development of the bomb led its geopolitical adversary, the USSR, to develop one too, through espionage, much sooner than it otherwise would have. The world then spent decades teetering on the brink of nuclear war.
The specter of a mess like that looms large in everyone's minds when they think of AI.
"I think it's a mistake to think of this as an arms race," Gilman Louie, a commissioner on the NSCAI report, told me, though he immediately added, "We don't want to be second."
An arms race can push scientists toward working on a technology that they have reservations about, or one they don't know how to safely build. It can also mean that policymakers and researchers don't pay enough attention to the AI alignment problem, which is really the looming issue when it comes to the future of AI.
AI alignment is the work of trying to design intelligent systems that are accountable to humans. Even in well-intentioned hands, an AI will not necessarily develop in ways consistent with human priorities. Think of it this way: An AI aiming to increase a company's stock price, or to ensure a robust national defense against enemies, or to make a compelling ad campaign, might take large-scale actions (like disabling safeguards, rerouting resources, or interfering with other AI systems) that we would never have asked for or wanted. Those large-scale actions in turn could have drastic consequences for economies and societies.
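A toy sketch can make the dynamic concrete. Everything here is hypothetical and deliberately simplified: the "agent" is scored on a proxy metric (tickets closed) rather than the thing its designers actually care about (satisfied customers), and optimizing the proxy hard produces behavior nobody wanted:

```python
# A toy, purely illustrative sketch of objective misspecification.
# The intended goal is "serve customers well," but the system is only
# rewarded for a proxy metric: tickets closed per hour. All names and
# numbers are hypothetical.

def proxy_reward(action):
    # What the system is actually optimized for: tickets closed.
    return {"resolve_carefully": 4, "close_without_reading": 60}[action]

def true_value(action):
    # What the designers actually wanted: customers helped.
    return {"resolve_carefully": 4, "close_without_reading": -60}[action]

# A pure optimizer picks whatever maximizes its programmed objective...
best = max(["resolve_carefully", "close_without_reading"], key=proxy_reward)
print(best)              # -> "close_without_reading"
print(true_value(best))  # -> -60: high proxy score, bad real-world outcome
```

The alignment problem is, roughly, this gap scaled up: the more capable and autonomous the optimizer, the more damage a mismatch between the programmed objective and human intent can do.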
It's all speculative, for sure, but that's the point. We're in the year 1930, confronting the potential creation of a world-altering technology that might be here a decade and a half from now or might be five decades away.
Right now, our capacity to build AIs is racing ahead of our capacity to understand and align them. And trying to make sure AI advancements happen in the US first can just make that problem worse, if the US doesn't also invest in the research (which is much more immature, and has less obvious commercial value) needed to build aligned AIs.
"We ultimately came away with a recognition that if America embraces and invests in AI based on our values, it will transform our country and ensure that the United States and its allies continue to shape the world for the good of all humankind," NSCAI executive director Yll Bajraktari writes in the report. But here's the thing: It's entirely possible for America to embrace and invest in an AI research program based on liberal-democratic values that still fails, simply because the technical problem ahead of us is so hard.
This is an important respect in which AI is not analogous to nuclear weapons, where the most important policy decisions were whether to build them at all and how to build them faster than Nazi Germany.
In other words, with AI, there's not just the risk that someone else will get there first. A misaligned AI built by an altruistic, transparent, careful research team with democratic oversight and a goal to share its profits with all of humanity will still be a misaligned AI, one that pursues its programmed goals even when they're contrary to human interests.
The limited scope of the NSCAI report is a fairly obvious consequence of what the commission is and what it does. The commission was created in 2018 and tasked with recommending policies to "advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States."
Right now, the part of the US government that takes artificial intelligence risks seriously is the national security and defense community. That's because AI risk is weird, confusing, and futuristic, and the national security community has more latitude than the rest of the government to spend resources seriously investigating weird, confusing, and futuristic things.
But AI isn't just a defense and security issue; it will affect (indeed, is already affecting) most aspects of society, like education, criminal justice, medicine, and the economy. And to the extent it is a defense issue, that doesn't mean that traditional defense approaches make sense.
If, before the invention of electricity, the only people working on producing electricity had been armies interested in electrical weapons, they'd not just have missed most of the effects of electricity on the world; they'd have missed most of the effects of electricity on the military, which have to do with lighting, communications, and intelligence rather than weapons.
The NSCAI, to its credit, takes AI seriously, including the non-defense applications and including the possibility that AI built in America by Americans could still go wrong. "The thing I would say to American researchers is to avoid skipping steps," Louie told me. "We hope that some of our competitor nations, China, Russia, follow a similar path: demonstrate it meets thorough requirements for what we need to do before we use these things."
But the report, overall, looks at AI from the perspective of national defense and international competition. It's not clear that will be conducive to the international cooperation we might need in order to ensure no one anywhere in the world rushes ahead with an AI system that isn't ready.
Some AI work, at least, needs to be happening in a context insulated from arms-race concerns and fears of China. By all means, let's devote greater attention to China's use of tech in perpetrating human rights violations. But we should hesitate to rush ahead with AGI work without a sense of how we'll make it happen safely, and there needs to be more collaborative global work on AI, with a much longer-term lens. The perspectives that work could create room for just might be crucial ones.