AI Machine Learning is being debated due to the "update problem" of adaptiveness.
Humans typically learn new things on-the-fly.
Let's use jigsaw puzzles to explore the learning process.
Imagine that you are asked to solve a jigsaw puzzle and you've not previously had the time or inclination to solve jigsaw puzzles (yes, there are some people that swear they will never do a jigsaw puzzle, as though it is beneath them or otherwise a useless use of their mind).
Upon dumping out all the pieces from the box onto the table, you likely turn all the pieces right side up and do a quick visual scan of the pieces and of the picture on the box showing what you are trying to assemble.
Most people are self-motivated to try and put all the pieces together as efficiently as they can, meaning that it would be unusual for someone to purposely find pieces that fit together and yet not put them together. Reasonable people would be aiming to increasingly build toward solving the jigsaw puzzle and strive to do so in a relatively efficient manner.
A young child is bound to just jump into the task and pick pieces at random, trying to fit them together, even if the colors don't match and even if the shapes don't connect with each other. After a bit of time doing this, most children gradually realize that they ought to be looking to connect pieces that will fit together and that also match in color as depicted on the overall picture being solved for.
All right, you've had a while to solve the jigsaw puzzle, and let's assume you were able to do so.
Did you learn anything in the process of solving the jigsaw puzzle, especially something that might be applied to doing additional jigsaw puzzles later on?
Perhaps you figured out that there are some pieces that are at the edge of the puzzle. Those pieces are easy to find since they have a square edge. Furthermore, you might also divine that if you put together all the edges first, you'll have an outline of the solved puzzle and can build within that outline.
It seems like a smart idea.
To recap, you cleverly noticed a pattern among the pieces, namely that there were some with a straight or squared edge. Based on that pattern, you took an additional mental step and decided that you could likely do the edge of the puzzle with less effort than the rest of the puzzle, plus by completing the overall edge it would seem to further your efforts toward completing the rest of the puzzle.
Maybe you figured this out while doing the puzzle and opted to try the approach right away, rather than simply mentally filing the discovered technique away to use on a later occasion.
I next give you a second jigsaw puzzle.
What do you do?
You might decide to use your newfound technique and proceed ahead by doing the edges first.
Suppose though that I've played a bit of a trick and given you a so-called edgeless jigsaw puzzle. An edgeless version is one that doesn't have a straight or square edge to the puzzle and instead the edges are merely everyday pieces that appear to be perpetually unconnected.
If you are insistent on trying to first find all the straight or square-edged pieces, you'll be quite disappointed and frustrated, having to then abandon the edge-first algorithm that you've devised.
Some edgeless puzzles go further by having some pieces that are within the body of the puzzle that have square or straight edges, thereby possibly fooling you into believing that those pieces are for the true edge of the jigsaw.
Overall, here's what happened as you learned to do jigsaw puzzles.
You likely started by doing things in a somewhat random way, especially for the first jigsaw, finding pieces that fit together and assembling portions or chunks of the jigsaw. While doing so, you noticed that some pieces appeared to be edges, and so you came up with the notion that doing the edges was a keen way to more efficiently solve the puzzle. You might have employed this discovery right away, while in the act of solving the puzzle.
When you were given the second jigsaw, you tried to apply your lesson learned from the first one, but it didn't hold true.
It turns out that the edge approach doesn't always work, though you perhaps did not realize this limitation upon initially discovering the tactic.
As this quick example showcases, learning can occur in the act of performing a task and might well be helpful for future performances of the task.
Meanwhile, what you've learned during a given task won't necessarily be applicable in future tasks, and could at times confuse you or make you less efficient, since you might be determined to apply something that you've learned even though it is no longer applicable in other situations.
Adaptive Versus Lock-down While Learning
Learning that occurs on-the-fly is considered adaptive, implying that you are adapting as you go along.
In contrast, if you aren't aiming to learn on-the-fly, you can try to lock out the learning process and seek to proceed without doing any learning. This kind of lock-down of the learning process involves inhibiting any learning and making use of only what has previously been learned.
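To make the distinction concrete, here's a minimal sketch (entirely hypothetical, not any particular ML system) of a running-average predictor that can either keep updating as new data arrives (adaptive) or freeze after its initial training (lock-down):

```python
# Toy illustration of adaptive vs. lock-down learning: a predictor
# that tracks the running mean of the values it has seen.

class Predictor:
    def __init__(self, adaptive=True):
        self.adaptive = adaptive
        self.mean = 0.0
        self.n = 0

    def train(self, samples):
        # Initial, pre-deployment learning phase.
        for x in samples:
            self._update(x)

    def _update(self, x):
        # Incremental running-mean update.
        self.n += 1
        self.mean += (x - self.mean) / self.n

    def predict_and_maybe_learn(self, x):
        prediction = self.mean
        if self.adaptive:
            # Adaptive mode: keep learning on-the-fly, in the field.
            self._update(x)
        # Lock-down mode: rely only on what was previously learned.
        return prediction

adaptive = Predictor(adaptive=True)
locked = Predictor(adaptive=False)
for p in (adaptive, locked):
    p.train([10, 10, 10])

# The data then drifts upward; only the adaptive model follows it.
for x in [20, 20, 20]:
    adaptive.predict_and_maybe_learn(x)
    locked.predict_and_maybe_learn(x)

print(adaptive.mean)  # 15.0 -- has shifted toward the new data
print(locked.mean)    # 10.0 -- frozen at the pre-deployment estimate
```

The adaptive version tracks the change in the data, which is exactly its appeal; the locked-down version stays at its validated state, which is exactly its safety argument.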
Voila, now it's time to discuss Artificial Intelligence (AI).
Todays AI systems have seemingly gotten pretty good at a number of human-like tasks (though quite constrained tasks), partially as a result of advances in Machine Learning (ML).
Machine Learning involves the computer system seeking to find patterns and then leveraging those patterns for boosting the performance of the AI.
An AI developer usually opts to try out different kinds of Machine Learning methods when they are putting together an AI system (see my piece on ensemble ML) and typically settles on a specific ML that they will then embed into their AI system.
A looming issue that society is gradually uncovering involves whether AI Machine Learning should be adaptive as it performs its efforts, or whether it is better to lock down the adaptiveness while the ML is undertaking a task.
Let's consider why this is an important point.
Such a concern has especially been raised in the MedTech space, involving AI-based medical devices and systems used in medicine and healthcare.
Suppose that an inventor creates a new medical device that examines blood samples, and the device, using AI, tries to make predictions about the health of the patient who provided the blood.
Usually, such a device would require federal regulatory approval before it could be placed into the marketplace for usage.
If this medical device is making use of AI Machine Learning, it implies that the system could be using adaptive techniques and therefore will try to improve its predictive capability while examining blood samples.
Any federal agency that initially tested the medical device to try and ensure that it was reliable and accurate prior to it being released would have done so at a point in time prior to those adaptive acts that are going to occur while the AI ML is in everyday use.
Thus, the medical device using AI ML is going to inevitably change what it does, likely veering outside the realm of what the agency thought it was approving.
On the downside, the ML is potentially going to learn things that aren't necessarily applicable, and yet not realize that those aspects are not always relevant, proceeding to falsely assess a given blood sample (recall the story of believing that the edge of a jigsaw can be done by simply finding the straight or squared pieces, which didn't turn out to be a valid approach in all cases).
On the upside, the ML might be identifying valuable nuances by being adaptive and self-improve at assessing blood samples, boosting what it does and enhancing patient care.
Yes, some argue, there is that chance of the upside, but when making potentially life-or-death assessments, do we want an AI Machine Learning algorithm being unleashed such that it could adapt in ways that aren't desirable and might, in fact, be downright dangerous?
That's the rub.
Some assert that the adaptive aspects should not be allowed to adjust what the AI system does on-the-fly, and instead, in a lock-down mode, the system should merely collect and identify potential changes, which would then be inspected and approved by a human, such as the AI developers that put together the system.
Furthermore, in a regulatory situation, the AI developers would need to go back to the regulatory agency, propose the AI system as an updated version, and get agency approval before those adaptations were used in the real-world operation of the system.
This thorny question about adaptiveness running free or being locked down is often called the update problem and is raising quite a debate.
In case you think the answer is simple (always lock down), unfortunately, life is not so easy.
Those who don't want the lock-down are apt to say that doing so will hamstring the AI Machine Learning, which presumably has the advantage of being able to self-adjust and get better as it undertakes its efforts.
If you force the AI ML to perform in a lock-down manner, you might as well toss out the AI ML since it no longer is free to adjust and enhance what it does.
Trying to find a suitable middle ground, some suggest that there could be guardrails that serve to keep the AI ML from going too far astray.
By putting boundaries or limits on the kinds of adjustments or adaptiveness, you could maybe get the best of both worlds, namely a form of adaptive capability that furthers the system and yet keeps it within a suitable range that won't cause the system to seemingly become unsavory.
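One simple way to picture such a guardrail (a hypothetical sketch, not any regulator's actual scheme) is to let adaptation propose parameter changes but clamp any learned value to an approved band around the validated baseline:

```python
# Hypothetical guardrail: adaptive updates are permitted, but any
# learned parameter is clamped to a bounded band around the value
# that was validated at approval time, so adaptation cannot drift
# arbitrarily far from the approved baseline.

def guarded_update(proposed, baseline, max_drift):
    """Accept a proposed parameter value only within the approved band."""
    lower = baseline - max_drift
    upper = baseline + max_drift
    return min(max(proposed, lower), upper)

baseline = 1.0   # value validated when the system was approved
value = baseline
# Increasingly aggressive adaptations proposed by the learning process:
for proposed in [1.05, 1.4, 2.0]:
    value = guarded_update(proposed, baseline, max_drift=0.25)

print(value)  # 1.25 -- pinned at the edge of the approved band
```

The system still adapts within the band, but anything beyond it would have to go back through the human review and approval loop.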
The U.S. Food and Drug Administration (FDA) has sketched a regulatory framework for AI ML and medical devices (see link here) and is seeking input on this update problem debate.
Overall, this element of AI ML is still up for debate across all areas of application, not just the medical domain, and brings to the forefront the trade-offs involved in deploying AI ML systems.
Here's an interesting question: Do we want true self-driving cars to be able to utilize AI Machine Learning in an adaptive manner or in a lock-down manner?
It's kind of a trick question, or at least a tricky question.
Let's unpack the matter.
The Levels Of Self-Driving Cars
It is important to clarify what I mean when referring to true self-driving cars.
True self-driving cars are ones in which the AI drives the car entirely on its own and there isn't any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at a Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5, and we don't yet know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different than driving conventional vehicles, so I'm not going to include them in this discussion about AI ML (though for clarification, Level 2 and Level 3 could indeed have AI ML involved in their systems, and thus this discussion overall is relevant even to semi-autonomous cars).
For semi-autonomous cars, it is equally important to mention a disturbing aspect that's been arising: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.
Self-Driving Cars And The Update Problem
For Level 4 and Level 5 true self-driving vehicles, there wont be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
The AI driving software is developed, tested, and loaded into the on-board computer processors that are in the driverless car. To allow for the AI software to be updated over time, the driverless car has an OTA (Over-The-Air) electronic communication capability.
When the AI developers decide it's time to do an update, they will push out the latest version of the AI driving software to the vehicle. Usually, this happens while the self-driving car is parked, say in your garage, perhaps charging up if it's an EV, and the OTA then takes place.
Right now, it is rare for the OTA updating to occur while the car is in motion, though there are efforts underway for enabling OTA of that nature (there is controversy about doing so, see link here).
Not only can updates be pushed into the driverless car, the OTA can also be used to retrieve data from the self-driving car. For example, the sensors on the self-driving car will have collected lots of images, video, and radar and LIDAR data during a driving journey. This data could be sent up to the cloud being used by the automaker or self-driving tech firm.
We are ready now to discuss the AI Machine Learning topic as it relates to adaptiveness versus lock-down in the use case of self-driving cars.
Should the AI ML that's on-board the driverless car be allowed to update itself, being adaptive, or should updates only be performed via OTA from the cloud, based presumably on the latest revisions instituted and approved by the AI developers?
This might seem rather abstract, so let's use a simple example to illuminate the matter.
Consider the instance of a driverless car that encounters a dog in the roadway.
Perhaps the AI ML on-board the self-driving car detects the dog and opts to honk the horn of the car to try and prod the dog to get out of the way. Let's pretend that the horn honking succeeds and the dog scampers away.
In an adaptive mode, the AI ML might adjust to now include that honking the horn is successful at prompting an animal to get off the road.
Suppose a while later, there's a cat in the road. The AI system opts to honk the horn, and the cat scurries away (though that cat is mighty steamed!).
So far, this horn honking seems to be working out well.
The next day, there's a moose in the roadway.
The AI system honks the horn, since doing so worked previously, and the AI assumes that the moose is going to run away.
Oops, it turns out that the moose, having been startled by the horn, opts to charge at the menacing mechanical beast of a driverless car.
Now, I realize this example is a bit contrived, but I'm trying to quickly illustrate that the AI ML of an adaptive style could adjust in a manner that won't necessarily be right in all cases (again, recall the earlier jigsaw story).
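The failure mode in the story can be sketched as a naive adaptive policy that over-generalizes from early successes. This is a contrived toy, and the animal reactions are invented for illustration, not real behavioral data:

```python
# Toy sketch of the horn-honking story: an adaptive policy that
# reuses any action that worked before, regardless of which animal
# it worked on -- an over-generalization, like the jigsaw edge rule.

# Invented ground truth for the illustration.
ACTUAL_REACTION = {"dog": "flees", "cat": "flees", "moose": "charges"}

class NaiveAdaptivePolicy:
    def __init__(self):
        self.learned = {}  # animal -> action believed to work

    def choose_action(self, animal):
        # Over-generalize: if anything worked before, try it again.
        if self.learned:
            return next(iter(self.learned.values()))
        return "honk"  # default first attempt

    def observe(self, animal, action, outcome):
        # Adaptive step: record the action as "successful" if the
        # animal fled, reinforcing the pattern.
        if outcome == "flees":
            self.learned[animal] = action

policy = NaiveAdaptivePolicy()
outcomes = []
for animal in ["dog", "cat", "moose"]:
    action = policy.choose_action(animal)
    outcome = ACTUAL_REACTION[animal]
    policy.observe(animal, action, outcome)
    outcomes.append((animal, action, outcome))

# The learned rule works on the dog and the cat, then fails on the
# moose, which charges rather than fleeing.
print(outcomes)
```

Nothing in the policy's own data flagged the rule as unsafe before the moose showed up, which is precisely the worry about letting such adaptation run unchecked in the field.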
Rather than the on-board AI ML adjusting, perhaps it would be safer to keep it in lock-down.
But, you say, the on-board AI will then be forever in a static state, never improving.
Well, recall that there's the OTA capability of updating.
Presumably, the driverless car could have provided the data about the initial instance of the dog and the horn honking up to the cloud, and the AI developers might have studied the matter. Then, upon carefully adjusting the AI system, the AI developers might, later on, push the latest animal avoidance routine down into the driverless car.
The point is that there is an open question about whether we want multi-ton, life-or-death cars on our roadways being run by AI that is able to adjust itself, or whether we want the on-board AI to be on lock-down and only allow updates via OTA (which presumably would be explicitly derived and approved by human hands and minds).
That's the crux of the update problem for driverless cars.
Conclusion
There is a plethora of trade-offs involved in the self-driving car adaptiveness dilemma.
If a self-driving car isn't adjusting on-the-fly, it might not cope well with any new situations that crop up and will perhaps fail to make an urgent choice appropriately. Having to wait maybe hours, days, or weeks to get an OTA update might prolong the time that the AI continues to be unable to adequately handle certain roadway situations.
Human drivers adapt on-the-fly, and if we are seeking to have the AI driving system be as good as or possibly better than human drivers, wouldn't we want and need to have the AI ML be adaptive on-the-fly?
Can suitable system-related guardrails be put in place to keep the AI ML from adapting in some kind of wild or untoward manner?
Though we commonly deride human drivers for their flaws and foibles, the ability of humans to learn and adjust their behavior is quite a marvel, one that continues to be somewhat elusive when it comes to achieving the same in AI and Machine Learning.
Some believe that we need to solve the jigsaw puzzle of the human mind and how it works before we'll have AI ML that's of any top form.
This isnt a mere edge problem and instead sits at the core of achieving true AI.