Google’s AI Fight Club Will Train Systems to Defend Against Future Cyberattacks

In Brief: Google Brain and data science platform Kaggle have announced an "AI Fight Club" to train machine learning systems to combat malicious AI. As computer systems become smarter, cyberattacks also become tougher to defend against, and this contest could help illuminate unforeseen vulnerabilities.

Reinforcing AI Systems

When artificial intelligence (AI) is discussed today, most people are referring to machine learning algorithms or deep learning systems. While AI has advanced significantly over the years, the principle behind these technologies remains the same: someone trains a system on certain data and asks it to produce a specified outcome, and it's up to the machine to develop its own algorithm to reach that outcome.
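
A minimal sketch of that train-then-predict pattern, using scikit-learn on a toy synthetic dataset (the model and data here are illustrative placeholders, not anything described in the article):

```python
# Illustrative only: "certain data" (features X) with a "specified outcome"
# (labels y); the machine fits its own decision rule from the examples.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```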

Alas, while we've been able to create some very smart systems, they are not foolproof. Yet.

Data science competition platform Kaggle wants to prepare AI systems for super-smart cyberattacks, and they're doing so by pitting AI against AI in a contest dubbed the Competition on Adversarial Attacks and Defenses. The battle is organized by Google Brain and will be part of the Neural Information Processing Systems (NIPS) Foundation's 2017 competition track later this year.

This AI fight club will feature three adversarial challenges. The first (non-targeted adversarial attack) involves getting algorithms to confuse a machine learning system so it won't work properly. Another battle (targeted adversarial attack) requires training one AI to force another to classify data incorrectly. The third challenge (defense against adversarial attacks) focuses on beefing up a smart system's defenses.
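
To make the first challenge concrete, here is a minimal sketch of a classic non-targeted attack, the fast gradient sign method (FGSM). The article does not say which methods competitors must use; the tiny untrained model, random input, and epsilon value below are assumptions for illustration only:

```python
# Illustrative FGSM sketch: nudge the input in the direction that increases
# the model's loss, hoping to flip whatever label it currently assigns.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
model.eval()

x = torch.rand(1, 1, 28, 28)        # stand-in "image" with pixels in [0, 1]
y = model(x).argmax(dim=1)          # the label the model currently assigns

x_adv = x.clone().requires_grad_(True)
loss = nn.functional.cross_entropy(model(x_adv), y)
loss.backward()

epsilon = 0.1                       # perturbation budget (assumed value)
x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

print("original prediction:", y.item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

A targeted attack works the same way, except the perturbation is pushed toward a specific wrong label rather than simply away from the current one.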

"It's a brilliant idea to catalyze research into both fooling deep neural networks and designing deep neural networks that cannot be fooled," Jeff Clune, a University of Wyoming assistant professor whose own work involves studying the limits of machine learning systems, told the MIT Technology Review.

AI is actually more pervasive now than most people think, and as computer systems have become more advanced, the use of machine learning algorithms has become more common. The problem is that the same smart technology can be used to undermine these systems.

"Computer security is definitely moving toward machine learning," Google Brain researcher Ian Goodfellow told the MIT Technology Review. "The bad guys will be using machine learning to automate their attacks, and we will be using machine learning to defend."

Training AI to fight malicious AI is the best way to prepare for these attacks, but that's easier said than done. "Adversarial machine learning is more difficult to study than conventional machine learning," explained Goodfellow. "It's hard to tell if your attack is strong or if your defense is actually weak."
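
One common way researchers probe that question (a general practice, not something the article attributes to the competition) is to measure a defended model's accuracy on clean inputs and on perturbed inputs across several attack strengths. The sketch below assumes the same toy model, random data, and FGSM attack as above, so the numbers it prints are meaningless except as a template:

```python
# Illustrative evaluation sketch: clean accuracy vs. accuracy under FGSM
# at a few perturbation budgets. Real evaluations use a trained, defended
# model and held-out data.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in "defended" model
model.eval()

x = torch.rand(64, 1, 28, 28)       # toy inputs
y = torch.randint(0, 10, (64,))     # toy labels

def accuracy(inputs):
    return (model(inputs).argmax(dim=1) == y).float().mean().item()

print("clean accuracy:", accuracy(x))
for epsilon in (0.05, 0.1, 0.3):
    x_adv = x.clone().requires_grad_(True)
    nn.functional.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()
    print(f"accuracy at epsilon={epsilon}:", accuracy(x_adv))
```

If accuracy stays high only because the attack is weak, the defense may look stronger than it is, which is exactly the ambiguity Goodfellow describes.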

The unpredictability of AI is one of the reasons some, including serial entrepreneur Elon Musk, are concerned that the tech may prove malicious in the future. They suggest that AI development be carefully monitored and regulated, but ultimately, it's the people behind these systems, and not the systems themselves, that present the true threat.

In an effort to get ahead of the problem, the Institute of Electrical and Electronics Engineers has created guidelines for ethical AI, and groups like the Partnership on AI have also set up standards. Kaggle's contest could illuminate new AI vulnerabilities that must be accounted for in future regulations, and by continuing to approach AI development cautiously, we can do more to ensure that the tech isn't used for nefarious purposes in the future.
