AI Is the Future of Cybersecurity, for Better and for Worse – Harvard Business Review

Posted: May 9, 2017 at 3:31 pm

Executive Summary

In the near future, as artificial intelligence (AI) systems become more capable, we will begin to see more automated and increasingly sophisticated social engineering attacks. The rise of AI-enabled cyberattacks is expected to cause an explosion of network penetrations, personal data thefts, and an epidemic-level spread of intelligent computer viruses. Ironically, our best hope to defend against AI-enabled hacking is by using AI. But this is also very likely to lead to an AI arms race, the consequences of which may be very troubling in the long term, especially as big government actors join in the cyberwars. Business leaders would be well advised to familiarize themselves with the state of the art in AI safety and security research. Armed with more knowledge, they can then rationally consider how the addition of AI to their product or service will enhance user experiences, while weighing the costs of potentially subjecting users to additional data breaches and other possible dangers.

In the near future, as artificial intelligence (AI) systems become more capable, we will begin to see more automated and increasingly sophisticated social engineering attacks. The rise of AI-enabled cyberattacks is expected to cause an explosion of network penetrations, personal data thefts, and an epidemic-level spread of intelligent computer viruses. Ironically, our best hope to defend against AI-enabled hacking is by using AI. But this is very likely to lead to an AI arms race, the consequences of which may be very troubling in the long term, especially as big government actors join the cyber wars.

My research is at the intersection of AI and cybersecurity. In particular, I am researching how we can protect AI systems from bad actors, as well as how we can protect people from failed or malevolent AI. This work falls into a larger framework of AI safety: attempts to create AI that is exceedingly capable but also safe and beneficial.

A lot has been written about problems that might arise with the arrival of true AI, either as a direct impact of such inventions or because of a programmer's error. However, intentional malice in design and AI hacking have not been addressed to a sufficient degree in the scientific literature. It's fair to say that when it comes to dangers from a purposefully unethical intelligence, anything is possible. According to Bostrom's orthogonality thesis, an AI system can potentially have any combination of intelligence and goals. Such goals can be introduced either through the initial design or through hacking, or introduced later, as in the case of off-the-shelf software ("just add your own goals"). Consequently, depending on whose bidding the system is doing (governments, corporations, sociopaths, dictators, military industrial complexes, terrorists, etc.), it may attempt to inflict damage that's unprecedented in the history of humankind or that's perhaps inspired by previous events.

Even today, AI can be used to defend and to attack cyber infrastructure, as well as to increase the attack surface that hackers can target, that is, the number of ways for hackers to get into a system. In the future, as AIs increase in capability, I anticipate that they will first reach and then overtake humans in all domains of performance, as we have already seen with games like chess and Go and are now seeing with important human tasks such as investing and driving. It's important for business leaders to understand how that future situation will differ from our current concerns and what to do about it.

If one of today's cybersecurity systems fails, the damage can be unpleasant, but is tolerable in most cases: Someone loses money or privacy. But for human-level AI (or above), the consequences could be catastrophic. A single failure of a superintelligent AI (SAI) system could cause an existential risk event, an event that has the potential to damage human well-being on a global scale. The risks are real, as evidenced by the fact that some of the world's greatest minds in technology and physics, including Stephen Hawking, Bill Gates, and Elon Musk, have expressed concerns about the potential for AI to evolve to a point where humans could no longer control it.

When one of today's cybersecurity systems fails, you typically get another chance to get it right, or at least to do better next time. But with an SAI safety system, failure or success is a binary situation: Either you have a safe, controlled SAI or you don't. The goal of cybersecurity in general is to reduce the number of successful attacks on a system; the goal of SAI safety, in contrast, is to make sure no attacks succeed in bypassing the safety mechanisms in place. The rise of brain-computer interfaces, in particular, will create a dream target for human and AI-enabled hackers. And brain-computer interfaces are not so futuristic; they're already being used in medical devices and gaming, for example. If successful, attacks on brain-computer interfaces would compromise not only critical information such as social security numbers or bank account numbers but also our deepest dreams, preferences, and secrets. There is the potential to create unprecedented new dangers for personal privacy, free speech, equal opportunity, and any number of human rights.

Business leaders are advised to familiarize themselves with the cutting edge of AI safety and security research, which at the moment is sadly similar to the state of cybersecurity in the 1990s, and our current situation with the lack of security for the internet of things. Armed with more knowledge, leaders can rationally consider how the addition of AI to their product or service will enhance user experiences, while weighing the costs of potentially subjecting users to additional data breaches and possible dangers. Hiring a dedicated AI safety expert may be an important next step, as most cybersecurity experts are not trained in anticipating or preventing attacks against intelligent systems. I am hopeful that ongoing research will bring additional solutions for safely incorporating AI into the marketplace.
