AI Adds a New Layer to Cyber Risk

April 13, 2017

Cognitive computing and artificial intelligence (AI) are spawning what many are calling a new type of industrial revolution. While the two terms are often used interchangeably, there is a slight nuance to each. To be specific, cognitive computing uses a suite of technologies designed to augment the cognitive capabilities of the human mind; a cognitive system can perceive and infer, reason and learn. We're defining AI here as a broad term that loosely refers to computers that can perform tasks that once required human intelligence. Because these systems can be trained to analyze and understand natural language, mimic human reasoning processes, and make decisions, businesses are increasingly deploying them to automate routine activities. From self-driving cars to drones to automated business operations, this technology has the potential to enhance productivity, direct human talent toward critical issues, accelerate innovation, and lower operating costs.

Yet, like any technology that is not properly managed and protected, cognitive systems that rely on humanoid robots, avatars, and less human labor can also pose immense cybersecurity vulnerabilities for businesses, compromising their operations. The criminal underground has been leveraging this kind of automation for years through botnets, which distribute tiny pieces of code across thousands of computers, each programmed to execute tasks that mimic the actions of tens or hundreds of thousands of users. The results are mass cyberattacks, email and text spam, and denial-of-service attacks that make major websites unavailable for long periods of time.

How it will impact business, industry, and society

In a digital world with greater reliance on business data analytics and electronic consumer interactions, the C-suite cannot afford to ignore these existing security risks. In addition, there are unique new cyber risks associated with cognitive and AI technology. Businesses must be thoughtful about adopting new information technologies, employing multiple layers of cyber defense, and planning security to reduce the growing threat. As with any innovative new technology, there are positive and negative implications. Businesses must recognize that a technology powerful enough to benefit them is equally capable of hurting them.

First of all, there's no guarantee of reliability with cognitive technology. It is only as good as the information fed into the system, and the training and context that a human expert provides. In an ideal state, systems are designed to simulate and scale the reasoning, judgment, and decision-making capabilities of the most competent and expertly trained human minds. But bad human actors, say a disgruntled employee or a rogue outsider, could hijack the system, enter misleading or inaccurate data, and hold it hostage by withholding mission-critical information or by teaching the computer to process data inappropriately.
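
To make that data-poisoning risk concrete, here is a minimal sketch, using synthetic data and scikit-learn rather than anything from the article, of how flipping a fraction of training labels, the kind of tampering an insider could perform, degrades an otherwise sound model:

```python
# Train the same classifier on clean labels and on labels a bad actor
# has partially flipped, then compare held-out accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean training run.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned training run: an insider flips 30% of the training labels.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

In a real incident the tampering would be subtler than random label flips, but the mechanism is the same: the model faithfully learns whatever its training data tells it.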

Second, cognitive and artificial intelligence systems are trained to mimic the analytical processes of the human brain, not always through clear, step-by-step programming instructions like a traditional system, but through example, repetition, observation, and inference.

But if the system is sabotaged or purposely fed inaccurate information, it could infer an incorrect correlation as correct or learn a bad behavior. And because most cognitive systems are designed to operate with freedom, as humans do, they often use non-expiring, hard-coded passwords. A malicious hacker can use the same login credentials as the bot to gain access to far more data than any single individual is allowed. Security monitoring systems are sometimes configured to ignore bot or machine access logs to reduce the large volume of systemic access, but this can allow a malicious intruder, masquerading as a bot, to access systems for long periods of time and go largely undetected.
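
That monitoring blind spot can be illustrated with a short sketch; the account names and numbers below are hypothetical, not drawn from the article:

```python
# Anti-pattern: to cut alert volume, events from known service/bot
# accounts are dropped before review, so an intruder who authenticates
# with a bot's non-expiring credentials never reaches an analyst.
KNOWN_BOT_ACCOUNTS = {"svc-reportbot", "svc-etl"}  # hypothetical service accounts

def events_for_review(events):
    """Yield only the access events a human analyst will see."""
    for event in events:
        if event["account"] in KNOWN_BOT_ACCOUNTS:
            continue  # blind spot: bot traffic is assumed benign
        yield event

access_log = [
    {"account": "jsmith",        "records_read": 12},
    {"account": "svc-reportbot", "records_read": 250_000},  # attacker using stolen bot credentials
]

for event in events_for_review(access_log):
    print("review:", event)
# Only jsmith's 12-record read surfaces; the quarter-million-record
# pull under the bot account is silently filtered out.
```

A safer design baselines and rate-limits service accounts rather than excluding them from review entirely.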

In some cases, attempts to leverage new technology can have unintended consequences, and an entire organization can become a victim. In a now-classic example, Microsoft's Twitter bot, Tay, which was designed to learn how to communicate naturally with young people on social media, was compromised shortly after going live when internet trolls figured out the vulnerabilities of its learning algorithms and began feeding it racist, sexist, and homophobic content. The result was that Tay began to spew hateful and inappropriate answers and commentary on social media to millions of followers.

Finally, contrary to popular thinking, cognitive systems are not protected from hacks just because a process is automated. Chatbots are increasingly commonplace in every type of setting, including enterprise and customer call centers. By collecting personal information about users and responding to their inquiries, some bots are designed to keep learning over time how to do their jobs better. That continuous learning plays a critical role in ensuring accuracy, particularly in regulated industries like healthcare and finance that handle a high volume of confidential membership and customer information.
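
One common mitigation in regulated settings, sketched below with assumed patterns and field names rather than any production-grade redactor, is to scrub obvious personally identifiable information before a learning chatbot stores a conversation for later retraining:

```python
# Redact obvious PII before a user message enters the training store.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
]

def redact(message: str) -> str:
    for pattern, token in REDACTIONS:
        message = pattern.sub(token, message)
    return message

training_store = []

def record_for_training(user_message: str) -> None:
    """Store only the redacted form of what the user typed."""
    training_store.append(redact(user_message))

record_for_training("My SSN is 123-45-6789, email jane@example.com")
print(training_store)  # ['My SSN is [SSN], email [EMAIL]']
```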

But like any technology, these automated chatbots can also be used by malicious hackers to scale up fraudulent transactions, mislead people, steal personally identifiable information, and penetrate systems. We have already seen evidence of advanced AI tools being used to penetrate websites and steal compromising and embarrassing information on individuals, with high-profile examples such as Ashley Madison, Yahoo, and the DNC. As bad actors continue to develop advanced AI for malicious purposes, organizations will need to deploy equally advanced AI to prevent, detect, and counter these attacks.

But risks aside, there is tremendous upside for cybersecurity professionals who leverage AI and cognitive techniques. Routine tasks such as analyzing large volumes of security event logs can be automated using digital labor and machine learning, increasing accuracy. As systems become more effective at identifying malicious and unauthorized access, cybersecurity systems can become self-healing, actually updating controls and patching systems in real time as a direct result of learning and understanding how hackers exploit new approaches.
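
As one illustration of that idea, the sketch below (synthetic data and an assumed event schema, not a production pipeline) uses an unsupervised model to triage a large volume of login events so that analysts review only the statistical outliers:

```python
# Synthetic triage sketch: an IsolationForest flags outlying login
# events so humans review a handful instead of thousands.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Assumed per-event features: [hour of day, megabytes transferred].
normal = np.column_stack([rng.normal(13, 2, 5000), rng.normal(20, 5, 5000)])
suspicious = np.array([[3.0, 900.0]])  # a 3 a.m. login moving ~900 MB
events = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.001, random_state=1).fit(events)
flags = model.predict(events)  # -1 marks an outlier

print(f"{(flags == -1).sum()} of {len(events)} events escalated for review")
```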

Cognitive and AI technologies are a certainty of our future. While they hold immense potential for our productivity and quality of life, we must also be mindful of potential vulnerabilities on an equally large scale. With humans, a security breach can often be traced back to the source and sealed. With cognitive and AI breaches, the damage can become massive in seconds. Balancing the demands of automation and information security should be about making cybersecurity integral, not an afterthought, to an organization's information infrastructure.
