Unpack the use of AI in cybersecurity, plus pros and cons

AI is under the spotlight as industries worldwide begin to investigate how the technology will help them improve their operations.

AI is far from new. As a field of scientific research, AI has been around since the 1950s. The financial industry has been using a form of AI -- dubbed expert systems -- for more than 30 years to trade stocks, make risk decisions and manage portfolios.

Each of these use cases exploits expert systems to process large amounts of data quickly at levels that far exceed the ability of humans to perform the same tasks. For instance, algorithmic stock trading systems make millions of trades per day with no human interaction.

Cybersecurity seeks to use AI and its close cousin, machine learning -- where algorithms that analyze data become better through experience -- in much the same way that the financial services industry has.

For cybersecurity professionals, that means using AI to take data feeds from potentially dozens of sources, analyze each of these inputs simultaneously in real time and then detect those behaviors that may indicate a security risk.

Beyond the use of AI and machine learning in cybersecurity risk identification, these technologies can be used to improve access control beyond the weak username and password systems in widespread use today by including support for multifactor, behavior-based, real-time access decisions. Other applications for AI include spam detection, phishing detection and malware detection.
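To make the behavior-based approach concrete, here is a minimal sketch of a real-time access decision that combines per-signal anomaly scores. The signal names, thresholds and equal weighting are illustrative assumptions, not a production design.

```python
# Minimal sketch of a behavior-based access decision, assuming hypothetical
# signals (typing cadence, login geolocation, login hour) are already scored
# between 0.0 (normal) and 1.0 (anomalous) by upstream collectors.

def access_decision(signal_scores: dict[str, float],
                    step_up_threshold: float = 0.5,
                    deny_threshold: float = 0.8) -> str:
    """Combine per-signal anomaly scores into a real-time access decision."""
    # Weight signals equally here; a production system would learn weights.
    risk = sum(signal_scores.values()) / len(signal_scores)
    if risk >= deny_threshold:
        return "deny"
    if risk >= step_up_threshold:
        return "require-mfa"   # escalate to a second factor
    return "allow"

print(access_decision({"typing_cadence": 0.2, "geolocation": 0.9, "login_hour": 0.6}))
# -> "require-mfa"
```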

Today's networked environments are extremely complex. Monitoring network performance is challenging enough; detecting unwanted behavior that may indicate a security threat is even more difficult.

Traditional incident response models are based on a three-pronged concept: protect, detect and respond. Cybersecurity experts have long known that of the three, detect is the weak link. Detection is hard to do and is often not done well.

In 2016, Gartner unveiled its own predict, prevent, detect and respond framework that CISOs could use to communicate a security strategy. Machine learning is particularly useful in predicting, preventing and detecting.

There are enormous amounts of data that must be analyzed to understand network behavior. The integration of machine learning and the use of AI in cybersecurity tools will not just illuminate security threats that previously may have gone undetected, but will help enterprises diagnose and respond to incursions more effectively.

AI-based security algorithms can identify malicious behavior patterns in huge volumes of network traffic far better than people can. However, this technology can only identify the behavioral patterns the algorithms have been trained to identify. With machine learning, AI can go beyond the limits of fixed algorithms and automatically improve its performance through learning or experience. The ability of AI -- and machine learning in particular -- to make decisions based on data rather than rules promises to yield significant improvements in detection.

Let's examine how the integration of AI and machine learning might help improve the performance of intrusion detection and prevention systems (IDSes/IPSes). A typical IDS/IPS relies upon detection rules, known as signatures, to identify potential intrusions, policy violations and other issues.

The IDS/IPS looks for traffic that matches the installed signatures. But it can identify malicious traffic only if a signature matching that traffic is installed: no signature, no detection. This means the IDS/IPS cannot detect attacks whose signatures have yet to be developed. A signature-based IDS/IPS can also be easy to circumvent: small changes to an attack may keep it from matching any signature.
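A minimal sketch of what signature matching looks like in practice, using two illustrative (not real-world) signatures; note how a trivially obfuscated attack slips past:

```python
# Minimal sketch of signature-based detection: flag traffic only when a
# payload matches a known pattern. The signatures below are illustrative.
import re

SIGNATURES = {
    "sql-injection": re.compile(r"union\s+select", re.IGNORECASE),
    "path-traversal": re.compile(r"\.\./\.\./"),
}

def match_signatures(payload: str) -> list[str]:
    """Return the names of all signatures the payload matches."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(payload)]

print(match_signatures("GET /?q=1 UNION SELECT password FROM users"))  # ['sql-injection']
print(match_signatures("GET /?q=1 UNI/**/ON SELECT ..."))              # [] -- trivially evaded
```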

To close this gap, IDSes/IPSes have for years employed heuristic anomaly detection. This lets systems look for behavior that is out of the ordinary and attempt to classify anomalous traffic as benign, suspicious or unknown. When suspicious or unknown traffic is flagged, these systems generate an alert, which requires a human operator to determine whether the threat is malicious. But heuristic IDSes/IPSes are hobbled by the sheer volume of data to be analyzed, the number of alerts generated and especially the large percentage of false positives. As a result, signature-based IDSes/IPSes dominate.
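A minimal sketch of the heuristic approach, assuming a single bytes-per-connection feature and invented thresholds; real systems score many features at once:

```python
# Minimal sketch of heuristic anomaly detection: score traffic against a
# statistical baseline and classify it as benign, suspicious or unknown.
# The thresholds and the single bytes-per-connection feature are assumptions.
from statistics import mean, stdev

baseline = [1200, 1350, 1100, 1280, 1330, 1190, 1260]  # bytes/connection, normal traffic

def classify(observed_bytes: float, history=baseline) -> str:
    mu, sigma = mean(history), stdev(history)
    z = abs(observed_bytes - mu) / sigma      # deviations from the baseline
    if z < 2:
        return "benign"
    if z < 4:
        return "unknown"      # alert: needs a human analyst
    return "suspicious"       # alert: strong deviation from baseline

print(classify(1290))    # benign
print(classify(2100))    # suspicious -- generates an alert
```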


One way to make the heuristic IDS/IPS more efficient would be to introduce machine learning-generated probability scores that indicate which activity is benign and which is harmful.
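A minimal sketch of how such scores might be used to triage alerts; the cutoff values and the alert record shape are assumptions for illustration:

```python
# Minimal sketch of using model-generated probability scores to triage
# heuristic alerts instead of sending every one to a human analyst.

def triage(alerts: list[dict], auto_close: float = 0.05, auto_block: float = 0.95):
    """Route each alert based on its malicious-probability score."""
    for alert in alerts:
        p = alert["p_malicious"]        # produced by a trained classifier
        if p <= auto_close:
            alert["action"] = "close"   # almost certainly benign
        elif p >= auto_block:
            alert["action"] = "block"   # almost certainly malicious
        else:
            alert["action"] = "review"  # the gray zone still needs an analyst
    return alerts

print(triage([{"id": 1, "p_malicious": 0.02},
              {"id": 2, "p_malicious": 0.97},
              {"id": 3, "p_malicious": 0.40}]))
```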

The challenge, however, is that, of the billions of actions that occur on networks, relatively few are malicious. It is a double-edged sword: There is too much data for humans to process manually and too little malicious activity for machine learning tools to learn effectively on their own.

To address this issue, security analysts train machine learning systems by manually labeling and classifying potential anomalies in a process called supervised learning. Once a machine learning cybersecurity system learns about an attack, it can search for other instances that reflect the same or similar behavior. This method may sound like nothing more than automating the discovery and creation of attack signatures, but the knowledge a machine learning system gains about attacks can be applied far more comprehensively than a traditional signature detection system allows.
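A minimal sketch of that supervised workflow using scikit-learn and a toy, analyst-labeled data set; the three features (kilobytes sent, distinct ports contacted, failed logins) are illustrative assumptions:

```python
# Minimal sketch of supervised learning on analyst-labeled anomalies.
# Real systems use far richer features and far more labeled examples.
from sklearn.linear_model import LogisticRegression

# Each row is one flagged anomaly: [KB sent, distinct ports, failed logins].
# Labels come from human analysts: 0 = benign, 1 = malicious.
X = [[1.2,  2, 0],   # benign: normal transfer
     [0.9,  1, 0],   # benign
     [85.0, 40, 0],  # malicious: port scan plus exfiltration
     [70.0, 35, 2],  # malicious
     [1.5,  3, 9],   # malicious: brute-force logins
     [1.1,  2, 1]]   # benign
y = [0, 0, 1, 1, 1, 0]

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new anomaly: a probability it is malicious, not just a yes/no match.
print(model.predict_proba([[60.0, 30, 1]])[0][1])
```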

That's because machine learning systems can look for and identify behavior that is similar or related to what they have learned, rather than rigidly focusing on behavior that exactly matches a traditional signature.

The use of AI in cybersecurity offers the possibility of using technology to cut through the complexity of monitoring current networks, thus improving risk and threat detection. However, the use of AI in cybersecurity is a two-way street. The use of malicious AI, also known as adversarial AI, is growing. A malicious actor could use AI to make a series of small changes to a network environment that, while individually insignificant, accumulate over time to change the overall behavior of the machine learning cybersecurity system.
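A minimal sketch of that poisoning pattern against a self-adapting baseline; all numbers are invented, but the mechanics show how steps that never trip the alert threshold can still drag the model far from its original notion of normal:

```python
# Minimal sketch of gradual data poisoning: each step stays below the alert
# threshold, yet after many steps traffic that once looked anomalous is
# treated as normal. Numbers are illustrative assumptions.

baseline = 1_000.0          # learned "normal" bytes/connection
ALERT_FACTOR = 1.5          # alert if traffic exceeds 150% of baseline
LEARN_RATE = 0.1            # baseline adapts toward recent traffic

def observe(value: float) -> bool:
    """Return True if an alert fires; otherwise fold the value into the baseline."""
    global baseline
    if value > baseline * ALERT_FACTOR:
        return True                              # alert: value is not learned
    baseline += LEARN_RATE * (value - baseline)  # quietly shifts the model
    return False

# Attacker keeps traffic just under the alert line, step after step.
for _ in range(60):
    observe(baseline * 1.4)

print(round(baseline))       # ~10,500: roughly tenfold drift from the original 1,000
print(observe(9_000))        # False -- would have alerted against the original baseline
```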

This adversarial AI security threat is not limited to AI used in cybersecurity. It is a potential threat wherever AI is used, including in common industrial control systems and computer vision systems used in banking applications.

This means the AI models themselves are becoming a new part of the attack surface that must be secured. New security practices will have to be adopted. Some protection strategies will look like things we already know how to do, such as rate limiting and input validation.
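A minimal sketch of those two familiar protections wrapped around a model endpoint; the window, query limit and feature bounds are illustrative assumptions:

```python
# Minimal sketch of two familiar protections applied to a model endpoint:
# validate inputs before inference and rate-limit callers to slow down
# model probing. Limits and bounds are assumptions for illustration.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_QUERIES = 100                                   # per caller per window
FEATURE_BOUNDS = [(0, 1e7), (0, 65535), (0, 1000)]  # sane range per feature

_history: dict[str, deque] = defaultdict(deque)

def guarded_predict(caller: str, features: list[float], model) -> float:
    now = time.time()
    q = _history[caller]
    while q and now - q[0] > WINDOW_SECONDS:    # drop queries outside the window
        q.popleft()
    if len(q) >= MAX_QUERIES:
        raise RuntimeError("rate limit exceeded: possible model probing")
    q.append(now)

    if len(features) != len(FEATURE_BOUNDS):
        raise ValueError("unexpected feature count")
    for value, (lo, hi) in zip(features, FEATURE_BOUNDS):
        if not lo <= value <= hi:               # reject out-of-range inputs
            raise ValueError("feature out of validated range")

    return model.predict_proba([features])[0][1]
```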

Over time, adversarial training could be included as part of the supervised learning process (a sketch follows below). The uses and benefits of AI and machine learning in cybersecurity are real and necessary: there is too much data to process manually, and intrusions in today's large network data sets can take months to detect. AI can help detect malicious traffic, but it takes significant effort to develop and train an effective AI cybersecurity system. And, as with any technology, AI can also be deployed maliciously, so mitigating the impact of malicious AI is now a fact of today's security environment.
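Here is a minimal sketch of that adversarial-training idea: perturbed copies of known-malicious samples keep their labels, teaching the model that near-misses are still malicious. The perturbation scale and augmentation count are assumptions:

```python
# Minimal sketch of folding adversarial examples into supervised training.
import random

def perturb(sample: list[float], scale: float = 0.05) -> list[float]:
    """Return a copy of the sample with small random feature changes."""
    return [v * (1 + random.uniform(-scale, scale)) for v in sample]

def augment(X: list[list[float]], y: list[int], copies: int = 3):
    """Add perturbed copies of each malicious sample to the training set."""
    X_aug, y_aug = list(X), list(y)
    for features, label in zip(X, y):
        if label == 1:                       # only augment malicious samples
            for _ in range(copies):
                X_aug.append(perturb(features))
                y_aug.append(1)
    return X_aug, y_aug
```

The augmented set could then be fed to the same kind of supervised training shown earlier, hardening the model against attacks that are deliberately varied to dodge what it has already learned.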
