Artificial Intelligence as Security Solution and Weaponization by Hackers

Posted: December 13, 2019 at 3:24 pm

By Julien Legrand, Operation Security Manager, Société Générale

Artificial intelligence is a double-edged sword: it can serve as a security solution or be wielded as a weapon by hackers. AI entails developing programs and systems capable of exhibiting traits associated with human intelligence, such as the ability to adapt to a particular environment or respond intelligently to a new situation. AI technologies have been applied extensively in cybersecurity solutions, but hackers are also leveraging them to develop intelligent malware and execute stealth attacks.

Security experts have conducted extensive research to harness the capabilities of AI and incorporate them into security solutions. AI-enabled security tools and products can detect and respond to cybersecurity incidents with minimal or zero human input, and these applications have proved highly useful: twenty-five percent of IT decision-makers cite security as the primary reason for adopting AI and machine learning in their organizations. AI not only improves security posture but also automates detection and response processes, cutting the cost and time of human-driven intervention and detection.

Organizations use AI to model and monitor the behavior of system users. Monitoring the interactions between a system and its users helps identify account takeover attacks, in which malicious employees steal the login details of other users and use the compromised accounts to commit various cybercrimes. AI learns a user's activity patterns over time and treats deviations from them as anomalies. Whenever a different person uses the account, an AI-powered system can detect the unusual activity and respond either by locking the user out or by immediately alerting system admins.
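
As a concrete illustration, the sketch below flags anomalous sessions with scikit-learn's Isolation Forest, one common unsupervised approach to this kind of behavioral baselining. The feature names and sample values are hypothetical assumptions for illustration, not taken from the article or any particular product.

```python
# Minimal sketch: detecting possible account takeover by learning a
# user's baseline behavior and flagging sessions that deviate from it.
# Feature names and values below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [login_hour, session_minutes, files_accessed, failed_logins]
baseline_sessions = np.array([
    [9, 45, 12, 0],
    [10, 60, 15, 1],
    [8, 30, 8, 0],
    [11, 50, 10, 0],
    [9, 40, 11, 0],
    [10, 55, 14, 1],
])

model = IsolationForest(contamination="auto", random_state=42)
model.fit(baseline_sessions)  # learn the account's normal patterns

# A 3 a.m. session with bulk file access and repeated failed logins
new_session = np.array([[3, 240, 300, 5]])
if model.predict(new_session)[0] == -1:  # -1 means anomaly
    print("Possible account takeover: lock the account or alert admins")
```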

Antivirus tools with AI capabilities detect network or system anomalies by identifying programs that exhibit unusual behavior, since malware is coded to execute functions that differ from standard computer operations. AI antiviruses leverage machine learning to learn how legitimate programs interact with an operating system, so whenever malware is introduced to a network, they can immediately detect it and block it from accessing system resources. This contrasts with traditional signature-based antiviruses, which scan a signature database to determine whether a program is a security threat.
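
A minimal sketch of this behavior-based approach, using a random forest classifier over runtime features, follows. The feature set and tiny training sample are invented for illustration; a real product would train on far richer telemetry.

```python
# Minimal sketch: classifying a program as benign or malicious from how
# it behaves at runtime, rather than by looking it up in a signature
# database. The features and training data are illustrative assumptions.
from sklearn.ensemble import RandomForestClassifier

# Each row: [registry_writes, network_conns, files_encrypted, injected_threads]
X_train = [
    [5, 2, 0, 0],     # benign: office application
    [3, 12, 0, 0],    # benign: web browser
    [40, 1, 500, 2],  # malicious: ransomware-like behavior
    [25, 30, 0, 6],   # malicious: process injector
]
y_train = [0, 0, 1, 1]  # 0 = benign, 1 = malicious

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

unknown_program = [[35, 2, 450, 1]]  # heavy file-encryption activity
if clf.predict(unknown_program)[0] == 1:
    print("Behavioral match for malware: block access to system resources")
```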

Automated analysis of system and network data enables continuous monitoring and prompt identification of attempted intrusions; manual analysis is nearly impossible given the sheer volume of data that user activity generates. Cybercriminals use command and control (C2) tactics to penetrate network defenses undetected, for example by embedding data in DNS requests to bypass firewalls and IDS/IPS. AI-enabled cyber defenses combine anomaly detection, keyword matching, and statistical monitoring to expose a wide range of such network and system intrusions.
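
One of those anomaly checks can be sketched in a few lines: DNS tunneling tends to produce long, high-entropy hostname labels, because the encoded payload looks random while ordinary hostnames are low-entropy dictionary words. The length and entropy thresholds below are illustrative assumptions, not vetted production values.

```python
# Minimal sketch: flagging DNS requests that may carry embedded C2 data.
# Encoded payloads push the leftmost label toward long, random-looking
# strings; the thresholds here are illustrative assumptions.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    counts = Counter(s)
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def looks_like_dns_tunnel(qname: str) -> bool:
    label = qname.split(".")[0]
    return len(label) > 20 and shannon_entropy(label) > 3.8

print(looks_like_dns_tunnel("mail.example.com"))  # False: ordinary hostname
print(looks_like_dns_tunnel(
    "4e5a3b9c1d7f8a2b6c0d9e4f1a3b5c7d.evil.example"))  # True: encoded payload
```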

Cybercriminals prefer email as the primary delivery technique for the malicious links and attachments used in phishing attacks; Symantec states that 54.6 percent of received email messages are spam and may contain malicious attachments or links. Anti-phishing tools with AI and machine learning capabilities are highly effective at identifying phishing emails. They perform in-depth inspections of links, simulate clicks on those links to detect phishing signs, and apply anomaly detection techniques to every feature of a message, including the sender, attachments, links, and message body.
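
As a rough illustration, the sketch below performs a few of the link inspections such a tool might feed into its classifier. The specific rules are illustrative heuristics of my own construction, not the detection logic of any particular product.

```python
# Minimal sketch: extracting suspicious-link signals of the kind an
# AI anti-phishing pipeline could feed into a classifier.
# The rules below are illustrative heuristics, not a vetted rule set.
import re
from urllib.parse import urlparse

IP_HOST = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")  # raw IP instead of a domain

def phishing_signals(url: str) -> list[str]:
    parsed = urlparse(url)
    signals = []
    if IP_HOST.match(parsed.hostname or ""):
        signals.append("ip-literal host")
    if "@" in parsed.netloc:
        signals.append("credentials-in-URL trick")
    if parsed.netloc.count("-") > 2 or parsed.netloc.count(".") > 3:
        signals.append("oddly structured domain")
    if parsed.scheme != "https":
        signals.append("no TLS")
    return signals

print(phishing_signals("http://192.168.4.7/login"))            # two signals
print(phishing_signals("https://accounts.google.com/signin"))  # no signals
```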

Hackers, meanwhile, are turning to AI to weaponize malware and attacks and counter the advancements made in cybersecurity solutions. For instance, criminals use AI to conceal malicious code in benign applications. They program the code to execute at a specific time, say ten months after the application has been installed, or once a targeted number of users have subscribed to the application, in order to maximize the impact of the attack. Concealing such code and information requires applying AI models and deriving private keys to control where and when the malware will execute.

Moreover, hackers can predefine an application feature as an AI trigger for executing cyberattacks. Candidate features range from authentication through voice or visual recognition to identity management functions. Most applications used today contain such features, giving attackers ample opportunity to feed in weaponized AI models, derive a key, and strike at will. The malicious models can remain present for years without detection while hackers wait until applications are most vulnerable.

AI technologies are also unique in that they acquire knowledge and intelligence and adapt accordingly. Hackers are aware of these capabilities and leverage them to model adaptable attacks and create intelligent malware. During an attack, such programs can collect knowledge of what prevented the attack from succeeding and retain whatever proved useful. An AI-based attack may fail on the first attempt, but this adaptability can enable hackers to succeed in subsequent attempts. Security communities therefore need in-depth knowledge of the techniques used to develop AI-powered attacks in order to create effective mitigations and controls.

Cyber adversaries also use AI to execute intelligent attacks that self-propagate across a system or network. Smart malware exploits unmitigated vulnerabilities, increasing the likelihood of fully compromising a target; if an intelligent attack encounters a patched vulnerability, it immediately adapts and attempts to compromise the system through a different type of attack.

Lastly, hackers use AI technologies to create malware capable of mimicking trusted system components, improving the stealth of their attacks. For example, cyber actors use AI-enabled malware to automatically learn an organization's computation environment, patch update lifecycle, and preferred communication protocols, and to identify when its systems are least protected. Hackers can then execute undetectable attacks that blend into an organization's security environment. The TaskRabbit hack, which compromised 3.75 million users, is a case in point: investigations could not trace the attack. Stealth attacks are dangerous because hackers can penetrate and leave a system at will, and AI will only lead to the creation of faster and more intelligent attacks.

Disclaimer: CISO MAG does not endorse any of the claims made by the writer. The facts, opinions, and language in the article do not reflect the views of CISO MAG and CISO MAG does not assume any responsibility or liability for the same. Views expressed in this article are personal.
