We explore how artificial intelligence (AI) and machine learning (ML) can be incorporated into cyber security.
As the devices used for work continue to diversify, so do cyber attacks, but AI can help prevent them.
As cyber attacks become more diverse in nature and in their targets, it's essential that cyber security staff have the right visibility to determine how to fix vulnerabilities, and AI can help solve problems that its human colleagues can't tackle alone.
"Cyber security resembles a game of chess," said Greg Day, chief security officer EMEA at Palo Alto Networks. "The adversary looks to outmanoeuvre the victim, the victim aims to stop and block the adversary's attack. Data is the king and the ultimate prize."
In 1996, the AI chess system Deep Blue won its first game against world champion Garry Kasparov. It has since become clear that AI can programmatically think broader, faster and further outside the norm, and that's true of many of its applications in cyber security now too.
With this in mind, we explore particular use cases for AI in cyber security that are in place today.
Day went on to explain how AI can work alongside cyber security staff to keep the organisation secure.
"We all know there aren't enough cyber security staff in the market, so AI can help to fill the gap," he said. "Machine learning, a form of AI, can read the input from SOC analysts and transpose it into a database, which becomes ever expanding.
"The next time the SOC analyst enters similar symptoms, they are presented with previous similar cases along with the solutions, based on both statistical analysis and the use of neural nets, reducing the human effort.
"If there's no previous case, the AI can analyse the characteristics of the incident and suggest which SOC engineers would be the strongest team to solve the problem, based on past experiences.
"All of this is effectively a bot, an automated process that combines human knowledge with digital learning to give a more effective hybrid solution."
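A minimal sketch of the kind of case matching Day describes might look like the following: a new ticket's symptoms are compared with previously resolved incidents using TF-IDF vectors and cosine similarity. The incident records, field names and `suggest_similar` helper are hypothetical illustrations, and a production system would layer on the statistical analysis and neural nets he mentions.

```python
# Hypothetical sketch of SOC case retrieval: match a new ticket's symptoms
# against previously resolved incidents using TF-IDF and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative past cases; a real system would pull these from the SOC database.
past_incidents = [
    {"symptoms": "multiple failed logins followed by privilege escalation",
     "resolution": "reset credentials and audit the admin group"},
    {"symptoms": "outbound traffic spike to unknown IP after phishing email",
     "resolution": "isolate host, block destination IP, rebuild endpoint"},
    {"symptoms": "files encrypted on file share with ransom note dropped",
     "resolution": "restore from backup, patch SMB, rotate credentials"},
]

vectoriser = TfidfVectorizer(stop_words="english")
case_matrix = vectoriser.fit_transform(c["symptoms"] for c in past_incidents)

def suggest_similar(new_symptoms: str, top_n: int = 2):
    """Return the most similar past cases with their recorded resolutions."""
    scores = cosine_similarity(vectoriser.transform([new_symptoms]), case_matrix)[0]
    ranked = sorted(zip(scores, past_incidents), key=lambda pair: pair[0], reverse=True)
    return [(round(float(score), 2), case["resolution"]) for score, case in ranked[:top_n]]

print(suggest_similar("several failed logins then an unexpected new admin account"))
```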
Mark Greenwood, head of data science at Netacea, delved into the role that bots play within cyber security, keeping in mind that companies must distinguish the good from the bad.
"Today, bots make up the majority of all internet traffic," explained Greenwood. "And most of them are dangerous. From account takeovers using stolen credentials to fake account creation and fraud, they pose a real cyber security threat.
"But businesses can't fight automated threats with human responses alone. They must employ AI and machine learning if they're serious about tackling the bot problem. Why? Because to truly differentiate between good bots (such as search engine scrapers), bad bots and humans, businesses must use AI and machine learning to build a comprehensive understanding of their website traffic.
"It's necessary to ingest and analyse a vast amount of data, and AI makes that possible, while taking a machine learning approach allows cyber security teams to adapt their technology to a constantly shifting landscape.
"By looking at behavioural patterns, businesses will get answers to the questions 'what does an average user journey look like?' and 'what does a risky, unusual journey look like?'. From here, we can unpick the intent of their website traffic, getting and staying ahead of the bad bots."
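As a toy illustration of the behavioural approach Greenwood outlines, a simple classifier can separate humans, good bots and bad bots given a handful of per-session features. The features, labels and thresholds below are hypothetical; real traffic analysis would use far richer signals and much more data.

```python
# Hypothetical sketch: classify website sessions as human, good bot or bad bot
# from simple behavioural features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Features per session: [requests_per_minute, avg_seconds_between_pages,
#                        distinct_pages_visited, login_attempts]
X_train = np.array([
    [3,  25, 6,   1],   # human: slow, varied browsing
    [5,  18, 9,   0],   # human
    [40, 1,  300, 0],   # good bot: fast crawl, no logins (e.g. search engine)
    [35, 2,  250, 0],   # good bot
    [60, 1,  3,   80],  # bad bot: credential stuffing against the login page
    [55, 1,  2,   95],  # bad bot
])
y_train = ["human", "human", "good_bot", "good_bot", "bad_bot", "bad_bot"]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

new_session = [[50, 1, 4, 70]]        # fast and login-heavy: likely credential stuffing
print(model.predict(new_session)[0])  # expected: "bad_bot"
```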
When considering the aspects of cyber security that can benefit from the technology, Tim Brown, vice-president of security architecture at SolarWinds, said that AI can play a role in protecting endpoints. This is becoming ever more important as the number of remote devices used for work rises.
"By following best practice advice and staying current with patches and other updates, an organisation can be reactive and protect against threats," said Brown. "But AI may give IT and security professionals an advantage against cyber criminals.
"Antivirus (AV) versus AI-driven endpoint protection is one such example; AV solutions often work based on signatures, and it's necessary to keep up with signature definitions to stay protected against the latest threats. This can be a problem if virus definitions fall behind, either because of a failure to update or a lack of knowledge from the AV vendor. If a new, previously unseen ransomware strain is used to attack a business, signature protection won't be able to catch it.
"AI-driven endpoint protection takes a different tack, by establishing a baseline of behaviour for the endpoint through a repeated training process. If something out of the ordinary occurs, AI can flag it and take action, whether that's sending a notification to a technician or even reverting to a safe state after a ransomware attack. This provides proactive protection against threats, rather than waiting for signature updates.
"The AI model has proven itself to be more effective than traditional AV. For many of the small and midsize companies an MSP serves, the cost of AI-driven endpoint protection typically covers only a small number of devices and should therefore be of less concern. The other thing to consider is how much clean-up costs after an infection; if an AI-driven solution helps to avoid infection in the first place, it can pay for itself by avoiding those clean-up costs and, in turn, create higher customer satisfaction."
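The baseline-then-flag approach Brown describes can be sketched with an off-the-shelf anomaly detector. The endpoint telemetry features and thresholds below are hypothetical, and a real agent would train on far more signals over a much longer window.

```python
# Hypothetical sketch of baseline-based endpoint protection: train an isolation
# forest on normal activity, then flag readings that fall outside the baseline.
import numpy as np
from sklearn.ensemble import IsolationForest

# Features per time window: [processes_spawned, files_modified,
#                            outbound_connections, cpu_percent]
rng = np.random.default_rng(0)
normal_activity = rng.normal(loc=[20, 15, 5, 30], scale=[5, 4, 2, 10], size=(500, 4))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_activity)            # the "repeated training" builds the baseline

ransomware_like = [[22, 900, 4, 95]]     # sudden mass file modification, high CPU
if detector.predict(ransomware_like)[0] == -1:
    print("Anomaly flagged: notify technician or trigger rollback to a safe state")
```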
With more employees working from home, and possibly using their personal devices to complete tasks and collaborate with colleagues, it's important to be wary of the scams circulating via text message.
"With malicious actors recently diversifying their attack vectors, using Covid-19 as bait in SMS phishing scams, organisations are under a lot of pressure to bolster their defences," said Brian Foster, senior vice-president of product management at MobileIron.
"To protect devices and data from these advanced attacks, the use of machine learning in mobile threat defence (MTD) and other forms of managed threat detection continues to evolve as a highly effective security approach.
"Machine learning models can be trained to instantly identify and protect against potentially harmful activity, including unknown and zero-day threats that other solutions can't detect in time. Just as important, when machine learning-based MTD is deployed through a unified endpoint management (UEM) platform, it can augment the foundational security provided by UEM to support a layered enterprise mobile security strategy.
"Machine learning is a powerful, yet unobtrusive, technology that continually monitors application and user behaviour over time so it can identify the difference between normal and abnormal behaviour. Targeted attacks usually produce very subtle changes in the device, most of which are invisible to a human analyst. Sometimes detection is only possible by correlating thousands of device parameters through machine learning."
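One way to picture the idea of correlating many device parameters, reduced here to three hypothetical ones, is to learn a multivariate baseline of normal telemetry and score new readings by Mahalanobis distance, so that correlated shifts stand out even when no single value looks alarming on its own. This is an illustrative sketch under assumed parameters, not MobileIron's implementation.

```python
# Hypothetical sketch: score device readings against a multivariate baseline
# using Mahalanobis distance, so subtle correlated changes become visible.
import numpy as np

rng = np.random.default_rng(1)
# Columns: battery_drain_rate, background_network_kb_s, new_processes_per_hour
baseline = rng.normal(loc=[5.0, 20.0, 3.0], scale=[1.0, 5.0, 1.0], size=(1000, 3))

mean = baseline.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))

def mahalanobis(reading: np.ndarray) -> float:
    """Distance of one device reading from the learned baseline."""
    delta = reading - mean
    return float(np.sqrt(delta @ cov_inv @ delta))

normal_reading = np.array([5.2, 21.0, 3.1])
subtle_attack = np.array([7.5, 45.0, 6.0])  # each value plausible alone, unusual together

print(mahalanobis(normal_reading))  # small distance: within the baseline
print(mahalanobis(subtle_attack))   # large distance: flag for investigation
```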
These use cases and more demonstrate how effectively AI and cyber security staff can work together. However, Mike MacIntyre, vice-president of product at Panaseer, believes that the space still has hurdles to overcome before this really comes to fruition.
"AI certainly has a lot of promise, but as an industry we must be clear that it's currently not a silver bullet that will alleviate all cyber security challenges and address the skills shortage," said MacIntyre. "This is because AI is currently just a term applied to a small subset of machine learning techniques. Much of the hype surrounding AI comes from how enterprise security products have adopted the term, and the misconception (wilful or otherwise) about what constitutes AI.
"The algorithms embedded in many modern security products could, at best, be called narrow, or weak, AI; they perform highly specialised tasks in a single, narrow field and have been trained on large volumes of data specific to a single domain. This is a far cry from general, or strong, AI, which is a system that can perform any generalised task and answer questions across multiple domains. Who knows how far away such a system is (there is much debate, ranging from the next decade to never), but no CISO should be factoring such a tool into their three-to-five-year strategy.
"Another key hurdle that is hindering the effectiveness of AI is the problem of data integrity. There is no point deploying an AI product if you can't get access to the relevant data feeds or aren't willing to install something on your network. The future for security is data-driven, but we are a long way from AI products following through on the promises of their marketing hype."