Security Think Tank: Ignore AI overheads at your peril

Artificial intelligence (AI) and machine learning (ML) have huge potential in many areas of business, particularly where there is a need to automate repetitive tasks.

This is of strategic importance for the IT security sector. Growing organisations don't always have the capability to scale up back-office compliance and security teams at a rate that is proportional to their expansion, leaving the existing function to do more with less; automating wherever possible reduces these pressures without compromising compliance.

Of course, AI and ML solutions are not new. We are already witnessing the success of adopting AI to automate everyday tasks such as identifying potential fraud, authenticating users and removing user access. It is ideal for repetitive work such as pattern analysis and filtering source data to determine, for example, whether something is an incident and, if so, whether it is critical. As a result, tasks such as reviewing blocked emails, websites and images no longer have to be performed manually.
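By way of illustration, a triage step of this kind can be as simple as a text classifier trained on past analyst decisions. The sketch below is purely indicative, using invented data and the open-source scikit-learn library rather than any particular vendor's product:

```python
# Illustrative sketch only: a minimal text classifier for triaging
# blocked emails, assuming a labelled history of past analyst verdicts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: message bodies and past decisions.
emails = [
    "Your invoice is attached, please remit payment immediately",
    "Team lunch is moved to 1pm on Friday",
    "Click here to verify your account or it will be suspended",
    "Minutes from yesterday's project meeting attached",
]
labels = ["malicious", "benign", "malicious", "benign"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# New blocked messages are scored instead of being reviewed one by one.
print(model.predict(["Verify your account now to avoid suspension"]))
```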

AI's ability to simultaneously identify multiple data points that are indicators of fraud, rather than potential incidents having to be investigated line by line, also helps hugely with pinpointing malicious behaviour.

Predicting events before they occur is harder, but ML can help enterprises stay ahead of potential threats: existing datasets, past outcomes and insight from security breaches at similar organisations all contribute to a holistic overview of when the next attack may occur. Fraud management solutions, security information and event management (SIEM), network traffic detection and endpoint detection all use learning algorithms, drawing on previous usage data and shared pattern recognition, to establish normal patterns of use and flag outliers as potential risks to the organisation.
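To make the idea concrete, the sketch below shows, in much simplified form, how an unsupervised learner can be fitted to normal usage and then flag outliers. The features and figures are assumptions chosen for illustration, not taken from any real product:

```python
# Illustrative sketch only: flagging outliers in usage data with an
# unsupervised learner, in the spirit of the tools described above.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [logins per hour, MB transferred].
normal_usage = np.array([[3, 120], [4, 150], [2, 90], [3, 110], [5, 160]])

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_usage)  # learn what "normal" looks like

# Score new sessions; -1 marks an outlier worth an analyst's attention.
new_sessions = np.array([[4, 130], [90, 9000]])
print(detector.predict(new_sessions))  # e.g. [ 1 -1 ]
```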

This capability is also critical in counteracting cyber attacks. Rather than manually trawling through a vast number of log files after an event has occurred, known intrusion methods can be identified in real time and mitigating action taken before much of the damage can occur.
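A minimal sketch of this kind of real-time matching might look like the following; the signatures here are deliberately simplified stand-ins for a production rule set:

```python
# Illustrative sketch only: matching known intrusion patterns against a
# live log stream, rather than trawling files after the event.
import re

SIGNATURES = {
    "sql_injection": re.compile(r"(union\s+select|or\s+1=1)", re.I),
    "path_traversal": re.compile(r"\.\./\.\./"),
}

def scan_line(line: str) -> list[str]:
    """Return the names of any known intrusion patterns in a log line."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(line)]

for line in ["GET /index.php?id=1 UNION SELECT password FROM users",
             "GET /images/logo.png"]:
    hits = scan_line(line)
    if hits:
        print(f"ALERT {hits}: {line}")  # trigger mitigation in real time
```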

To date, the main focus for the use of AI has been on the more technical security elements such as detection, incident management and other repeatable tasks. But these are early days, and there are many other areas that would benefit from its adoption. Governance, risk and compliance (GRC), for example, requires security professionals to crunch large amounts of data to spot risk trends and understand where non-compliance is causing incidents.

Early discussions around AI promised that it would revolutionise information security operations and reduce the amount of work that needed to be performed manually.

As outlined above, it has undoubtedly enabled new areas to be explored, while detecting attacks faster than any human manually looking through data. However, it is not a silver bullet and it comes with overheads, which are often forgotten.

It used to be that organisations installed logging systems that captured critical audit trails; the challenge was finding the time to look at the logs generated, a task that is now undertaken by AI scripts. However, while it's easy enough to connect an application to an AI tool so that it can scan for suspicious activity, the AI system must first be set up so that it understands the format of the logs, and what qualifies as an event that needs flagging. In other words, to be effective, it needs training for the specific needs of each enterprise.
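The sketch below illustrates the sort of site-specific setup involved, with an assumed log layout and an assumed definition of what counts as an event; both would differ from one enterprise to the next:

```python
# Illustrative sketch only: teaching a tool one site's log format and
# one site's definition of a reportable event. Field positions and the
# "repeated failed logins" rule are assumptions for this example.
import csv
import io

RAW_LOG = """2024-05-01T10:02:11,auth,alice,login_failed
2024-05-01T10:02:15,auth,alice,login_failed
2024-05-01T10:03:40,files,bob,download"""

def parse(raw: str):
    """Map this site's CSV log layout onto named fields."""
    for ts, source, user, action in csv.reader(io.StringIO(raw)):
        yield {"time": ts, "source": source, "user": user, "action": action}

# Site-specific rule: repeated failed logins by one user is an event.
failures: dict[str, int] = {}
for record in parse(RAW_LOG):
    if record["action"] == "login_failed":
        failures[record["user"]] = failures.get(record["user"], 0) + 1
        if failures[record["user"]] >= 2:
            print(f"EVENT: repeated failed logins for {record['user']}")
```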

It is important not to underestimate these setup costs, along with the resource requirements to monitor the analytics AI provides. Incident management processes still need to be manually detailed so that, once an event has been detected, it can be investigated to make sure it won't impact the organisation.

Once AI is up and running, it is a transformative tool for the organisation, but training it to interpret what action needs to be taken, as well as to rule out false positives, is a time-consuming exercise that needs to be factored into planning and budgets.

AI and ML introduce unprecedented speed and efficiency into the process of maintaining a secure IT estate, making them ideal tools for a predictive IT security stance.

But AI and ML cannot eliminate risk, regardless of how advanced they are, especially when there is an over-reliance on the capabilities of the technology, while its complexities are under-appreciated. Ultimately, risks such as false positives, as well as failure to identify all the threats faced by an organisation, are ever-present within the IT landscape.

Organisations deploying any automated responses therefore need to maintain a balance between specialist human input and technological solutions, while appreciating that AI and ML are evolving technologies. Ongoing training enables the team to stay ahead of the threat curve, a critical consideration given that attackers also use AI and ML tools and techniques; defenders need to adapt continually in order to mitigate them.

Successful AI and ML will mean different things to different organisations. Metrics may revolve around the time saved by analysts, how many incidents are identified, the number of false positives removed, and so on. These should be weighed against the resource required to configure, manage and review the performance of the tools. As with almost any IT security project, the overall value needs to be viewed through the eyes of the business, judged by its role in achieving corporate objectives and reducing risk.
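As a purely hypothetical illustration of that weighing-up exercise, with every figure a placeholder rather than a benchmark:

```python
# Illustrative sketch only: weighing the benefits of an AI tool against
# its running costs. All figures are hypothetical placeholders.
incidents_found = 420           # incidents surfaced by the tool
false_positives_removed = 1300  # alerts suppressed before reaching analysts
minutes_saved_per_alert = 15
tuning_hours_per_month = 60     # configuration, review and retraining

hours_saved = (incidents_found + false_positives_removed) * minutes_saved_per_alert / 60
net_hours = hours_saved - tuning_hours_per_month
print(f"Analyst hours saved: {hours_saved:.0f}, net of tuning: {net_hours:.0f}")
```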
