The positive effects of AI on cybersecurity from Forcepoint
Artificial intelligence (AI) and machine learning (ML) are playing an increasing role in cybersecurity in 2021. One of the key reasons is their ability to accurately automate some of the more mundane cybersecurity tasks that analysts would otherwise have to perform manually, such as assessing alerts and filtering out false-positive noise.
American multinational software company Forcepoint uses analytics to address two main challenges in cybersecurity. Firstly, AI automates and assists in task completion, such as Security Operations Centre (SOC) triaging tasks. SOC analysts receive anywhere from hundreds to millions of alerts from multiple security systems. Analysts must wade through this flood of alerts and determine which are false alerts and which are genuine alerts of interest that merit further investigation. This initial triage can be automated today with analytics. Using automated analytics as the ‘level 1’ analyst to cut out the false positives saves on the cost of the human analyst and makes alert responses more efficient, as humans focus only on the alerts of interest.
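The ‘level 1’ triage described above can be sketched as a simple scoring filter. This is a minimal illustration only: the feature names, weights and threshold below are hypothetical, and a real deployment would use a trained classifier rather than hand-set rules.

```python
def triage_score(alert: dict) -> float:
    """Combine a few hypothetical risk signals into a score in [0, 1]."""
    score = 0.0
    if alert.get("source_reputation", "good") == "bad":
        score += 0.4
    if alert.get("repeat_count", 0) > 10:  # same alert firing repeatedly
        score += 0.2
    if alert.get("matches_threat_intel", False):
        score += 0.4
    return min(score, 1.0)

def triage(alerts: list, threshold: float = 0.5) -> list:
    """Act as the automated 'level 1' analyst: keep only alerts of interest."""
    return [a for a in alerts if triage_score(a) >= threshold]

alerts = [
    {"id": 1, "source_reputation": "good", "repeat_count": 1},   # likely noise
    {"id": 2, "source_reputation": "bad", "matches_threat_intel": True},
]
escalated = triage(alerts)  # only alert 2 reaches a human analyst
```

The point is the division of labour: the automated filter removes the obvious false positives, and the (much smaller) escalated list is what humans investigate.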
Secondly, analytics can also help crunch numbers in scenarios where there is a significant amount of data and an application of large-scale analytics is required. This is no simple task, as each scenario must be approached differently and it becomes tricky in terms of what kind of AI analytical models will work well for each specific use case.
Forcepoint’s Director of Research and Engineering, Audra Simons, says the key lies in having a sizeable, relevant, unbiased data set for training and testing; multiple machine learning or statistical approaches can then be applied to work out which method delivers the most accurate results for the scenario. “Some examples of where this application of methods is showing success in cybersecurity today are in the processing and categorisation of websites, identification of compromised websites and classification of binaries into malware and benignware,” she says.
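Simons’ approach of trying several methods on one labelled set and keeping the most accurate can be sketched as follows. The ‘models’ here are toy threshold rules on a single hypothetical feature (the byte entropy of a binary), purely for illustration; real work would compare approaches such as logistic regression, random forests or deep networks.

```python
def accuracy(model, data):
    """Fraction of labelled examples the model classifies correctly."""
    return sum(model(x) == y for x, y in data) / len(data)

# Labelled examples: (entropy value, is_malware). Hypothetical data.
train = [(7.9, True), (7.5, True), (4.2, False), (3.8, False)]
test  = [(7.7, True), (4.0, False), (7.2, True), (3.5, False)]

# Candidate approaches, evaluated on the same held-out test set.
candidates = {
    "always_benign": lambda x: False,       # naive baseline
    "entropy_gt_6": lambda x: x > 6.0,      # simple threshold rule
}
best = max(candidates, key=lambda name: accuracy(candidates[name], test))
```

Evaluating every candidate on the same held-out test data is what makes the comparison fair; the winner is whichever method scores highest for that specific use case.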
Ease The Burden Through Automation
The support that analytics bring to threat detection and response through automation can ease the burden on employees and potentially help identify and classify threats more efficiently than other software-driven approaches. Additionally, analytics can be used to analyse large amounts of data for patterns and critical insights. Simons says: “For example, binary classification of malware and benignware has immensely benefited from deep learning. AI, as a cognitive prosthesis, does not make decisions for the user but enables them to make better decisions, such as through the use of visualisation. The point of analytics is about augmenting and assisting the human analyst, not replacing them; giving them shortcuts and tools to help them wade through large amounts of data and investigations, to monitor and fight threats to an organisation's IT infrastructure, and to assess security systems and measures for weaknesses and possible improvements.”
Build On Strong Data
Data analysis uses algorithms that continuously improve over time; to make the resulting models accurate, quality data is necessary. This means a substantial amount of applicable labelled data, large enough to provide both model training and test data, and to keep models accurate they need to be continually retrained. Simons says: “Unfortunately, most security problems out there do not come along with that type of data, they are needle in a haystack exceptions to normality.
“What we need to bear in mind is that AI analytics is not the answer to all of our security problems, and it needs relevant, unbiased data in order to be effective, using a variety of numerical, categorical, time series, and text data,” she adds.
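The requirement above, a labelled set large enough to provide both training and test portions, can be sketched as a simple split. The data here is hypothetical and tiny; real security data sets would be far larger and mix the numerical, categorical, time-series and text features Simons mentions.

```python
import random

def train_test_split(labelled, test_fraction=0.25, seed=0):
    """Shuffle a labelled data set and split it into train and test portions."""
    rng = random.Random(seed)       # fixed seed for a reproducible split
    shuffled = labelled[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

# Hypothetical labelled examples: (features, label)
labelled = [({"bytes": i}, i % 2 == 0) for i in range(100)]
train, test = train_test_split(labelled)  # 75 for training, 25 held out
```

Holding out the test portion is what lets you measure a model’s accuracy honestly, and re-running the split on fresh labelled data is part of the continual retraining the passage describes.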
Understanding Behaviour
Forcepoint models users, their behaviours and their data interactions as a baseline to detect data exfiltration and malevolent activities. The business uses a range of experts, including experimental psychologists, to model and study human behaviour and how it manifests within socio-technical models, which combine human behaviour with computer systems and applications.
“In cybersecurity, we are actively engaged in a cat and mouse game with the attackers. They come up with a new attack, we respond. They work around our response, we build better detections and protections,” says Simons.
“At the end of the day, AI analytics is just another computer programme with its own vulnerabilities. Understanding human behaviours and the differences in an employee’s intent behind any supposedly suspicious activity is crucial, whether it’s accidental, compromised or malicious. AI analytics solutions can help determine the context and intent of a particular user’s actions, like downloading large volumes of data, or logging on from multiple remote locations in a short period of time. Understanding what is normal behaviour and what is not can help to protect those who have had their accounts compromised and shed light on any accidental breaches,” she adds.
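A behavioural baseline of the kind Simons describes can be sketched with a simple deviation check. The figures and the z-score rule below are hypothetical; production systems use far richer socio-technical models, but the principle of flagging departures from a user’s own normal pattern is the same.

```python
import statistics

def is_anomalous(history, today, z_threshold=3.0):
    """Flag today's value if it falls far outside the user's baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(today - mean) > z_threshold * stdev

# Hypothetical baseline: a user's daily download volume in MB over a week.
history_mb = [120, 95, 130, 110, 105, 125, 115]

is_anomalous(history_mb, today=118)  # within normal range, not flagged
is_anomalous(history_mb, today=900)  # large download volume, flagged
```

A flag here is a prompt to establish context and intent, not a verdict: the same spike could be an accidental bulk sync, a compromised account or deliberate exfiltration.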
With people and data now operating outside traditional business boundaries, and with the mass global adoption of remote working, the application of these kinds of solutions is becoming more important than ever.
“It’s critical to take the right steps to keep people protected without sacrificing productivity,” says Simons. “If we really understand user behaviour through the judicious usage of AI analytics, we can help our security analysts spot the truly risky behaviours amongst the noise of false positive alerts and develop security models that continually evaluate and react to changes in risk, protecting organisations, their users and data,” she adds.