What are the benefits and risks of AI in cybersecurity?
Pillsbury, an international law firm with a particular focus on the technology & media, energy, financial, and real estate & construction sectors, has published a new research report highlighting the important role artificial intelligence (AI) stands to play in defending against cyberattacks and data leaks.
Titled "Artificial Intelligence & Cybersecurity: Balancing Innovation, Execution and Risk" and written by The Economist Intelligence Unit (EIU), the report examines how AI can help strengthen cybersecurity, how the growing need for data to train AI systems is intensifying concerns around privacy, and how companies can anticipate risk.
"As the public and private sectors have embraced digital transformation, their vulnerability to cyber threats has expanded considerably," said Pillsbury's firmwide Technology Industry Group leader Justin Hovey. "However, AI tools are well-suited to address some of the largest gaps in existing cyber defences. Our hope is that this research can help organisations better understand the technology and therefore better protect themselves and the individuals they serve."
AI being used in the cybersecurity industry
In a recent EIU survey, nearly half of respondents (48.9%) cited AI and machine learning (ML) as the emerging technologies best deployed to counter nation-state cyberattacks directed toward private organisations, followed by cloud computing (47.5%), which is also often credited with enhancing cybersecurity. The market for AI in cybersecurity is also predicted to grow at a compound annual growth rate (CAGR) of 23.6% from 2020 to 2027, reaching a value of US$46.3bn by 2027.
Leveraging existing and emerging threat intelligence, AI can automate incident detection. The continuous monitoring that AI provides is therefore one of its main advantages, along with its ability to detect even minute irregularities in a system and flag them as anomalies. A recent survey of 4,500 senior business decision-makers found that data security was the main reason to implement AI within their organisations, ahead of process automation and business process optimisation, among other areas.
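The continuous monitoring described above can be illustrated with a minimal sketch of statistical anomaly detection. The example below flags minutes in which a count of failed logins deviates sharply from the baseline; all data, names, and the z-score threshold are hypothetical stand-ins, and real AI-based systems use far richer models than a simple z-score.

```python
# Toy sketch: flag anomalous spikes in per-minute failed-login counts.
# A z-score stands in for the far more sophisticated models real
# AI-driven monitoring tools apply to the same kind of telemetry.
from statistics import mean, stdev

def flag_anomalies(counts, threshold=3.0):
    """Return indices of counts whose z-score exceeds the threshold."""
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:
        return []  # perfectly flat traffic: nothing stands out
    return [i for i, c in enumerate(counts)
            if abs(c - mu) / sigma > threshold]

baseline = [4, 5, 3, 6, 4, 5, 4, 5, 3, 4, 5, 4]  # normal minutes
spike = baseline + [60]                           # sudden burst of failures

print(flag_anomalies(baseline))  # [] - nothing flagged
print(flag_anomalies(spike))     # [12] - the burst is flagged
```

The point of the sketch is the workflow, not the statistics: a monitoring system continuously scores incoming telemetry against a learned notion of "normal" and surfaces only the deviations for human attention.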
Can AI expose new risks?
There are three primary areas in which AI presents a cybersecurity risk, according to Jessica Newman, programme lead of the AI Security Initiative at UC Berkeley.
First, introducing AI could add complexity and opacity to the products, services, and infrastructure we rely upon. “There’s a shocking lack of industry best practices or regulations to ensure that those AI systems are actually reliable, robust, transparent and free of bias,” she says. “We are increasing the complexity of a good portion of the systems that we rely upon across industries, without adequate insight into how those AI systems are making decisions and whether they should be trusted.”
Second, there are unique vulnerabilities and safety considerations. “AI technologies are currently susceptible to adversarial attacks, such as data poisoning and input attacks,” explains Ms. Newman. Third, AI technologies are enabling mass creation of synthetic media. “AI can support the creation of disinformation through large language models that predict text,” continues Ms. Newman.
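The data poisoning Ms. Newman mentions can be made concrete with a deliberately tiny sketch. Here an attacker injects malicious-looking samples mislabelled as benign into the training set of a 1-D nearest-centroid classifier, shifting its decision so that a genuine attack slips through. All numbers and labels are hypothetical, and real poisoning attacks target far more complex models.

```python
# Toy illustration of data poisoning against a 1-D nearest-centroid
# classifier. Features are hypothetical "suspiciousness scores".
def centroid(points):
    return sum(points) / len(points)

def classify(x, class0, class1):
    """Return 0 (benign) or 1 (malicious) by nearest class centroid."""
    c0, c1 = centroid(class0), centroid(class1)
    return 0 if abs(x - c0) <= abs(x - c1) else 1

benign = [1, 2, 3]        # scores of legitimate traffic
malicious = [10, 11, 12]  # scores of known attack traffic

print(classify(8, benign, malicious))  # 1: correctly flagged as malicious

# Attacker injects high-scoring samples mislabelled as benign,
# dragging the benign centroid from 2 up to 7.
poisoned_benign = benign + [11, 12, 13]
print(classify(8, poisoned_benign, malicious))  # 0: attack now slips through
```

The mechanism, not the model, is the takeaway: because the classifier's behaviour is derived entirely from its training data, an adversary who can corrupt that data can steer its decisions without ever touching the deployed system.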
The combination of the added complexity that AI introduces into systems, AI's own susceptibility to attack, and the ability of adversaries to use AI to craft more sophisticated attacks illustrates that, when it comes to cybersecurity, the challenges are as significant as the opportunities.