Mandiant highlights how AI could impact cyberattacks

Here are some key findings from Google subsidiary Mandiant, whose latest research highlights AI’s role in cyber threats and the need for more robust cybersecurity measures

Mandiant, a subsidiary of Google, has published a blog post detailing the ways it has observed threat actors experimenting with, and showing interest in, using AI for malicious activities.

Since at least 2019, Mandiant has tracked threat actors’ interest in and use of AI capabilities to facilitate a variety of malicious activity. The report highlights how AI could significantly amplify malicious cyber activity and why more sophisticated cybersecurity measures are needed to counter these threats.

This comes at a time of heightened cyber fraud risk linked to AI use, with cyberattacks and identity fraud intensifying in both scale and sophistication. It is therefore important that regulators and government organisations continue to pursue ‘AI-for-good’ strategies.

Generative AI in the hands of threat actors: Reshaping the cyber landscape

Ransomware and phishing attacks on businesses are continuing to increase worldwide, driving greater demand for rigorous cybersecurity measures.

Mandiant highlights in its report that generative AI technologies have the potential to significantly improve information operations actors’ capabilities in two key aspects: the efficient scaling of activity beyond the actors’ inherent means; and their ability to produce realistic fabricated content toward deceptive ends. 

According to the company, generative AI will enable information operations actors with limited resources and capabilities to produce higher quality content at scale. 

Mandiant has also observed financially motivated actors advertising AI capabilities, including deepfake technology services, in underground forums. These capabilities could increase the effectiveness of cybercriminal operations such as social engineering, fraud and extortion by making the malicious activity appear more personal in nature.

Threat actors regularly evolve their tactics, adopting new technology to operate within a constantly changing and increasingly sophisticated cyber threat landscape. Mandiant anticipates that bad actors of diverse origins and motivations will increasingly use generative AI as awareness of and capabilities surrounding such technologies develop.

It expects these malicious actors to continue capitalising on the public’s inability to differentiate between what is authentic and what is counterfeit, and advises that users and enterprises alike should be cautious about the information they encounter.

However, Mandiant emphasises that while there is certainly threat actor interest in this technology, adoption has been limited thus far and may remain so in the near term.

The danger of LLMs being used to facilitate malware development

The blog post states that cybercriminals are expected to increase their use of LLMs to support malware development. LLMs can help threat actors write new malware and improve existing malware, and the ability of these tools to assist in malware creation could become significant, further enabling bad actors who lack technical sophistication.

Between January and March 2023, Mandiant observed threat actors in underground forums advertising LLM services, sales and API access, as well as LLM-generated code.

In one case, a user uploaded a video claiming to show how they bypassed an LLM’s safety features to get it to write malware capable of evading a particular endpoint detection and response (EDR) solution’s defences. The user reportedly received payment after submitting the finding to a bug bounty program in late February 2023.

John Hultquist, Chief Analyst of Mandiant Intelligence, summarises the research by saying: “While we expect the adversary to make use of generative AI, and there are already adversaries doing so, adoption is still limited and primarily focused on social engineering. 

“There’s no doubt that criminals and state actors will find value in this technology, but many estimates of how this tool will be used are speculative and not grounded in observation.”


