ChatGPT at one: A valuable tool for attackers and defenders

As we mark the first birthday of ChatGPT, we highlight how OpenAI’s language model is a valuable tool for both attackers and defenders in cybersecurity.

OpenAI’s ChatGPT launched to the public on 30th November 2022. Within the first five days, it attracted over one million users. One year on, the hype hasn’t died down.

For many, curiosity and excitement quickly gave way to earnest concern about the tool’s potential to advance bad actors’ agendas. Not only has the technology lowered the barrier to entry for less tech-savvy cyber criminals looking to get into hacking, but it also helps them craft compelling, persuasive phishing messages and write malicious code to facilitate an attack. On the other hand, ChatGPT is becoming an essential tool in the fight against cybercrime, enhancing cybersecurity strategies to defend against these growing threats.

Dangerously armed 

In its first year, ChatGPT has proven a valuable tool for both attackers and defenders in cybersecurity. “It is important for organisations to be aware of how AI is being used on both sides of the cyber battlefield so that they can develop effective strategies to protect themselves,” Jason Keirstead, VP of Collective Threat Defense at Cyware, starts.

Threat actors have leveraged tools like ChatGPT to help write sophisticated, targeted phishing messages, making it increasingly difficult to distinguish between real and fake. In fact, a recent report revealed a 1265% surge in malicious phishing emails since Q4 2022, which coincides with ChatGPT’s launch. 

Keirstead explains: “AI models can generate realistic phishing emails, create deep fake videos, and develop new malware that can evade traditional detection methods. AI-generated phishing emails, for example, can be highly personalised and convincing, making them more likely to trick people into clicking on malicious links or opening attachments.”

However, there are remedies that organisations can implement to address some of these risks. This starts with “publishing company-wide policies and guidance on Generative AI use among employees,” notes Okey Obudulu, CISO at Skillsoft. “Organisations must provide comprehensive training to educate employees about identifying and mitigating risks associated with Generative AI-based attacks. This includes imparting knowledge about the latest phishing techniques, raising awareness about the risks of engaging with unknown entities and promoting vigilant behaviour online.”

Trust at face value 

Since ChatGPT entered the public’s consciousness, it has been cited as both a dream for employees and a nightmare for organisations that are trying to protect sensitive data. Chris Denbigh White, Chief Security Officer at Next DLP, suggests we need to ask the question: “Do we trust LLMs? 

“Just like the friend in the pub quiz who is totally convinced of an answer even though there’s no guarantee he’s right, LLMs are still a black box - and the regulation that surrounds them is still a bone of contention, unlikely to be resolved anytime soon.

“This is particularly tricky if you’re using these models in industries such as healthcare and patient prioritisation, as errors like these can have wide-ranging consequences. For cyber security professionals, it’s essential to collaborate more closely on AI and LLMs and adopt a repeatable framework across the board.”

Got your back - AI aiding internal teams 

Whilst there is reasonable cause for concern when it comes to cybersecurity risk, ChatGPT has also become valuable in the fight against cybercrime. For example, its capacity to analyse large amounts of data quickly provides a distinct advantage for cybersecurity teams. 

Matt Hillary, Chief Information Security Officer of Drata, describes: “When configured and trained accordingly, AI can help suggest and even support the remediation of vulnerabilities and response to security alerts. Using AI in this manner also helps mitigate the risks associated with potentially missed analyses in routine tasks and exhaustive manual processes that too often plague traditional methods.”
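Hillary’s point about AI-assisted analysis and remediation can be sketched minimally. The snippet below batches security alerts into a single triage prompt that an analyst could hand to an LLM of their choice; the alert fields, IDs and wording are illustrative assumptions rather than any vendor’s schema, and the actual model call is deliberately omitted.

```python
# A minimal sketch of LLM-assisted alert triage: gather alerts, then
# format them into one prompt asking for severity and a next step.
# The alerts below are invented examples for illustration only.

ALERTS = [
    {"id": "A-101", "source": "EDR",
     "text": "Suspicious PowerShell process spawned by winword.exe"},
    {"id": "A-102", "source": "mail gateway",
     "text": "Credential-harvesting link detected in inbound email"},
]

def build_triage_prompt(alerts):
    """Format alerts into a single prompt for LLM-assisted triage."""
    lines = [
        "Triage the following security alerts. For each, rate severity "
        "(low/medium/high) and suggest one next investigative step.",
        "",
    ]
    for alert in alerts:
        lines.append(f"[{alert['id']}] ({alert['source']}) {alert['text']}")
    return "\n".join(lines)

prompt = build_triage_prompt(ALERTS)
# The prompt would then be sent to a model via your chosen provider's
# chat API; responses should be treated as suggestions for a human
# analyst to verify, not as automated remediation.
```

In practice, teams pair this kind of prompt with their own alert schema and keep a human in the loop to review the model’s suggested severities before acting on them.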

One of the key challenges faced by organisations adopting DevSecOps is the ongoing lack of collaboration between development and security teams. However, generative AI is proving to be a very powerful tool to bridge the resource gap, explains Michal Lewy-Harush, Chief Information Officer at Aqua Security.

“Developers and security teams no longer need to spend countless hours manually reading advisories, searching for patches, and building verification steps before taking action. Instead, AI guides them with clear, concise instructions on how to complete the fix, and it also helps security teams focus on the most critical vulnerabilities. This efficiency allows developers to focus on the task at hand: resolving the issue quickly and getting back to delivering new features, rather than wasting time deciphering the complexities of the remediation process.”

Inventive minds 

There is a constant battle between organisations that rely on Generative AI use cases to safeguard their security systems and the threat actors that use it to conduct even more sophisticated and prevalent ransomware and phishing campaigns. 

HackerOne’s Senior Solutions Engineer, Chris Dickens, said: “In the hands of ethical hackers, with an outsider mindset and an understanding of how GenAI can be exploited, it has also become a powerful tool to seek out vulnerabilities and protect organisations at even greater speed and scale.”

In fact, HackerOne’s latest Hacker-Powered Security Report highlights that 53% of hackers are using GenAI in some way, with 61% of hackers looking to use and develop hacking tools from GenAI to find more vulnerabilities in 2024.

“We can therefore expect even greater applications of ChatGPT in cybersecurity strategies, reinforcing the fact that a successful cybersecurity program isn't about replacing human ingenuity with AI, but augmenting it,” Dickens adds. 

As we look ahead into 2024, ChatGPT and other LLMs are sure to bring more debate, advantages, controversies and speculation - within the cybersecurity industry and beyond. Friend or foe, colleague or criminal, ChatGPT’s innovation will always outpace regulation, so whilst companies should always check over their shoulders for any security issues or risks, they must face forward and embrace future transformation as it comes.
