IT decision makers consider the rise of AI an area of concern, as business leaders fail to grasp the potentially devastating impact this technology could have on businesses if used unethically.
Deepfakes are inevitably becoming more advanced, making it harder to spot and stop those used with bad intentions. As access to synthetic media technology increases, deepfakes can be used to damage reputations, fabricate evidence and undermine trust.
With deepfake technology increasingly being used with malicious intent, businesses would do well to ensure that their workforce is fully trained and aware of the risks associated with AI-generated content.
Deepfakes continue to wreak havoc
Now nearly anyone can employ easy-to-use, readily available AI software to create content showing people doing and saying things they never actually did - making it easier for bad actors to defraud the public and commit other crimes.
The danger with deepfakes lies in not being able to recognise what is real versus what is AI-generated. As former Chief Information Security Officer at SailPoint, Heather Gantt-Evans, explains to our sister publication Technology Magazine: “By now, everyone has seen fake videos produced by deep learning (DL) and AI techniques, better known as ‘deepfake’ videos.
“However, imagine receiving a phishing email with a deepfake video of your CEO instructing you to go to a malicious URL. Or an attacker constructing more believable, legitimate-seeming phishing emails by using AI to better mimic corporate communications. Modern AI capabilities could completely blur the lines between legitimate and malicious emails, websites, company communications, and videos,” she continues.
Deepfakes do have the potential to be a positive force in our lives - if used with good intentions. AI-generated media has already been shown to empower people and give them a voice at a more impactful scale.
AI can benefit cybersecurity operations if professionals know how to harness it fully
Integrity360, a leading cybersecurity specialist, has announced findings from independent research into AI's impact on cybersecurity, covering both the risks and the advantages. The survey highlights mounting concerns over the use of AI - the use of deepfakes in particular.
It points to how attacks have changed over the past year, becoming more sophisticated, with what it describes as ‘offensive AI’ being used in criminal activity such as malware creation. AI is also being used to create more convincing phishing messages, with content that accurately mimics the language, tone and design of legitimate emails.
- 68% noted concerns about cybercriminals using deepfakes to target their organisations
- 59% agree that AI is increasing the number of cybersecurity attacks
- 46% disagreed with the statement that they do not understand the impact of AI on cybersecurity
- 61% expressed apprehension over the rise of AI, suggesting widespread industry concern
- 71% agree that AI is improving speed and accuracy of incident response
Despite these concerns, the vast majority of respondents (73%) agree that AI is becoming increasingly important within cybersecurity strategies for incident response. This perhaps reflects the industry's recognition that AI tools can be used both defensively and offensively.
One worry is that the cyber skills gap may leave businesses more exposed to AI-enabled attacks. With this in mind, it is important to ensure that all employees are adequately trained on the role of AI in cybersecurity and the threats it poses.
Brian Martin, Head of Product Development, Innovation and Strategy at Integrity360 comments: “The use of AI for cyberattacks is already a threat to businesses, but recognising the future potential and the impact this can have, is just the start … Businesses need to be prepared for how to defend against this and discern what is and isn’t real, to avoid falling victim to an attack.”
He continues: “AI's role in cyber security is not only a matter of perception but a tangible reality. Conventional cyberattacks will ultimately become obsolete as AI technologies become increasingly available and more appealing and accessible as attackers look to expand their use for AI-enabled cyberattacks.
“As AI technologies continue to evolve, their integration into cyber security will follow. Organisations must remain proactive in embracing AI while also addressing the challenges it presents, ensuring that their cyber security defences keep pace.”
BizClik is a global provider of B2B digital media platforms that cover Executive Communities for CEOs, CFOs, CMOs, Sustainability leaders, Procurement & Supply Chain leaders, Technology & AI leaders, Cyber leaders, FinTech & InsurTech leaders as well as covering industries such as Manufacturing, Mining, Energy, EV, Construction, Healthcare and Food.
BizClik – based in London, Dubai, and New York – offers services such as content creation, advertising & sponsorship solutions, webinars & events.