Do companies need to address the threats posed by deepfakes?

As threat actors make increasing use of deepfake technology, Europol warns that companies need to take more precautions

Disinformation, the deliberate spread of false information with the intention to deceive, is now a global problem. Threat actors use disinformation campaigns and deepfake content to misinform the public about events, politics and elections, to commit fraud, and to manipulate shareholders in a corporate context.

Many organisations now see deepfakes as an even bigger potential risk than identity theft (for which deepfakes can also be used), especially since the COVID-19 pandemic moved most interactions online.

The growing availability of disinformation and deepfakes will have a profound impact on the way people perceive authority and information media. With the increasing volume of deepfakes, trust in authorities and official facts is undermined.  

What is a deepfake? 

Deepfake technology uses artificial intelligence (AI) to generate or manipulate audio and audio-visual content. It can produce material that convincingly shows people saying or doing things they never did, or create personas that never existed in the first place.

Advances in machine learning and AI will continue enhancing the capabilities of the software used to create deepfakes.


Creating new security measures to contend with deepfakes 

Deepfake technology is set to be used extensively in organised crime over the coming years, according to new research by Europol.

Facing Reality? Law enforcement and the challenge of deepfakes, the first published analysis of the Europol Innovation Lab's Observatory function, warned that law enforcement agencies will need to enhance the skills and technologies at officers' disposal to keep pace with criminals' use of deepfakes.

The analysis highlighted how deepfakes are being used in three key areas: disinformation, non-consensual pornography and document fraud. It predicts such attacks will become increasingly realistic and dangerous as the technology improves in the coming years.

In addition, the report observed that deepfakes could negatively impact the legal process, for example, by artificially manipulating or generating media to prove or disprove someone’s guilt. 

To deal effectively with these kinds of threats, Europol said law enforcement agencies must develop new skills and technologies. These include manual detection, which involves looking for inconsistencies in the content, and automated detection techniques, such as AI-based deepfake detection software being developed by organisations including Facebook and security firm McAfee.

Policymakers also need to develop more legislation to set guidelines and enforce compliance around the use of deepfakes, the report added.

 
