Do companies need to address the threats posed by deepfakes?

As threat actors continue to make increasing use of deepfake technology, Europol warns that companies need to take more precautions

Disinformation, the deliberate spread of false information to deceive, has become a global issue. Threat actors now use disinformation campaigns and deepfake content to mislead the public about events, politics and elections, to commit fraud, and to manipulate shareholders in a corporate context.

Many organisations have now begun to see deepfakes as an even bigger potential risk than identity theft (for which deepfakes can also be used), especially now that most interactions have moved online since the COVID-19 pandemic. 

The growing availability of disinformation and deepfakes will have a profound impact on the way people perceive authority and news media. As the volume of deepfakes increases, trust in authorities and in official facts is undermined.

What is a deepfake? 

Deepfake technology uses artificial intelligence (AI) to generate or manipulate audio and audio-visual content. It can produce content that convincingly shows people saying or doing things they never did, or create personas that never existed in the first place.

Advances in machine learning and AI will continue enhancing the capabilities of the software used to create deepfakes.


Creating new security measures to contend with deepfakes 

Deepfake technology is set to be used extensively in organised crime over the coming years, according to new research by Europol.

Facing Reality? Law enforcement and the challenge of deepfakes, the first published analysis of the Europol Innovation Lab's Observatory function, warned that law enforcement agencies will need to enhance the skills and technologies at officers' disposal to keep pace with criminals' use of deepfakes.

The analysis highlighted how deepfakes are being used in three key areas: disinformation, non-consensual pornography and document fraud. It predicts such attacks will become increasingly realistic and dangerous as the technology improves in the coming years.

In addition, the report observed that deepfakes could negatively impact the legal process, for example, by artificially manipulating or generating media to prove or disprove someone’s guilt. 

To deal effectively with these kinds of threats, Europol said law enforcement agencies must develop new skills and technologies. These include manual detection, which involves looking for visual and audio inconsistencies, and automated detection techniques, including AI-based deepfake detection software being developed by organisations such as Facebook and security firm McAfee.
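To illustrate the automated approach the report refers to, here is a minimal sketch of how such a tool might aggregate results: a classifier scores each video frame for manipulation artifacts, and the per-frame scores are combined into a verdict. The scoring function, thresholds and sample scores below are all hypothetical placeholders, not the method of any specific product; a real system would obtain frame scores from a trained neural network.

```python
# Minimal sketch of score aggregation in an automated deepfake detector.
# The per-frame scores are hypothetical placeholders; a real tool would
# compute them with a trained classifier over decoded video frames.

def aggregate_frame_scores(scores, threshold=0.5, min_flagged_ratio=0.3):
    """Flag a video as a likely deepfake when a sufficient share of
    frames score above the per-frame manipulation threshold."""
    if not scores:
        raise ValueError("no frame scores provided")
    flagged = sum(1 for s in scores if s > threshold)
    return flagged / len(scores) >= min_flagged_ratio

# Example scores from a (hypothetical) per-frame classifier:
real_video = [0.05, 0.10, 0.08, 0.12, 0.07]
fake_video = [0.85, 0.91, 0.40, 0.78, 0.88]

print(aggregate_frame_scores(real_video))  # → False
print(aggregate_frame_scores(fake_video))  # → True
```

Aggregating over many frames, rather than trusting any single frame, is what makes this kind of detection robust to the occasional convincing frame in a manipulated clip.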

Policymakers also need to develop more legislation to set guidelines and enforce compliance around the use of deepfakes, the report added.

 
