Do companies need to address the threats posed by deepfakes?
Disinformation, the deliberate spreading of false information to deceive, is a global issue. Today, threat actors use disinformation campaigns and deepfake content to misinform the public about events, politics, and elections, to commit fraud, and to manipulate shareholders in a corporate context.
Many organisations now see deepfakes as an even bigger potential risk than identity theft (for which deepfakes can also be used), especially as so many interactions have moved online since the COVID-19 pandemic.
The growing availability of disinformation and deepfakes will have a profound impact on the way people perceive authority and the information media: as the volume of deepfakes grows, trust in authorities and in official facts is undermined.
What is a deepfake?
Deepfake technology uses artificial intelligence (AI) to generate or manipulate audio and audio-visual content. It can produce content that convincingly shows people saying or doing things they never did, or create personas that never existed in the first place.
Advances in machine learning and AI will continue enhancing the capabilities of the software used to create deepfakes.
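To illustrate the underlying mechanism, the sketch below shows the classic face-swap architecture in PyTorch: a single shared encoder learns a common facial representation, and a separate decoder per identity reconstructs each face; feeding person A's face through person B's decoder produces the swap. This is a conceptual, untrained skeleton, and the layer sizes and 64x64 input are illustrative assumptions, not any particular tool's implementation.

# Conceptual sketch (not a working deepfake tool) of the classic
# face-swap architecture: one shared encoder, one decoder per identity.
# All layer sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses a 64x64 RGB face to a latent vector."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: reconstructs a face from the shared latent."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

# Training would minimise reconstruction loss for each identity
# separately against the shared encoder; swapping decoders at
# inference time produces the deepfake effect.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()
face_a = torch.rand(1, 3, 64, 64)     # stand-in for a real face crop
swapped = decoder_b(encoder(face_a))  # A's expression, B's identity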
Creating new security measures to contend with deepfakes
Deepfake technology is set to be used extensively in organised crime over the coming years, according to new research by Europol.
Facing Reality? Law enforcement and the challenge of deepfakes, the first published analysis from the Europol Innovation Lab’s Observatory function, warned that law enforcement agencies will need to enhance the skills and technologies at officers’ disposal to keep pace with criminals’ use of deepfakes.
The analysis highlighted how deepfakes are being used in three key areas: disinformation, non-consensual pornography and document fraud. It predicted that such attacks will become increasingly realistic and dangerous as the technology improves in the coming years.
In addition, the report observed that deepfakes could negatively impact the legal process, for example through artificially manipulated or generated media being used to prove or disprove someone’s guilt.
To deal effectively with these kinds of threats, Europol said, law enforcement agencies must develop new skills and technologies. These range from manual detection, which involves looking for inconsistencies, to automated techniques such as the AI-based deepfake detection software being developed by organisations including Facebook and security firm McAfee.
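To make the automated approach concrete, below is a minimal sketch of frame-level deepfake detection in Python. It assumes a binary real-vs-fake classifier has already been fine-tuned on a labelled dataset (FaceForensics++ is a commonly cited example); the weights file deepfake_resnet18.pt and the clip suspect_clip.mp4 are hypothetical names for illustration, and this does not reflect the specific tools Facebook or McAfee are building.

# Minimal sketch: score a video by averaging per-frame "fake" probabilities
# from a fine-tuned binary classifier (real=0, fake=1). Hypothetical weights.
import cv2
import torch
import torchvision.models as models
import torchvision.transforms as transforms

# Standard ImageNet-style preprocessing for each sampled frame.
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def load_detector(weights_path: str) -> torch.nn.Module:
    """Load a ResNet-18 with a two-class head (real vs. fake)."""
    model = models.resnet18()
    model.fc = torch.nn.Linear(model.fc.in_features, 2)
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()
    return model

def score_video(path: str, model: torch.nn.Module, every_n: int = 30) -> float:
    """Return the mean probability that sampled frames are fake."""
    cap = cv2.VideoCapture(path)
    fake_probs, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:  # sample roughly one frame per second
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            batch = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                logits = model(batch)
                fake_probs.append(torch.softmax(logits, dim=1)[0, 1].item())
        idx += 1
    cap.release()
    return sum(fake_probs) / len(fake_probs) if fake_probs else 0.0

if __name__ == "__main__":
    detector = load_detector("deepfake_resnet18.pt")  # hypothetical weights
    print(f"Mean fake probability: {score_video('suspect_clip.mp4', detector):.2f}")

In practice, production detectors also look at artefacts a simple frame classifier misses, such as temporal inconsistencies across frames and audio-visual mismatches, which is one reason the report stresses combining automated tools with trained human review.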
Policymakers also need to develop more legislation to set guidelines and enforce compliance around the use of deepfakes, the report added.