“Perfectly secure” algorithm could aid spread of free speech

A new algorithm has significant implications for information security, data compression and storage, but the real benefits could be seen among vulnerable groups

Researchers claim to have created a “perfectly secure” way to pass hidden information in plain sight and say their work could revolutionise social media and private messaging

The team, led by the University of Oxford in collaboration with Carnegie Mellon University, says it has achieved a breakthrough in secure communications by developing an algorithm that conceals sensitive information so effectively that it is impossible to detect that anything is hidden.

The algorithm uses new advances in information theory to conceal one piece of content inside another in a way that cannot be detected, which may have substantial implications for information security, as well as further applications in data compression and storage.

The team says this method may soon be used in digital human communications, including social media and private messaging. In particular, the ability to send perfectly secure information may empower vulnerable groups, including humanitarian workers.

“Our method can be applied to any software that automatically generates content,” says co-lead author Dr Christian Schroeder de Witt of Oxford University’s Department of Engineering Science. “For instance, probabilistic video filters or meme generators. This could be very valuable, for instance, for journalists and aid workers in countries where the act of encryption is illegal. However, users still need to exercise precaution as any encryption technique may be vulnerable to side-channel attacks such as detecting a steganography app on the user’s phone.”

The algorithm applies to a setting called steganography: the practice of hiding sensitive information inside innocuous content. Steganography differs from cryptography because the sensitive information is concealed in a way that obscures the very fact that something has been hidden.

The researchers say an example could be hiding a Shakespeare poem inside an AI-generated cat image.

New algorithm uses information theory

Although steganography has been studied for more than 25 years, existing approaches generally have imperfect security, meaning that individuals who use these methods risk being detected. This is because previous steganography algorithms would subtly change the distribution of the innocuous content.
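
As a rough illustration of the problem, here is a toy sketch (not any specific published scheme) of hiding message bits in the least significant bits of image pixels. Overwriting those bits with random message data evens out the counts of adjacent pixel values, measurably distorting the histogram that a statistical detector could compare against ordinary images. The cover model and pixel values below are made up for the example.

# Toy sketch: naive least-significant-bit (LSB) embedding changes the
# histogram of the cover data, which is why it can be detected.
import random

random.seed(0)
N = 200_000

# Hypothetical cover source: 8-bit "pixel" values with a peaked, uneven histogram.
cover = [max(0, min(255, int(random.gauss(100, 3)))) for _ in range(N)]

# Secret message modelled as uniformly random bits.
bits = [random.getrandbits(1) for _ in range(N)]

# Naive embedding: force each pixel's LSB to equal the next message bit.
stego = [(v & ~1) | b for v, b in zip(cover, bits)]

def pair_counts(values, even_value):
    """Counts of an even pixel value and its odd neighbour (2k, 2k+1)."""
    return values.count(even_value), values.count(even_value + 1)

# In the cover the counts of 104 and 105 differ markedly; after embedding
# they become nearly equal, because random bits average each (2k, 2k+1) pair.
print("cover:", pair_counts(cover, 104))
print("stego:", pair_counts(stego, 104))

Running the sketch, the cover shows clearly different counts for the values 104 and 105, while the stego version makes them almost identical; that kind of statistical fingerprint is what gives traditional schemes away.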

To overcome this, the research team used recent breakthroughs in information theory, specifically minimum entropy coupling, which allows one to join two distributions of data such that their mutual information is maximised while the individual distributions are preserved.

As a result, with the new algorithm, there is no statistical difference between the distribution of innocuous content and the distribution of content that encodes sensitive information.
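
A minimal sketch of how such a coupling can be built is shown below, using a simple greedy approximation that is common in the literature rather than the researchers' exact algorithm, and with made-up marginal distributions. It demonstrates the key property: the joint distribution links covertext and secret message tightly, yet its row and column sums reproduce the two original distributions exactly.

# Sketch of a greedy approximate minimum entropy coupling. Not the
# researchers' exact algorithm; it only illustrates the property that the
# marginals (and hence the covertext distribution) are preserved.
from math import log2

def greedy_min_entropy_coupling(p, q, tol=1e-12):
    """Return a joint distribution whose marginals are exactly p and q."""
    p, q = list(p), list(q)
    joint = [[0.0] * len(q) for _ in range(len(p))]
    while max(p) > tol and max(q) > tol:
        i = max(range(len(p)), key=lambda k: p[k])  # largest remaining covertext mass
        j = max(range(len(q)), key=lambda k: q[k])  # largest remaining message mass
        m = min(p[i], q[j])
        joint[i][j] += m                            # couple as much mass as possible
        p[i] -= m
        q[j] -= m
    return joint

# Hypothetical marginals: a distribution over innocuous covertext symbols (p)
# and a distribution over secret-message symbols (q).
p = [0.5, 0.25, 0.25]
q = [0.6, 0.4]

joint = greedy_min_entropy_coupling(p, q)

# Row and column sums of the coupling match p and q exactly.
print("row sums   :", [round(sum(row), 6) for row in joint])        # == p
print("column sums:", [round(sum(col), 6) for col in zip(*joint)])  # == q

# Joint entropy of the coupling: the lower it is, the more tightly the
# covertext symbol determines the message symbol.
H = -sum(x * log2(x) for row in joint for x in row if x > 0)
print("joint entropy (bits):", round(H, 3))

Because the covertext marginal is reproduced exactly, output sampled from such a coupling should be statistically indistinguishable from ordinary auto-generated content, while a receiver who knows the coupling can, with high probability, infer the hidden message from the covertext alone.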

The algorithm was tested using several models that produce auto-generated content, such as GPT-2, an open-source language model, and WAVE-RNN, a text-to-speech converter. Besides being perfectly secure, the new algorithm showed up to 40% higher encoding efficiency than previous steganography methods across various applications, enabling more information to be concealed within a given amount of data. This could make steganography attractive even where perfect security is not required, because of its benefits for data compression and storage.

The research team has filed a patent for the algorithm but intends to make it available to third parties under a free licence for responsible, non-commercial use. They will also present the new algorithm at the 2023 International Conference on Learning Representations in May.

“The main contribution of the work is showing a deep connection between a problem called minimum entropy coupling and perfectly secure steganography,” says co-lead author Samuel Sokota, of Carnegie Mellon University’s Machine Learning Department. “By leveraging this connection, we introduce a new family of steganography algorithms that have perfect security guarantees.”
