Algorithm hunts down groups spreading violence and fake news

New method can identify abnormal communities and root out fake profiles in existing large-scale networks including Reddit and Wikipedia, say researchers

Researchers in Israel have developed a new algorithm to detect groups of anomalous users who might be spreading fake news or promoting violence.

The Ben-Gurion University of the Negev findings were published in the January edition of peer-reviewed journal Neural Processing Letters. “Due to the increase in volume and sophistication of cyber-threats, the ability to detect a group of entities whose linkage is abnormal regarding the other network’s edges, namely, the detection of anomalous communities, has become a necessity and a valuable field of research,” wrote lead author Dr Michael Fire.

An attribute of complex networks is the formation of communities, he says. For example, a group of social network users who share a common subject of interest, a team of coworkers exposing each other to virus transmission, a family of ingredients from a certain cuisine, or even a city neighbourhood corresponding to its water supply system. Analysing these community-structured networks can help researchers gain meaningful insights into these communities.

“The advantage of this study is that we can detect anomalous groups of users - such as groups of fake profiles - rather than single users,” says Fire, Head of the Data4Good Lab and a member of the Department of Software and Information Systems Engineering. “Uncovering groups of fake profiles is a challenging and less explored task. An anomalous user community might be one that is promoting violent behaviour or extremism, or it may be spreading fake news, but it could also potentially help locate hot spots during pandemics.”

Generic method means potential for other platforms

The researchers have named this method the Co-Membership-based Generic Anomalous Communities Detection Algorithm (CMMAC) and say it is not restricted to a single type of network. 

“Our method is generic,” says Dr Fire. “Therefore, it can potentially work on different types of social media platforms. We tested it on several different types of networks, such as Reddit and Wikipedia, which is also a type of social network.”

After testing their method on randomly generated and real-world networks, they found that it outperformed other methods in various settings.

“Our method is based solely on network structural properties,” says Dr Fire. “That makes it independent of vertices’ attributes, relying only on the connections between users online. Thus, it is agnostic to the domain. When comparing our algorithm with other algorithms, it performed better on simulated and real-world data in many cases. It successfully detected anomalous communities of users who presented peculiar online activity.”
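To illustrate the general idea of structure-only anomaly detection - scoring communities purely from the network's edges, with no user attributes - here is a minimal, hypothetical sketch in plain Python. It is not the authors' CMMAC algorithm; it simply flags communities whose internal edge density is a statistical outlier relative to the rest of the network, which captures the "agnostic to the domain" property Dr Fire describes. All names (`internal_density`, `flag_anomalous`, the toy users `u0_0`, `a0`, etc.) are invented for this example.

```python
from itertools import combinations
from statistics import mean, stdev

def internal_density(edges, community):
    """Fraction of possible edges inside the community that actually exist."""
    members = set(community)
    possible = len(members) * (len(members) - 1) / 2
    if possible == 0:
        return 0.0
    inside = sum(1 for u, v in edges if u in members and v in members)
    return inside / possible

def flag_anomalous(edges, communities, z_thresh=2.0):
    """Flag communities whose internal density is a statistical outlier.

    Uses only the graph structure (edges), never user attributes,
    so the same code applies to any domain.
    """
    scores = [internal_density(edges, c) for c in communities]
    mu, sigma = mean(scores), stdev(scores)
    return [c for c, s in zip(communities, scores)
            if sigma > 0 and abs(s - mu) / sigma > z_thresh]

# Toy network: nine ordinary sparse communities plus one suspiciously
# dense group (e.g. fake profiles that all link to each other).
edges = []
communities = []
for k in range(9):
    nodes = [f"u{k}_{i}" for i in range(4)]
    communities.append(nodes)
    edges += [(nodes[0], nodes[1]), (nodes[2], nodes[3])]  # density 2/6

dense = ["a0", "a1", "a2", "a3"]
communities.append(dense)
edges += list(combinations(dense, 2))  # fully connected: density 6/6

print(flag_anomalous(edges, communities))  # → [['a0', 'a1', 'a2', 'a3']]
```

Because the score depends only on edges, the same function runs unchanged whether the vertices are Reddit accounts, Wikipedia editors, or any other entities - the domain-agnosticism the researchers highlight.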

Contributing researchers included Shay Lapid, an MA student, and PhD student Dima Kagan.
