Algorithm hunts down groups spreading violence and fake news

New method can identify abnormal communities and root out fake profiles in existing large-scale networks including Reddit and Wikipedia, say researchers

Researchers in Israel have developed a new algorithm to detect groups of anomalous users who might be spreading fake news or promoting violence.

The Ben-Gurion University of the Negev findings were published in the January edition of peer-reviewed journal Neural Processing Letters. “Due to the increase in volume and sophistication of cyber-threats, the ability to detect a group of entities whose linkage is abnormal regarding the other network’s edges, namely, the detection of anomalous communities, has become a necessity and a valuable field of research,” wrote lead author Dr Michael Fire.

A key attribute of complex networks is that they form communities, he says: for example, a group of social network users who share a common interest, a team of coworkers who expose each other to virus transmission, a family of ingredients from a particular cuisine, or even a city neighbourhood defined by its water supply system. Analysing these community-structured networks can give researchers meaningful insights into the communities themselves.

"The advantage of this study is that we can detect anomalous groups of users - such as groups of fake profiles - rather than single users,” says Fire, Head of the Data4Good Lab and a member of the Department of Software and Information Systems Engineering. “Uncovering groups of fake profiles is a challenging and less explored task. An anomalous user community might be one that is promoting violent behaviour or extremism, or it may be spreading fake news, but it could also potentially also help locate hot spots during pandemics.”

Generic method means potential for other platforms

The researchers have named this method the Co-Membership-based Generic Anomalous Communities Detection Algorithm (CMMAC) and say it is not restricted to a single type of network. 

"Our method is generic,” says Dr Fire. “Therefore, it can potentially work on different types of social media platforms. We tested it on several different types of networks, such as Reddit and Wikipedia, which is also a type of social network," explains Dr Fire.

After testing their method on randomly generated and real-world networks, they found that it outperformed other methods in various settings.

“Our method is based solely on network structural properties,” says Dr Fire. “That makes our method independent of vertices' attributes - it relies only on the connections between users online. Thus, it is agnostic to the domain. When comparing our algorithm with other algorithms, it performed better on simulated and real-world data in many cases. It successfully detected anomalous user communities that presented peculiar online activity."
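The article does not spell out how CMMAC itself scores communities, but the general idea of judging a community purely on network structure - ignoring any profile attributes - can be sketched in a few lines of Python. The toy detector below is an illustrative assumption, not the authors' method: it flags any candidate community whose internal edge density is a statistical outlier relative to the other communities (the density measure, z-score threshold, and all names are invented for this sketch).

```python
from statistics import mean, stdev

def internal_density(edges, members):
    """Fraction of possible edges that actually exist inside a community."""
    members = set(members)
    possible = len(members) * (len(members) - 1) / 2
    internal = sum(1 for u, v in edges if u in members and v in members)
    return internal / possible if possible else 0.0

def flag_anomalous(edges, communities, z_thresh=2.0):
    """Toy structural anomaly check (not CMMAC): flag communities whose
    internal edge density deviates strongly from the average density
    across all candidate communities."""
    densities = [internal_density(edges, c) for c in communities]
    mu, sigma = mean(densities), stdev(densities)
    return [c for c, d in zip(communities, densities)
            if sigma > 0 and abs(d - mu) / sigma > z_thresh]

# Toy network: five tightly knit triangles, plus one three-member group
# with no internal ties at all (its members only link to outsiders).
triangle_edges = ([(i, i + 1) for i in range(0, 15, 3)]
                  + [(i + 1, i + 2) for i in range(0, 15, 3)]
                  + [(i, i + 2) for i in range(0, 15, 3)]
                  + [(15, 0), (16, 3), (17, 6)])
groups = [[0, 1, 2], [3, 4, 5], [6, 7, 8],
          [9, 10, 11], [12, 13, 14], [15, 16, 17]]

print(flag_anomalous(triangle_edges, groups))  # [[15, 16, 17]]
```

Because the check reads nothing but the edge list, it works the same way whether the vertices are Reddit accounts, Wikipedia editors, or ingredients - which is the domain-agnostic property Dr Fire describes. The published algorithm builds on richer co-membership signals than this single density statistic.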

Contributing researchers included Shay Lapid, an MA student, and PhD student Dima Kagan.
