Algorithm hunts down groups spreading violence and fake news

New method can identify abnormal communities and root out fake profiles in existing large-scale networks including Reddit and Wikipedia, say researchers

Researchers in Israel have developed a new algorithm to detect groups of anomalous users who might be spreading fake news or promoting violence.

The Ben-Gurion University of the Negev findings were published in the January edition of peer-reviewed journal Neural Processing Letters. “Due to the increase in volume and sophistication of cyber-threats, the ability to detect a group of entities whose linkage is abnormal regarding the other network’s edges, namely, the detection of anomalous communities, has become a necessity and a valuable field of research,” wrote lead author Dr Michael Fire.

An attribute of complex networks is the formation of communities, he says. For example, a group of social network users who share a common subject of interest, a team of coworkers exposing each other to virus transmission, a family of ingredients from a certain cuisine, or even a city neighbourhood corresponding to its water supply system. Analysing these community-structured networks can help researchers gain meaningful insights into these communities.

“The advantage of this study is that we can detect anomalous groups of users - such as groups of fake profiles - rather than single users,” says Fire, Head of the Data4Good Lab and a member of the Department of Software and Information Systems Engineering. “Uncovering groups of fake profiles is a challenging and less explored task. An anomalous user community might be one that is promoting violent behaviour or extremism, or it may be spreading fake news, but it could also potentially help locate hot spots during pandemics.”

Generic method means potential for other platforms

The researchers have named this method the Co-Membership-based Generic Anomalous Communities Detection Algorithm (CMMAC) and say it is not restricted to a single type of network.

“Our method is generic,” says Dr Fire. “Therefore, it can potentially work on different types of social media platforms. We tested it on several different types of networks, such as Reddit and Wikipedia, which is also a type of social network.”

After testing their method on randomly generated and real-world networks, they found that it outperformed other methods in various settings.

“Our method is based solely on network structural properties,” says Dr Fire. “That makes our method independent of vertices' attributes, relying only on the connections between users online. Thus, it is agnostic to the domain. When comparing our algorithm with other algorithms, it performed better on simulation and real-world data in many cases. It successfully detected anomalous user communities who presented peculiar online activity.”
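The article describes the approach only at a high level, so as an illustration of what a purely structural, co-membership signal can look like, here is a minimal toy sketch. This is not the published CMMAC algorithm; the scoring function, community names, and data are invented for the example. The intuition it demonstrates: members of an organic community tend to also meet in other communities, while a ring of fake profiles often exists only together.

```python
# Toy illustration of a co-membership signal (NOT the published CMMAC):
# a community looks suspicious if its members rarely co-appear anywhere
# else in the network. All names and data below are invented.
from itertools import combinations

def co_membership_score(cid, communities, memberships):
    """Fraction of member pairs in community `cid` that also share
    at least one *other* community."""
    pairs = list(combinations(sorted(communities[cid]), 2))
    if not pairs:
        return 0.0
    shared = sum(
        1 for u, v in pairs
        # communities the pair shares besides `cid` itself
        if (memberships[u] & memberships[v]) - {cid}
    )
    return shared / len(pairs)

# Example network: three overlapping hobby communities and one
# isolated "bot ring" whose members never meet elsewhere.
communities = {
    "cooking": {"ana", "ben", "eva"},
    "baking": {"ana", "ben"},
    "gardening": {"ben", "eva"},
    "bot_ring": {"x1", "x2", "x3"},
}

# Invert the mapping: user -> set of communities they belong to.
memberships = {}
for cid, users in communities.items():
    for u in users:
        memberships.setdefault(u, set()).add(cid)

for cid in communities:
    print(cid, round(co_membership_score(cid, communities, memberships), 2))
```

A real detector would compare such structural scores against their distribution across the whole network rather than against a fixed threshold, which is what lets this family of methods stay agnostic to user attributes and platform.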

Contributing researchers included Shay Lapid, an MA student, and PhD student Dima Kagan.
