Algorithm hunts down groups spreading violence and fake news

New method can identify abnormal communities and root out fake profiles in existing large-scale networks including Reddit and Wikipedia, say researchers

Researchers in Israel have developed a new algorithm to detect groups of anomalous users who might be spreading fake news or promoting violence.

The Ben-Gurion University of the Negev findings were published in the January edition of peer-reviewed journal Neural Processing Letters. “Due to the increase in volume and sophistication of cyber-threats, the ability to detect a group of entities whose linkage is abnormal regarding the other network’s edges, namely, the detection of anomalous communities, has become a necessity and a valuable field of research,” wrote lead author Dr Michael Fire.

An attribute of complex networks is the formation of communities, he says. For example, a group of social network users who share a common subject of interest, a team of coworkers exposing each other to virus transmission, a family of ingredients from a certain cuisine, or even a city neighbourhood corresponding to its water supply system. Analysing these community-structured networks can help researchers gain meaningful insights into these communities.

"The advantage of this study is that we can detect anomalous groups of users - such as groups of fake profiles - rather than single users,” says Fire, Head of the Data4Good Lab and a member of the Department of Software and Information Systems Engineering. “Uncovering groups of fake profiles is a challenging and less explored task. An anomalous user community might be one that is promoting violent behaviour or extremism, or it may be spreading fake news, but it could also potentially also help locate hot spots during pandemics.”

Generic method means potential for other platforms

The researchers have named this method the Co-Membership-based Generic Anomalous Communities Detection Algorithm (CMMAC) and say it is not restricted to a single type of network. 

"Our method is generic,” says Dr Fire. “Therefore, it can potentially work on different types of social media platforms. We tested it on several different types of networks, such as Reddit and Wikipedia, which is also a type of social network," explains Dr Fire.

After testing their method on randomly generated and real-world networks, they found that it outperformed other methods in various settings.

“Our method is based solely on network structural properties,” says Dr Fire. “That makes our method independent of vertices’ attributes; it relies only on the connections between users online. Thus, it is agnostic to the domain. When comparing our algorithm with other algorithms, it performed better on simulation and real-world data in many cases. It successfully detected anomalous communities of users who presented peculiar online activity.”
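The paper’s CMMAC algorithm itself is more involved, building on co-membership signals; purely for illustration, a minimal sketch of the structural-only idea - scoring each community by how many of its members’ edges stay inside it, then flagging statistical outliers - might look like the following. All function names and the specific scoring rule here are our own illustrative assumptions, not the authors’ method:

```python
# Illustrative sketch only -- NOT the authors' CMMAC implementation.
# Idea: score each community by a purely structural property (the
# fraction of its members' edges that stay inside the community) and
# flag communities whose score is a statistical outlier.
from collections import defaultdict


def build_adjacency(edges):
    """Build undirected adjacency sets from an edge list."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj


def internal_edge_fraction(adj, community):
    """Fraction of edges touching the community that stay inside it."""
    members = set(community)
    internal = boundary = 0
    for node in members:
        for neighbour in adj[node]:
            if neighbour in members:
                internal += 1  # counted once from each endpoint
            else:
                boundary += 1
    internal //= 2  # undo the double counting of internal edges
    total = internal + boundary
    return internal / total if total else 0.0


def flag_anomalous_communities(adj, communities, z_threshold=1.5):
    """Return indices of communities whose structural score deviates
    strongly (by z-score) from the mean score across communities."""
    scores = [internal_edge_fraction(adj, c) for c in communities]
    mean = sum(scores) / len(scores)
    std = (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5 or 1.0
    return [i for i, s in enumerate(scores)
            if abs(s - mean) / std > z_threshold]
```

On a toy network of three tight triangles plus one “community” whose members only link outward, the outward-linking group scores far below the others and is the one flagged - the same structural intuition, at sketch scale, as detecting a community whose linkage is abnormal relative to the rest of the network.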

Contributing researchers included Shay Lapid, an MA student, and PhD student Dima Kagan.
