Algorithm hunts down groups spreading violence and fake news
Researchers in Israel have developed a new algorithm to detect groups of anomalous users who might be spreading fake news or promoting violence.
The findings, from Ben-Gurion University of the Negev, were published in the January issue of the peer-reviewed journal Neural Processing Letters. “Due to the increase in volume and sophistication of cyber-threats, the ability to detect a group of entities whose linkage is abnormal regarding the other network’s edges, namely, the detection of anomalous communities, has become a necessity and a valuable field of research,” wrote lead author Dr Michael Fire.
A characteristic of complex networks is the formation of communities, he says: for example, a group of social network users who share a common interest, a team of coworkers who expose one another to virus transmission, a family of ingredients from a particular cuisine, or even a city neighbourhood served by the same water supply system. Analysing these community-structured networks can give researchers meaningful insights into the communities themselves.
“The advantage of this study is that we can detect anomalous groups of users - such as groups of fake profiles - rather than single users,” says Fire, Head of the Data4Good Lab and a member of the Department of Software and Information Systems Engineering. “Uncovering groups of fake profiles is a challenging and less explored task. An anomalous user community might be one that is promoting violent behaviour or extremism, or it may be spreading fake news, but it could also potentially help locate hot spots during pandemics.”
Generic method means potential for other platforms
The researchers have named this method the Co-Membership-based Generic Anomalous Communities Detection Algorithm (CMMAC) and say it is not restricted to a single type of network.
“Our method is generic,” says Dr Fire. “Therefore, it can potentially work on different types of social media platforms. We tested it on several different types of networks, such as Reddit and Wikipedia, which is also a type of social network.”
After testing their method on randomly generated and real-world networks, they found that it outperformed other methods in various settings.
“Our method is based solely on network structural properties,” says Dr Fire. “That makes our method independent of vertices’ attributes, relying only on the connections between users online. Thus, it is agnostic to the domain. When comparing our algorithm with other algorithms, it performed better on simulation and real-world data in many cases. It successfully detected anomalous user communities that presented peculiar online activity.”
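The published CMMAC algorithm relies on co-membership features, which are not reproduced here; but the general idea of scoring communities purely from network structure, with no user attributes, can be sketched in a few lines. The following is a simplified illustration, not the researchers' method: it scores each candidate community by its conductance (the share of edge endpoints that leave the community) and flags communities whose score deviates sharply from the rest. All function names and the z-score cutoff are our own choices.

```python
from itertools import combinations

def edge_density(graph, community):
    """Fraction of possible internal edges that actually exist.

    graph: dict mapping each node to a set of its neighbours (undirected,
    so each edge appears in both endpoints' sets).
    """
    members = list(community)
    n = len(members)
    if n < 2:
        return 0.0
    possible = n * (n - 1) / 2
    actual = sum(1 for u, v in combinations(members, 2)
                 if v in graph.get(u, set()))
    return actual / possible

def conductance(graph, community):
    """Share of the community's edge endpoints that cross its boundary."""
    community = set(community)
    internal = boundary = 0
    for u in community:
        for v in graph.get(u, set()):
            if v in community:
                internal += 1   # counted once per endpoint
            else:
                boundary += 1
    total = internal + boundary
    return boundary / total if total else 0.0

def anomalous_communities(graph, communities, z_threshold=2.0):
    """Flag communities whose conductance deviates strongly from the mean."""
    scores = [conductance(graph, c) for c in communities]
    mean = sum(scores) / len(scores)
    var = sum((s - mean) ** 2 for s in scores) / len(scores)
    std = var ** 0.5 or 1.0
    return [c for c, s in zip(communities, scores)
            if abs(s - mean) / std > z_threshold]
```

Because the scoring uses only the edge structure, the same code runs unchanged whether the nodes are Reddit accounts, Wikipedia editors, or anything else, which is the domain-agnostic property the researchers describe.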
Contributing researchers included Shay Lapid, an MA student, and PhD student Dima Kagan.