Connecting ATO and transaction fraud dots
A wave of credential stuffing, with no attempt to use the accounts. A pause. The accounts are accessed but not leveraged. A pause. Then, a flood of transaction fraud, using either the taken-over accounts or new ones set up with similar personal information.
The catch: The stages of this process may occur days or weeks apart. And they may not all take place on the same websites.
What’s happening, and how do bot detection and analysis help clarify and prevent fraud?
The “Broken Telephone” Effect
There’s a popular children’s game called “telephone” in which children stand in a circle, and a whisper is passed around the circle. Generally, by the time it reaches the last child, who says the sentence out loud, enough miscommunication has occurred to alter or, in some cases, completely obscure the original message.
Something very similar happens within fraud prevention departments when tracing the links between an initial credential stuffing attack and the later transaction fraud that is directly connected to it but appears separate. This is particularly problematic because this type of attack is becoming increasingly common.
The credential stuffing attack provides the targeted data used for ATO — which is then leveraged for various types of attack, including transaction fraud. Fraud prevention teams struggle to see the whole picture for two reasons:
· The parts of the attack are often carried out by entirely different fraudsters or fraud rings
· The part of the team that focuses on bot detection is often siloed from the part that focuses on preventing transaction fraud
Bot detection is often seen as its own distinct area, which made sense historically given the specialized skills and patterns involved. But maintaining those divisions in today’s complex online ecosystem is problematic and reduces companies’ ability to understand the true nature of an attack.
Bots-as-a-Service
The increased specialization of the online criminal ecosystem means that bot creation and deployment have become highly sophisticated. Traditional methods of catching bots, such as tracking mouse movements and other behavioural signals, have become less effective as a result.
The best bot creators have automated ways of getting around these checks. Similarly, anomaly detection can be foiled by bots that intercept network traffic and manipulate browser parameters to cover up anomalies.
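To make that concrete, here’s a minimal, hypothetical sketch of the kind of behavioural check described above. The field names, thresholds, and header test are invented for illustration rather than taken from any particular detection product, and the point is precisely that a sophisticated bot which replays human-like mouse traces and forges consistent browser parameters would sail through it.

```python
from statistics import pstdev

def naive_bot_score(session: dict) -> float:
    """Toy behavioural score: higher means more bot-like.

    `session` is assumed to hold raw client-side signals, e.g.
    {"mouse_move_intervals_ms": [...], "user_agent": "...", "headers": {...}}.
    """
    score = 0.0

    intervals = session.get("mouse_move_intervals_ms", [])
    # Humans move the mouse with irregular timing; perfectly regular
    # (or absent) movement is a classic automation tell.
    if len(intervals) < 5 or pstdev(intervals) < 2.0:
        score += 0.5

    # Crude consistency check: a Chrome user agent would normally be
    # accompanied by Chrome's client-hint headers. Sophisticated bots
    # simply forge both, which is why checks like this lose effectiveness.
    ua = session.get("user_agent", "")
    headers = session.get("headers", {})
    if "Chrome" in ua and "sec-ch-ua" not in {h.lower() for h in headers}:
        score += 0.5

    return score


if __name__ == "__main__":
    scripted = {"mouse_move_intervals_ms": [10, 10, 10, 10, 10, 10],
                "user_agent": "Mozilla/5.0 ... Chrome/120.0",
                "headers": {"Accept": "*/*"}}
    print(naive_bot_score(scripted))  # 1.0 -> flagged, but trivially evadable
```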
What makes this sophistication pervasive is that bot creators often focus on their area of specialization, getting better and better at building the sneakiest and most successful bots they can. They then offer the bots, or the results of bot attacks, to the rest of the fraudster community. Effectively, bots-as-a-service. For example:
· Allowing other criminals to rent bots for their own use
· Offering bots for sale
· Carrying out credential stuffing attacks with bots, creating a shortlist of account details that can be used for attacks, and then selling the list
In the last scenario, you might have as many as 3 or 4 different groups of fraudsters involved — the bot creators, the fraudsters carrying out credential stuffing, the fraudsters peeking into the accounts to see what’s there (stored payment method, gift card, loyalty points, personal data, etc.), and the fraudsters who eventually monetize the data.
All of this means you need your bot experts working hand in hand with your fraud experts, tracking these patterns of behaviour so they’re prepared for the next stage of the attack. If you can work across companies to protect against scenarios where the same credentials are tried across multiple sites, that’s even better.
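As a rough illustration of what that collaboration can look like in data terms, the sketch below joins earlier bot-detection events to later transactions on shared identifiers such as a hashed email or device fingerprint. All event shapes and field names here are assumptions made up for the example; the idea is simply that a clean-looking purchase can inherit the risk context of a credential stuffing wave recorded weeks earlier.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical event shapes; in practice these would come from the bot
# detection and payment systems respectively.
bot_events = [
    {"email_hash": "a1b2", "device_id": "dev-77", "type": "credential_stuffing",
     "seen_at": datetime(2024, 3, 1)},
]
transactions = [
    {"email_hash": "a1b2", "device_id": "dev-99", "amount": 480.0,
     "created_at": datetime(2024, 3, 19)},
]

def link_to_bot_activity(txn, events, window=timedelta(days=45)):
    """Return earlier bot events that share an identifier with this transaction."""
    index = defaultdict(list)
    for ev in events:
        index[("email_hash", ev["email_hash"])].append(ev)
        index[("device_id", ev["device_id"])].append(ev)

    matches = []
    for key in (("email_hash", txn["email_hash"]), ("device_id", txn["device_id"])):
        for ev in index[key]:
            # Only count bot activity that preceded the transaction within the window.
            if timedelta(0) <= txn["created_at"] - ev["seen_at"] <= window:
                matches.append((key[0], ev["type"]))
    return matches

for txn in transactions:
    hits = link_to_bot_activity(txn, bot_events)
    if hits:
        print(f"Transaction for {txn['amount']} linked to prior bot activity: {hits}")
```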
Fraud Often Begins with Bots
In today’s complex, interconnected criminal underworld, fraud attacks often begin with bots. Being able to see the signs from early on puts fraud teams in a far stronger position to guard against whatever type of attack follows from the bots.
Whether it’s an impending ATO attack, hypersale abuse threatening sites that run limited-edition or flash sales, or a flood of fake accounts created for fraud or promotion abuse — nowadays, it often starts with bots.
Bot creators have begun selling on the various types of accounts they’ve taken over using their skills, particularly on marketplaces and second-hand selling websites. This illustrates how integrated online crime is becoming; in this one scenario you see bot expertise and its consequences, ATO, and the online criminal marketplace in action, with all the possibilities that come with it. The result is fraudulent transactions, data theft, social engineering, phishing and more, all starting with bots.
Many bot creators who sell on these marketplaces also describe their listings in incredibly refined detail, giving potential buyers insight into what they can expect. For example, a high validation rate means the hacked account has a high chance of still being active and not blocked by an anti-bot or anti-fraud system; a fast payout means a fraudster can quickly cash out the account to monetize the attack; and so on.
Better Together: Bot Identification and Fraud Prevention
Only when you trace the work and impact of bots can you see the complete fraud scheme laid out and ensure that your company is protected from current threats and whatever is about to hit.
Bot detection and fraud prevention working together means better detection and protection, ensuring a 360-degree view of what’s being deployed against your site or app, no matter how many sources are involved. It’s vital to ensure that knowledge sharing is real-time or close to it; otherwise, the fraudsters will move faster than your team can.
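One way to picture “real-time or close to it” is a shared event stream: the bot side publishes verdicts as sessions are classified, and the fraud side folds them into its risk view as they arrive. The sketch below is only a toy, using an in-process queue as a stand-in for whatever message bus or streaming platform a team actually runs; every name in it is invented for the example.

```python
import queue
import threading

# Stand-in for a real shared message bus used by both teams.
bot_signal_bus: "queue.Queue" = queue.Queue()

def bot_detection_side():
    # The bot team publishes verdicts the moment sessions are classified.
    bot_signal_bus.put({"device_id": "dev-77", "verdict": "credential_stuffing"})
    bot_signal_bus.put(None)  # sentinel: end of stream for this demo

def fraud_scoring_side():
    # The fraud team folds bot verdicts into its risk view as they arrive,
    # instead of waiting for a batch export days later.
    risky_devices = set()
    while (signal := bot_signal_bus.get()) is not None:
        risky_devices.add(signal["device_id"])
        print(f"Risk view updated, flagged devices: {risky_devices}")

producer = threading.Thread(target=bot_detection_side)
consumer = threading.Thread(target=fraud_scoring_side)
producer.start()
consumer.start()
producer.join()
consumer.join()
```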
It’s great to have tools and teams that specialize in different areas. To get the real benefit from this expertise, you need to ensure that it’s brought together into a single, synchronized system of protection.