Imagine you’re a telecom fraudster. Depending on your customer satisfaction levels, maybe some people already consider you one! But knowing that most operators these days run at least a minimal form of rules-based fraud prevention, what would your logical next step be?

You’d probably start by gathering as much information as possible. If your end goal is subscription fraud, for example, you’d need an unwitting person’s personal details. Next, you’d impersonate that identity in order to access or open a subscription. Once you’re through the “locked door” and have gained unauthorised access, the next stage is to exploit the weakness. So maybe you’d make some calls to a Premium Rate Number (PRN) you own, knowing that the hijacked account is on the hook for the bills you run up. The final step, if successful, is to put an automated process in place to maximise your gains while minimising your effort. Anyone working with telecom fraud software will recognise this pattern: these basic steps drive the vast majority of rules-based fraud prevention tools.
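A rules-based tool typically encodes that pattern as fixed thresholds over call records. The sketch below is purely illustrative; the prefixes, threshold and record fields are assumptions for the example, not any vendor’s actual rules:

```python
# Minimal sketch of a rules-based fraud check: flag an account whose
# calls to premium-rate numbers (PRNs) exceed a fixed daily threshold.
# Prefixes, threshold and record fields are illustrative assumptions.

PRN_PREFIXES = ("900", "901")   # assumed premium-rate prefixes
MAX_PRN_CALLS_PER_DAY = 5       # assumed per-day threshold

def is_suspicious(call_records):
    """call_records: list of dicts with a 'dest' (dialled number) key,
    covering one account over one day."""
    prn_calls = sum(
        1 for rec in call_records
        if rec["dest"].startswith(PRN_PREFIXES)
    )
    return prn_calls > MAX_PRN_CALLS_PER_DAY

normal = [{"dest": "4471234567"}] * 3
fraud = [{"dest": "9005550001"}] * 20
print(is_suspicious(normal))  # False
print(is_suspicious(fraud))   # True
```

The weakness of this approach, which the rest of the article turns on, is that the threshold is static: a fraudster who learns it simply stays underneath it.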

As a keen follower of fraud trends in 2019, you might notice that this pattern closely matches the basic workflow of a machine-learning system. Gathering information, replicating it (in what appears to be an original form), gaining access, performing an action and then automating the repetition of that action is exactly what ML/AI was made for! So it shouldn’t be a huge shock to learn that fraudsters and criminals are already using this new technology against operators still relying on yesterday’s tools.

Getting someone’s personal details
Returning to the example of subscription fraud, a criminal might use Natural Language Processing (NLP) to analyse social media posts and target groups of people who are vulnerable to phishing attempts or have a high income. Or, if they have a specific target in mind, they could use image recognition tools to track down the person they need.

Assuming someone’s identity
By training a neural network on your company’s communications, a fraudster could generate a convincing fake email that appears to come from you. Sending it to the people already identified through the NLP analysis makes it easy to convince victims to unwittingly hand over their details. In essence, the fraudster first assumes your identity as an operator, and then the victim’s identity as a subscriber.

Accessing the network
For a fraudster looking to operate at scale, individually trying to access or open new services is a slow process. That’s why CAPTCHA and object-segmentation tests were introduced. But with access to deep learning and image recognition, a fraudster can automate these checks with reported success rates of 95-98%. They can also train networks to generate variations on the most common passwords and brute-force their way into a subscription.

Committing fraud
This is the easy and obvious part. It’s not hard to imagine how an organised team of fraudsters, using this technology to gain access to other people’s accounts at scale, might profit. Large-scale International Revenue Share Fraud (IRSF), Premium Rate Service Fraud (PRSF), commission fraud, service reselling and asset acquisition are just a few of the ways they can do serious damage to an operator’s bottom line, and all of these frauds usually start with subscription fraud.

Automating the fraud
Fraudsters aren’t so different from you and me: they want to maximise their returns for the least effort in the shortest possible time. Aside from traditional scripting, ML and AI have given criminals a new tool: the hivenet, a collection of bots that can decide without supervision on the best actions, goals and behaviours available to them. If the bots noticed that hijacked subscriptions used for PRSF were being shut down faster than those used for IRSF, they could automatically switch tasks and maximise their (illegal) returns. To stay in the system, they can use Generative Adversarial Networks (GANs) to minimise the chance of being flagged by black-box, machine-learning-based detection models. In short, ML lets fraudsters stay on your system longer, extract more money and do it at a larger scale.

What does this mean for you?
The principle underpinning most telecom fraud software and management up to now has been to stay one step ahead of the bad guys. The assumption was that fraudsters were individually unsophisticated, yet collectively able to defraud an operator in myriad ways. That led to a game of cat and mouse: a fraudster tries their luck, the operator puts in a rule to stop it, and the fraudster finds a new weakness.

However, ML/AI has blown that paradigm out of the water. Fraudsters can now leap far ahead of operators, organise their actions at huge scale and essentially let a computer keep repeating and optimising the process. When one type of fraud, like subscription fraud, can lead to another more damaging kind, like IRSF, you need to operate at least on the same level as the telecom fraudsters. That makes it absolutely essential for operators to have access to their own ML/AI tools.
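What does operating on the same level look like in practice? Instead of a hand-tuned rule, a detector can learn a normal profile from the data and flag deviations from it. The sketch below is a minimal statistical baseline rather than a full ML model, and all the numbers are synthetic, but it shows the key difference: the threshold adapts to what the data actually looks like.

```python
import statistics

# Sketch of a simple statistical anomaly detector an operator might run
# over a subscriber's daily premium-rate call counts. Unlike a fixed
# rule, the threshold here is derived from the data itself.
# All numbers are synthetic and illustrative.

def anomalies(daily_counts, z_threshold=3.0):
    """Return indices of days whose call count deviates from the mean
    by more than z_threshold standard deviations."""
    mean = statistics.fmean(daily_counts)
    stdev = statistics.stdev(daily_counts)
    return [
        i for i, count in enumerate(daily_counts)
        if stdev > 0 and abs(count - mean) / stdev > z_threshold
    ]

history = [2, 3, 1, 2, 4, 2, 3, 2, 1, 3, 2, 95]  # sudden spike on the last day
print(anomalies(history))  # [11]
```

Production systems would of course use richer features and proper models, but the principle is the same: let the baseline come from the data, so the detector moves when the fraudster moves.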

Using ML/AI to fight back
WeDo Technologies is constantly assessing new technology and how it impacts us, our clients, their end clients, and of course the people who exist to exploit those new technologies.
We’ll be at MWC19 in Barcelona to talk not just about ML/AI and subscription fraud, but about all the ways we can keep our customers ahead of the criminals.

Do you know the difference between myths and fraud?
If you want to go into more detail, please contact us.
