How retailers can protect themselves from AI fraud online 

May 8, 2024


Xavier Sheikrojan, Senior Risk Intelligence Manager at Signifyd, tells SJUK about the influence of artificial intelligence (AI) in fraud cases and how retailers can protect themselves from attacks.

As in so many other industries, AI has changed the game in ecommerce.

Just as it is now hard in art to tell a real painting from an AI-generated one, so too is it hard to tell a real customer from an AI-generated bot.

In the early days of online shopping, fraud was fairly simple.

A fraudster would impersonate a customer and try to wrangle money from a brand by insisting either that a product had not been delivered or that it was substandard, and banks would usually take their side because of the way chargebacks work.

These fraud tactics were relatively tame by today’s standards, with fraudsters having to rely on their ingenuity and deceptiveness to succeed; such fraudsters would typically give up once blocked by a company.  

The picture today is very different: criminal ingenuity is amplified through AI and other tools that are constantly evolving, giving fraudsters new avenues for crime.

With the rise of ecommerce, retailers now deal with millions of online transactions every day, and rather than the odd lone-wolf fraudster, they’re having to contend with well-organised teams that move quickly to find vulnerabilities, learning from what blocks them and adjusting their methods to exploit gaps in protection.

This ability to identify and exploit gaps ever faster comes down mainly to one thing: fraudsters capitalising on AI chatbots.

The versatility of these bots is boundless; they are often used to recreate legitimate customer complaints in order to gain access to sensitive information such as credit card numbers or other personal details.

On a larger scale, fraudsters are also impersonating distributors and warehouses, making bogus claims about hundreds of thousands of products, rather than just one or two as a customer might. 

The evolution of retail fraud represents a significant threat to retailers’ bottom lines.

To combat these criminals, retailers need to know what they’re up against and – critically – how to use these same AI tools to fight back.  

How do criminals use AI chatbots? 

The real power of AI bots is that they make it nearly impossible for retailers to distinguish genuine customer service requests from fraudulent attempts to gain access to account information and credit card details.

A prime example we’re seeing is the creation of phishing templates, generated through ChatGPT and other generative AI tools, to impersonate legitimate customers.

This occurs first and foremost through replicating customer service queries by using huge data stores to recreate the text or speech patterns of actual customers.  

Using this data, criminals are able to exploit retailers by, for example, claiming returns on purchases that were never made and pocketing the money. 

Herein lies the problem for retailers: not only are they losing money, but they have no way to defend themselves. Retailers lose out because, in online retail, shops absorb all the risk.

When chargebacks occur, banks take the side of consumers, even in cases of fraud, and it is the retailer’s responsibility to cover the cost when money is debited back to the consumer.

How do criminals use deepfakes and synthetic personalities? 

If chatbots are a challenge for retailers, deepfakes hold fatal potential.

Deepfakes, technology that uses AI to recreate a person’s likeness or even voice, are becoming so convincing that they are being used to secure goods, full card details and billing information.

Deepfakes take what works within a chatbot and evolve it to work smarter and faster against retailers and consumers.

For example, while a chatbot makes use of text to recreate the typing patterns of customers, deepfakes can recreate voicemails, pictures of customers and more.

In fact, in future we may even see deepfakes become so advanced that they do not need a fraudster’s input at all. Instead, they will be fully automated and let loose on unsuspecting retailers, able to create near-perfect digital doppelgangers of real customers, travelling from store to store online and robbing every retailer in sight.

How do retailers protect themselves? 

High-performance machine-learning models are a powerful defence retailers can use to keep up with rapidly improving bots and deepfakes, which is why many fraud prevention solutions now focus on harnessing these tools to outpace fraudsters.

In short, retailers need to fight fire with fire, or rather AI with AI. 

It’s not always realistic for retailers to build or invest in their own machine learning or AI solutions. That’s where partnerships come into play, providing support in two critical ways:

· Intelligence teams can evaluate existing fraud cases and collaborate closely with data science teams. Together, these groups feed information into their own AI models, which can distinguish between legitimate and fraudulent transactions more quickly, accurately and affordably than humans can (a simplified sketch of such a model follows this list).

· Fraud prevention tools from ecommerce specialists let retailers tap into a huge network of data, enabling better vetting of authentic versus fraudulent customers and transactions.
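To make the first point concrete, here is a minimal sketch of the kind of transaction-risk classifier such teams might train. It is illustrative only: the features, the synthetic data and the choice of scikit-learn’s GradientBoostingClassifier are assumptions made for the example, not a description of Signifyd’s actual models.

```python
# Minimal, illustrative sketch of a transaction-risk classifier (hypothetical features and data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Synthetic stand-in data: order value, account age (days), address mismatch flag, prior chargebacks.
n = 5000
X = np.column_stack([
    rng.gamma(2.0, 50.0, n),        # order_value
    rng.exponential(300.0, n),      # account_age_days
    rng.integers(0, 2, n),          # shipping_billing_mismatch
    rng.poisson(0.05, n),           # prior_chargebacks
])

# Labels: fraud is more likely for high-value orders, new accounts, mismatches and chargeback history.
risk = 0.002 * X[:, 0] - 0.003 * X[:, 1] + 1.5 * X[:, 2] + 2.0 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-(risk - 2.0)))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)

# Score unseen transactions; high scores would be routed to manual review or declined.
scores = model.predict_proba(X_test)[:, 1]
print("ROC AUC:", round(roc_auc_score(y_test, scores), 3))
```

In practice, models of this kind are trained on far richer signals and retrained continually as new fraud cases are confirmed; the point of the sketch is simply that labelled transaction history lets software separate legitimate from fraudulent orders at a speed no human review team could match.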

Even better, working with a fraud prevention provider often means the provider will take on the risk associated with approving a transaction that later turns out to be fraudulent.

As the fraud landscape continues to evolve, this is an increasingly valuable capability to integrate into retailers’ back ends.

The benefits of this approach include retailers being able to trust their customers more easily, knowing that they are protected from fraud by risk intelligence teams and fraud prevention providers.

Retailers need to maintain a friendly atmosphere if they are to maintain good customer relations, but fraudsters can (understandably) drive a wedge between the two groups.

With the help of intelligence teams, retailers can minimise the losses incurred by theft while maximising customer lifetime value by providing good experiences for loyal shoppers.  

To build resilience to fraud, merchants need strong technology and networks behind them to identify anomalies and feed them into their own models, allowing those models to begin learning and to spot fraud more readily.

This process acts as a feedback loop for the “friendly” AI: the more fraudulent transactions it identifies, the easier it becomes to identify similar cases.

This is effective at chasing off even the most persistent fraudsters; those who create fake accounts can still be identified by the AI through link analysis, IP addresses, or other device information, as the sketch below illustrates.
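As a rough illustration of that link analysis, the sketch below groups accounts that share an IP address or device fingerprint and flags anything connected to a known-fraudulent account. The account records and field names are hypothetical, invented purely for the example.

```python
# Illustrative link analysis: accounts sharing an IP or device fingerprint are linked,
# and any account connected to a known-fraudulent one is flagged for review.
from collections import defaultdict

accounts = [
    {"id": "acct_1", "ip": "203.0.113.7",  "device": "fp_aaa", "known_fraud": True},
    {"id": "acct_2", "ip": "203.0.113.7",  "device": "fp_bbb", "known_fraud": False},
    {"id": "acct_3", "ip": "198.51.100.2", "device": "fp_bbb", "known_fraud": False},
    {"id": "acct_4", "ip": "192.0.2.55",   "device": "fp_ccc", "known_fraud": False},
]

# Group account IDs by each shared attribute.
by_attr = defaultdict(list)
for a in accounts:
    by_attr[("ip", a["ip"])].append(a["id"])
    by_attr[("device", a["device"])].append(a["id"])

# Build an undirected graph: an edge links accounts that share an attribute.
graph = defaultdict(set)
for ids in by_attr.values():
    for i in ids:
        graph[i].update(x for x in ids if x != i)

# Walk outward from known-fraud accounts; anything reachable shares infrastructure with them.
flagged, stack = set(), [a["id"] for a in accounts if a["known_fraud"]]
while stack:
    node = stack.pop()
    if node in flagged:
        continue
    flagged.add(node)
    stack.extend(graph[node])

print(sorted(flagged))  # ['acct_1', 'acct_2', 'acct_3'], linked via shared IP and device
```

Here acct_2 is caught because it shares an IP address with a known bad actor, and acct_3 is caught in turn because it shares a device with acct_2; a fresh fake account is only as anonymous as the infrastructure behind it.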

The bad news is that as AI technology improves, so too will the scams that use it.

The good news is that as problems develop, so do solutions.  

To ensure the right solutions at the right time, retailers should work with fraud prevention providers, maintain risk intelligence teams and have a robust fraud prevention strategy in place. This not only helps prevent attacks, but also reduces losses should crime occur.

Additionally, it allows retailers to act with confidence, processing transactions without fear of adverse impact.

Once protected, retailers are better able to focus on what is truly important: customer loyalty, retention, and profit.
