How AI is shaping the price tag of scams and disinformation in 2024 

May 1, 2024


Abhishek Karnik, Senior Director of Research at McAfee on staying safe in the age of AI.

2023 marked the acceleration of low-cost, easy-to-use generative AI tools. For all the opportunities these tools unlock, cybercriminals are seeking out ways to use them to spread disinformation and run sophisticated scams at scale.

When it comes to AI-generated audio, most of the scams we’ve been learning about at McAfee involve a cybercriminal impersonating someone familiar to the victim, such as a friend or family member, and claiming that they’ve been in a car accident or are in some kind of trouble and need urgent help… and money.

Concerned by the efficacy of this scam, we started to investigate it a little more. We found that almost a quarter of Brits say that they or a friend have already experienced an AI voice scam like this. Nearly 80% of victims confirmed they had lost money as a result.

With 65% of adults not confident that they could distinguish a cloned voice from the real thing, it’s no surprise that this technique gained momentum, especially as we found more than a dozen freely available AI voice-cloning tools on the internet.

As we look ahead to 2024, we’re anticipating a new wave of elevated AI-generated scams, mis- and disinformation, and online threats. Here’s what to look out for. 

Disinformation and Elections 

2024 will see several pivotal elections across the globe, including the United States Presidential Election, the Indian general election, European Union Parliament elections and, potentially, a UK general election. 

With generative AI becoming more accessible than ever, it can be expected that this year’s election cycle will be significantly impacted by deepfakes and disinformation. Using just a small sample of a candidate’s voice, bad actors can create realistic voice clones that could be used to damage their reputation and credibility. 

A recent McAfee study found that when asked which potential uses of deepfakes are most concerning, 37% of Brits said influencing elections. In the last year, 35% of people have seen deepfake content, with over a fifth (21%) coming across a video, image, or recording of a political candidate which they thought was real at first.

While many voters will likely raise a sceptical brow at statements made by politicians to discredit their opponents, defamatory statements supported by convincing deepfakes are likely to appear far more believable. To avoid being misled by disinformation, it is important to check the facts using multiple sources, particularly if you are inclined to share the content, as this could spread the disinformation further.

To help people protect themselves from being misled by online manipulation, at McAfee we are developing AI-powered innovations, such as Project Mockingbird, which uses a combination of contextual, behavioural, and categorical detection models to detect and expose AI-generated, maliciously altered audio in videos with 90% accuracy. 

Olympic-sized scams

Aside from elections, the 2024 Summer Olympics in Paris are on the horizon and an event with this level of appeal is likely to attract cybercriminals looking to capitalise on fans’ excitement. 

Cybercriminals have hooked onto popular events for phishing texts and emails for years, but these scams are becoming harder to identify as generative AI removes the traditional hallmarks of misspelled words and poor grammar. Generative AI also allows scammers to create custom phishing websites in different languages to target individuals based on locale. Combine that with the excitement surrounding the Olympic Games, and users may be tempted by that email or website promising a chance to win tickets with one click.

Emails, text messages, other messaging channels like WhatsApp and Telegram, and even social platforms are all fair game, so it’s essential to remain vigilant and pause to think before clicking links or giving out personal or banking information.

Conflicts ramping up charity fraud 

Scammers exploit emotions – such as the excitement of the Olympics. Sadly, they also tap into fear and grief. A particularly heartless method of doing this is through charity fraud. While this takes many forms, it usually involves a criminal setting up a fake charity site or page to trick well-meaning contributors into thinking they are supporting legitimate causes or contributing money to help fight real issues. 

We expect this to continue in 2024, and potentially to increase given the conflicts in Ukraine and the Middle East. Scammers may also heighten the emotional pull of their messaging by tapping into the same AI technology we predict will be used in the 2024 election cycle. Overall, expect their attacks to look and feel far more sophisticated than in years past.

Used right, AI could be a cybersecurity hero 

There are two sides to every story, and AI is no different. While AI presents new cybersecurity risks, it also has the potential to transform cybersecurity in a positive direction by improving threat detection, prevention, and response.  

AI is constantly learning, which means it can analyse vast amounts of data, far more than human cybersecurity professionals, to identify patterns, anomalies, and other indicators of threats, and it does all of that in real-time. It can also leverage historical data to detect emerging threats, like phishing attempts, insider threats and malware.  
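To make the idea of pattern-based threat detection concrete, here is a deliberately simple sketch: a toy rule-based scorer that flags phishing-style messages. Real AI protection, including the products discussed here, relies on trained models over far richer signals; the keywords and URL patterns below are hypothetical illustrations, not McAfee’s actual detection logic.

```python
import re

# Toy heuristic scorer for phishing-style messages (illustrative only).
# Urgency language and suspicious link shapes are classic red flags.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "act now"}

def phishing_score(message: str) -> int:
    """Return a rough risk score: higher means more phishing-like."""
    text = message.lower()
    # One point per urgency keyword present in the message.
    score = sum(1 for word in URGENCY_WORDS if word in text)
    # Links pointing at a bare IP address instead of a domain.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        score += 2
    # Shortened links that hide the real destination.
    if re.search(r"https?://(bit\.ly|tinyurl\.com)/", text):
        score += 2
    return score

print(phishing_score("URGENT: verify your account immediately http://192.168.0.1/login"))  # → 5
print(phishing_score("See you at lunch tomorrow"))  # → 0
```

A production system would replace these hand-written rules with models trained on large volumes of labelled messages, which is what lets it generalise to never-before-seen threats in real time.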

McAfee’s AI-powered Scam Protection, for example, proactively blocks dangerous links that appear in text messages, social media, or web browsers and allows users to engage with text messages, read emails, and browse the web peacefully and securely. 

I’m intrigued at what else 2024 has in store. What is certain, though, is that as scammers continue to leverage AI, so too will the security professionals and threat researchers trying to stop them.

TIPS: Staying safe in the age of AI  

  • When in doubt, use the THEFT acronym – THEFT stands for Tone, Hair, Eyes, Face, Teeth – the key warning signs of a deepfake. Check if the outline of the person is blurry or if small details like their hair or shadows seem off; they may have blotchy patches on their skin or an irregular skin tone. Is the hair a bit too perfect? Are their eyes expressionless or looking in the wrong direction? Head movement can also cause a slight glitch in the rendering of an image, and teeth don’t always render well, sometimes looking more like white bars than the usual irregularities we see in people’s smiles.
  • Take a moment to stop, pause and verify the information – While AI is making scams increasingly sophisticated and difficult to spot, one classic tell-tale sign remains. Cybercriminals will play on emotions by creating a sense of urgency or even fear in a bid to catch you with your guard down. Go direct to the source or try to verify the information before responding, and certainly before sending any money. Always think, could this be a scam?  
  • Regularly monitor personal accounts – whether you think you might have encountered a scam or not, regularly monitor your accounts for any unfamiliar or unauthorised activity, such as attempted logins, messages sent from your account or transactions you didn’t make. If there is something suspicious, report it immediately. 
  • Use AI to fight AI – consider investing in tools to help identify online scams. There are features that detect and protect you in real time from never-before-seen threats and scams – whether that’s dangerous links shared on text, email, search results, or social media. In addition, McAfee recently announced deepfake detection is on the horizon, furthering our commitment to use AI to fight AI scams and help arm consumers with the ability to detect deepfakes.  

About the author

Abhishek Karnik is the Senior Director of Research at McAfee and leads a global team of experts on cybersecurity threats and intelligence with a focus on providing protection content to McAfee products.
