
The Dark Side of AI: How Fraudsters Are Using Artificial Intelligence to Scam at Scale

February 11, 2025

If one topic is set to dominate cybersecurity in 2025, it is artificial intelligence. But how exactly is AI helping fraudsters carry out digital fraud?

Let’s explore the flip side of the technology: AI-generated scams.

AI, a powerful force in the current fraud landscape, is both a friend and foe to fraud fighters. While AI strengthens fraud prevention systems and improves detection accuracy on the security side, it also makes fraud attempts more accessible, targeted, scalable, convincing, and ultimately much more dangerous.

Five examples of AI-powered fraud

Let’s explore the key areas of fraud where AI provides the greatest advantage to cybercriminals.

1. Deepfake video scams

Deepfakes are undoubtedly the most well-known and recognizable form of AI-generated scams. These convincing, artificially generated videos designed for impersonation are the ultimate weapon of social engineering. Moreover, as AI advances, their quality and accessibility continue to improve.

In digital fraud, deepfakes are used for various forms of impersonation scams, from advanced CEO/CFO scams (as the Arup case showed) to impersonations of celebrities promoting get-rich-quick schemes. It is no coincidence that Elon Musk is the celebrity most often used in deepfake scams: he is famous, wealthy, and therefore a highly convincing face for an investment scheme.

However, it would be a mistake to assume that only celebrities or heads of multinational corporations are targeted by deepfake videos. With the rise of social media, where many users share videos of themselves speaking, virtually everyone is providing material for deepfakes—and scammers know how to exploit it. Deepfakes will therefore soon be everywhere: the number of deepfakes online is doubling every six months. In 2025, we can expect to see around 8 million deepfakes shared online.

Financial institutions cannot afford to ignore this threat, because it is already knocking on their doors. In the financial sector, deepfake incidents surged by 700% in 2023 compared to the previous year. A 2024 Medius survey revealed that over half of finance professionals in the US and UK had been targeted by deepfake-powered financial scams, with 43% admitting that they had fallen victim to these attacks. Deepfakes also hit the cryptocurrency sector hard, with a staggering 654% rise in deepfake-related incidents between 2023 and 2024.

2. Voice cloning

Deepfake videos are not the only way to convincingly impersonate someone. Voice cloning schemes, where fraudsters use AI to artificially create “deepfake” voice messages, are gaining in popularity.

The use of voice cloning for fraud is highly varied. There have been numerous cases of CEO/CFO fraud—a prime example being a voice cloning scheme aimed at the CEO of the world’s largest ad group. There is also a growing number of grandparent scams that target older people and their sense of responsibility for their families. In these cases, scammers imitate the voice of a relative who is supposedly in distress. Similar tactics are also used by scammers for extortion, pretending to have kidnapped a loved one.

For the reasons above, AI-generated voice scams continue to spread. Research by Starling Bank showed that 28% of UK adults think they have been targeted by an AI voice cloning scam in the past year. Alarmingly, nearly half (46%) of UK adults do not know this type of scam even exists. Additionally, a McAfee survey revealed that out of 7,000 respondents, one in four had encountered an AI-generated voice scam, either personally or through someone they know. Meanwhile, 37% of organizations globally reported being targeted by a deepfake voice attempt, according to Medius.

3. Synthetic identity fraud

Synthetic fraud is another AI-driven cyber threat, where fraudsters create fake identities by combining real and AI-generated information. These synthetic personas allow criminals to open accounts, apply for loans, and carry out various fraudulent activities.

In creating a synthetic identity, known as a Frankenstein ID, fraudsters take advantage of increasingly frequent data breaches. By mixing stolen information (such as Social Security numbers) with fabricated names, addresses, and dates of birth, they can build identities that slip through banks’ application and verification processes. Fraudsters then obtain lines of credit with no intention of repaying them.

Although not always obvious to the general public, synthetic fraud is a serious threat to financial institutions. It is the fastest-growing financial crime in the United States. A 2016 study revealed that synthetic fraud cost banks a staggering $6 billion. With the help of AI, this amount is expected to rise rapidly. Deloitte estimates that synthetic identity fraud could result in at least $23 billion in losses by 2030.
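
To make the defensive side of this concrete, here is a minimal sketch of one kind of linkage check that can be run against Frankenstein IDs: looking for the same Social Security number appearing under different names or dates of birth across applications. The function name, field names, and example data are illustrative assumptions, not a description of any institution's actual verification process.

```python
# A highly simplified linkage check against synthetic ("Frankenstein") identities:
# the same stolen SSN showing up under more than one name/date-of-birth combination.
# All names and fields here are made up for illustration.
from collections import defaultdict

def flag_reused_ssns(applications: list[dict]) -> set[str]:
    """Return SSNs that appear with more than one distinct name/DOB combination."""
    identities_per_ssn = defaultdict(set)
    for app in applications:
        identities_per_ssn[app["ssn"]].add((app["name"], app["dob"]))
    return {ssn for ssn, identities in identities_per_ssn.items() if len(identities) > 1}

# Example: one stolen SSN, two fabricated personas applying for credit.
apps = [
    {"ssn": "123-45-6789", "name": "Alice Novak",  "dob": "1984-02-01"},
    {"ssn": "123-45-6789", "name": "Alan Norwood", "dob": "1991-07-19"},
    {"ssn": "987-65-4321", "name": "Real Person",  "dob": "1975-05-30"},
]
print(flag_reused_ssns(apps))   # -> {'123-45-6789'}
```

Real synthetic-fraud detection combines many such signals (shared phones, addresses, devices, and application velocity); this sketch only shows the basic idea of cross-linking identity attributes.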

4. Advanced financial malware

Using AI to create advanced financial malware is one of the most widely discussed threats in cybersecurity. AI-powered financial malware can dynamically adapt, learn, and optimize its attacks in real time—unlike traditional malware, which relies on pre-written code with fixed behaviors.

AI-powered malware is thus harder to detect and mitigate. It can evade traditional antivirus measures by altering its behavior or appearance based on the security environment it encounters. This is known as polymorphic malware, and reports suggest that OpenAI’s ChatGPT has already been used to create a new strain of it.

Even though some argue that AI-based malware does not yet pose an imminent danger and has not been widely used in real-world attacks, AI can still help fraudsters develop and target malware more easily, lowering the barriers to entry for creating malicious content. That alone is worrying.

5. More dangerous phishing

While phishing has been around for a long time, the rise of AI ensures it’s not disappearing—quite the opposite. According to many security experts, phishing attacks enhanced by large language models (LLMs) are among the most concerning AI-driven threats today.

Fraudsters use LLMs to write phishing emails and build websites that closely mimic the tone and style of trusted brands or authorities. LLM-generated content also lacks the common red flags, such as grammatical errors and awkward phrasing, that potential victims often rely on to identify scams. Another problem is that, by varying the phrasing of phishing messages, these AI-generated scams can trick not only human targets but also the enterprise security systems designed to detect them.
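
Because AI-written text no longer betrays itself through bad grammar, defenders increasingly lean on signals that do not depend on wording at all, such as whether the claimed brand actually matches the sending domain and the embedded links. The toy check below illustrates that idea; the brand list, function name, and inputs are hypothetical, and real filters combine many more signals than this.

```python
# Illustrative only: one non-textual phishing signal, assuming we already have the
# raw "From:" header and the URLs extracted from an email. Not any product's logic.
from email.utils import parseaddr
from urllib.parse import urlparse

TRUSTED_BRANDS = {            # hypothetical brand -> expected sending domain
    "examplebank": "examplebank.com",
}

def suspicious_brand_mismatch(from_header: str, urls: list[str]) -> bool:
    """Flag mail whose display name invokes a known brand while the sender
    address or embedded links point somewhere else entirely."""
    display_name, address = parseaddr(from_header)
    sender_domain = address.split("@")[-1].lower()

    for brand, expected_domain in TRUSTED_BRANDS.items():
        if brand in display_name.lower():
            # Display name claims the brand, but the address domain does not match.
            # (A real check would use proper domain parsing, not a suffix test.)
            if not sender_domain.endswith(expected_domain):
                return True
            # Any embedded link leading off the brand's domain is another red flag.
            for url in urls:
                link_domain = (urlparse(url).hostname or "").lower()
                if link_domain and not link_domain.endswith(expected_domain):
                    return True
    return False

# Example: perfectly written text, but the infrastructure gives it away.
print(suspicious_brand_mismatch(
    '"ExampleBank Support" <alerts@secure-examplebank.help>',
    ["https://examplebank.help.login-update.com/verify"],
))  # -> True
```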

According to an IEEE study, as much as 60% of participants fell victim to AI-automated phishing, which is comparable to the success rates of phishing messages crafted by human experts. AI also automates the entire phishing process, reducing phishing attack costs by more than 95% while achieving equal or greater success rates.

This trend is especially worrying in the case of spear phishing, that is, highly targeted phishing attacks. In the past, spear phishing was effective but expensive and time-consuming. With the rise of AI-powered phishing, these highly targeted attacks are becoming cheaper and easier to scale, to the point where they cost little more to run than mass, non-personalized phishing.

One thing is clear: phishing is likely to grow in both volume and sophistication as LLMs continue to advance.

How to mitigate AI fraud?

How can we counter the threat of AI in the hands of fraudsters? The only way is to keep up technologically and put AI to work in fraud detection. AI-enhanced fraud prevention systems can play a crucial role in identifying sophisticated fraud, whether by detecting phishing emails and websites, monitoring transactions, or building on a sophisticated behavioral intelligence platform like the one offered by ThreatMark.
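
As a rough illustration of the general idea behind AI-assisted transaction monitoring, the sketch below trains a standard anomaly detector on a customer's historical behavior and flags transactions that deviate from it. The features and numbers are made up, and this is a generic example, not a description of ThreatMark's behavioral intelligence system.

```python
# A minimal sketch of anomaly-based transaction monitoring: learn what "normal"
# behavior looks like, then flag outliers for review. Features and values are
# invented for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-transaction features: [amount, hour of day, new-payee flag]
normal_history = np.column_stack([
    rng.normal(80, 30, 2000),          # typical amounts
    rng.normal(14, 3, 2000),           # typical hours of activity
    rng.binomial(1, 0.05, 2000),       # rarely a brand-new payee
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_history)

# A large transfer to a new payee at 3 a.m. stands out from the learned profile.
incoming = np.array([[4800.0, 3.0, 1.0]])
print(model.predict(incoming))   # -> [-1], i.e. flagged for review
```

Production systems go far beyond a single model like this, combining device, behavioral, and transaction signals, but the principle is the same: use AI to learn legitimate behavior so that AI-assisted fraud stands out against it.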

The threat and potential of AI are intrinsic to the technology. For good to prevail, we must harness it to its fullest capacity to combat fraud. Indeed, fraudsters recognize AI’s power and exploit it extensively. We can expect to see many more examples of this in 2025. The question is—will we be ready?
