Illustration: Gemini, prompt design by Josef Šlerka 2026-02-02
They exploit fears about loved ones and concerns about superiors. And they have artificial intelligence at their disposal. With AI’s help, fraudsters can simulate the voice or appearance of any person. Investigace.cz explains how a new era of fraud works.
When US citizen Sharon Brightwell’s cell phone rang last July, the number on the display showed that it was her daughter calling. When she answered, it was her daughter’s familiar voice that came through. Brightwell heard her daughter stammer that she had just hit a pregnant woman with her car and needed urgent help. Then a man who introduced himself as her daughter’s lawyer took the phone and explained that her daughter urgently needed $15,000 to pay her bail. It was like a bad dream.
In the end, it was agreed that Brightwell would withdraw the money from her account and hand it over in a box to the lawyer’s assistant, who would come to pick it up. And so Sharon Brightwell lost her money, becoming a victim of fraud that used artificial intelligence and so-called deepfake voice technology.
“Attackers mastered these technologies very quickly. In the context of deepfakes, the porn industry was the fastest — a year or two ago, 98 percent of such content consisted of pornographic material into which the faces of famous people were inserted. As the technology improved, the possibilities for attacks expanded. Today, we see attacks on biometric systems (systems that verify unique characteristics such as fingerprints or facial features, ed.), the creation of materials to manipulate public opinion, identity theft, and so-called vishing [voice phishing]. Combined with a stolen ID card and virtual identity, attackers can gain access to financial systems. The range is wide — from financial fraud to evidence tampering and hate propaganda,” says Associate Professor Kamil Malinka of Brno University of Technology.
A Skyrocketing Increase in Fraud
According to statistics, the number of attempts at such fraud has skyrocketed since 2023, increasing by three thousand percent. Creating such a deepfake is simple and inexpensive. What used to be done only by large film studios can now be done by an attacker in a matter of minutes using commonly available internet services or their own computer.
There are essentially two basic technologies. The first is voice synthesis. Synthesizers are based on two principles: text-to-speech, or TTS, which creates a model capable of generating the victim’s voice from text input, and voice conversion, in which the attacker’s own speech is mapped onto the victim’s voice, learned from a recording. “Due to the quality of the output, TTS is used almost exclusively for attacks, and it is a pre-prepared recording,” explains Malinka, of the Faculty of Information Technology at Brno University of Technology, describing the attack procedure for Investigace.cz.
Today, approximately 3 to 10 seconds of your voice is all AI needs to create a model that can say anything the attacker wants. To play it safe, however, attackers follow a set scenario: the victim hears a short emotional plea from a loved one, after which a “lawyer” or “police officer” takes over the call. The interactive part of the call therefore no longer requires ad hoc voice imitation. Sharon Brightwell fell victim to this type of attack.
“It’s not difficult to obtain such a sample. All I have to do is stop you on the street, have a phone in my pocket that will record our conversation, and ask you for directions to the nearest pub. In a moment, I have the 30 seconds I need. But that’s a very paranoid scenario; the attacker will choose a less demanding route. Similarly, any YouTube video or public presentation can be used,” explains Malinka.
The second technology, increasingly used for cyber fraud, is the generation of artificial video. AI learns the face and movements of the “victim” and either generates them directly or synchronizes lip movements with the fake audio. It is not yet able to maintain perfect facial consistency, but that is only a matter of time.
A financial officer at the Hong Kong branch of the British engineering firm Arup fell victim to such a scam. In January 2024, he joined a video conference where he saw his CFO and other colleagues from the London headquarters. However, they were all deepfakes: the attackers had used publicly available videos to train AI models. The result was 15 transfers totaling $25.6 million (approximately CZK 530 million). Attempts at such attacks have also been seen in the Czech Republic, with GymBeam, for example, becoming a target, though it should be noted that the attack was unsuccessful.
Direct fraud is, of course, only one way to exploit deepfake technology in cybercrime. We are increasingly encountering cases where deepfakes are used to circumvent biometric authentication. This type of attack is called an injection attack and poses a major threat to KYC (Know Your Customer) processes. We saw this, for example, in the case of a Hong Kong fraud network broken up in 2025, which used deepfakes and stolen documents to set up fraudulent bank accounts. The total losses amounted to $193 million (approximately CZK 4.05 billion).
“Biometric systems are inherently vulnerable to deepfake attacks — if the fake is of sufficient quality, its characteristics are identical to the original. Biometrics alone has no chance of detecting deepfakes. Protection therefore relies on additional liveness detection, which verifies that a real person is sitting in front of the camera, and on a deepfake detector; however, the reliability of both approaches is insufficient in the context of current AI-based attacks,” explains Malinka.
Democratization of Attacks
As the cost of carrying out this type of attack decreases, the number of groups affected is growing. At the top of the list are victims of so-called CEO frauds, where attackers pose as senior company executives and order urgent money transfers. Given the complexity of preparing an attack, it pays to target large sums of money. One well-known case is the attack on a British energy company in 2019, when the “CEO of the German parent company” ordered an urgent transfer of €220,000 (over CZK 5 million) via a phone call.
The aforementioned attacks exploiting emergency situations are also spreading. They are called Grandparent Scams (or, more generally, Family Emergency Scams). They are actually an evolution of fraudulent phone calls. Previously, a supposed friend of the grandchild or a police officer would call to say that your loved one had had an accident. Now, instead of a friend of the grandchild, the “grandchild himself” calls in his own voice. This significantly increases the success rate of the attack, as the victim hears a voice they know intimately. Emotions and a sense of urgency play a major role.
Another type of attack involves stealing from people who believe they are in long-term relationships built on dating sites or social networks, and it ends with the victim being robbed of their life savings. This type of attack is called a Romance Scam or, contemptuously, Pig Butchering. In these instances, fraudsters do not try to impersonate a specific existing person. Instead, AI-generated selfies and deepfake video calls create the illusion of a real person, while the fraudster can conduct dozens of parallel relationships with different victims. In October 2024, Hong Kong police broke up an organized group of 27 scammers who used deepfakes on dating platforms to steal $46 million (almost CZK 1 billion) from victims across Asia.
Finally, other frequent targets are banking and financial institutions. Here, fraudsters use deepfakes to set up bank accounts in other people’s names so that they can launder money, for example. One example is the OnlyFake platform, uncovered in 2024, which generated fake IDs for as little as $15. Attacks on banking systems are typically two-stage: the first step is to produce a high-quality forged document, and the second step is to overcome biometric verification and liveness checks when opening an account. AI makes both steps significantly easier.
How to Defend Yourself
All these scams often involve simple psychological manipulation: fear for a loved one, fear of a superior, or love for a virtual partner. In addition, people often fail to recognize deepfake scams. According to research, people correctly identify high-quality deepfakes in only 24.5 percent of cases — which is worse than a random guess. A full 68 percent of deepfake videos today are nearly indistinguishable from authentic material.
“Four years ago, people’s success rate in recognizing deepfakes was around 70-80 percent, but with the development of new generations of synthesizers, the quality of fakes has improved so much that we can no longer rely on our ability to recognize deepfakes. Furthermore, our research shows that if people do not know in advance that they are evaluating a deepfake — which is the scenario that occurs during an attack, because unlike a scientist, an attacker typically does not warn you about the nature of the attack — their ability to recognize it drops to virtually zero,” says Kamil Malinka.
The situation is also evolving rapidly. “When it comes to deepfake videos, attackers now have real-time tools that allow them to respond to the victim’s prompts directly during the call. The situation is slightly better with voice attacks, which still require time to prepare. However, I expect that within approximately three years, sufficiently high-quality voice conversion systems will be commonly available, making it possible to carry out audio attacks in real time,” says Malinka.
“However, people are beginning to suspect that something like this exists and are ceasing to believe everything they see on the internet. We are moving towards a situation where we will assume that digital content may be fake and will verify it more, but vulnerable groups will still exist. It is interesting to speculate that the low quality of some deepfakes may not be accidental. It can act as a filter — if a person cannot recognize even a poor-quality hoax, they are an ideal target for further manipulation by an attacker. Every day, we live with a number of cybersecurity risks, such as viruses and fraudulent emails. And deepfakes will just be another ‘bug’ that we have to learn to live with,” adds Malinka.
How can we defend ourselves against all this when our ability to detect fraud fails? According to experts from the World Economic Forum and security companies such as Reality Defender and Pindrop, there are basically two types of defense. The first is technical, such as deploying software that attempts to detect deepfakes.
“However, it is important to realize that technical detection is only the last layer of defense — when it fails, the attack is successful. That is why it is crucial to build multi-layered protection: from minimizing publicly available recordings of one’s own voice and photographs, through various regulatory requirements for model creators, to procedural measures — such as limiting the amount of payments authorized by voice alone. Detection software comes last. However, it is still in its infancy, and there are still few commercially available and truly reliable detectors,” Malinka points out.
The second type of defense is process change. This is actually the only step that can be taken immediately: introduce an additional verification factor or stop relying on voice and other biometrics for authentication during video calls. No money transfer should be possible based on a single call from a superior. If the bank or your boss calls with an urgent request, simply hang up and call back on a verified number. For family and close friends, it may help to set up a security question. Agree to ask a question that only the real caller would know the answer to. It sounds a bit like something out of Harry Potter, but deepfake technology is truly magical in some ways.
In the past, the rule was to trust but verify. Today, we live in an online world of zero trust where we can’t even rely on our own eyes and ears.
This article is compiled from the Czech-language article and interview published on Investigace.cz.
Josef Šlerka has worked as a data analyst and reporter at the Czech Centre for Investigative Journalism since 2021. He previously headed the Czech Fund for Independent Journalism (NFNZ). He is also the head of the Department of New Media Studies at Charles University in Prague.