Detecting and preventing deepfake attacks in the wild

Claire Nunez

May 15, 2025 · 5 min read

[Image: A MacBook with lines of code on its screen on a busy desk]

Originally posted on IBM Think on May 15, 2025. Co-written with Dr. Marco Simioni.

While the near-universal embrace of AI-powered tools and processes speeds up mundane tasks and makes organizations more efficient, some are not using generative AI for good. Cyber criminals and scammers often use AI-generated imagery and video to dupe unsuspecting audiences. And even if you are highly vigilant, chances are some of this generated content has slipped past you unrecognized.

In fact, according to the IBM Threat Intelligence Index 2025, generative AI is emerging as a new addition to threat actors’ toolboxes, especially for social engineering and developing malicious code. A deepfake attempt occurred every five minutes in 2024, according to Entrust, while Sumsub reports a 245% year-over-year increase in deepfakes worldwide in 2024. A recent study by Medius found that one in two finance professionals has been hit with a deepfake scam attempt, and, more concerning still, that nearly half of those targeted were successfully scammed, showing just how effective these attacks can be.

These attempts will only grow more frequent and more successful as generative AI models improve, and more common still as the technology gets cheaper. X-Force researchers were able to create realistic deepfakes in less than an hour with as little as USD 5 worth of cloud computing resources, and in a few months or years the cost and time required will likely drop further. It is imperative to start preparing your organization to recognize deepfake social engineering today. Together, let’s review the dangers of deepfakes, the ease with which this fraudulent content can be created and recommendations to reduce the risk of compromise.

The dangers of deepfakes

According to a 2023 study from the National Institutes of Health (NIH), even when individuals were warned that one out of five videos was a deepfake, only 21.6% of respondents could correctly identify which video was AI-generated. A recent MIT study suggests that individuals are getting better at identifying synthetic videos, but one out of five deepfakes still goes unnoticed even when all the clues and sensory information are available. Identifying a deepfake can be difficult, especially if we do not know the person being impersonated well or aren’t paying close attention.

As humans, we are generally predisposed to believe what we see, especially if the messaging is repeated. And in our hyper-digital age, different sources of information are constantly competing for our attention, making it harder to separate fact from fiction. We may not notice that a political figure’s voice is slightly off, or zoom in to catch an unnatural blinking pattern, especially if we are not constantly observant.

Threat actors and scammers use this to their advantage, and users fall victim to misinformation as a result. Chances are, if you use popular social media sites, you will encounter an AI-generated video while scrolling. These videos are often weird or comical, but sometimes deepfakes of influencers or other public figures are used to peddle products and misinformation.

AI outputs are getting better and better, and a sixth finger, once a prevalent glitch in early AI-generated images and later the subject of internet memes, will not always be a dead giveaway. It takes a well-trained eye to recognize that some images are not real. Some social media apps and websites require disclosure when shared content is AI-generated, but there are loopholes creators can use to avoid doing so.

It is surprisingly easy to make a deepfake

Generative AI engines typically have safeguards that prevent users from creating graphic or harmful content such as pornography or political disinformation; however, it is possible to use open-source software or prompt engineering to produce dangerous output anyway. According to Google Trends, searches for “free AI voice cloning” increased over 200% between 2022 and 2025, and searches for “AI voice cloning software” jumped by over 450% within the same time frame.

In IBM X-Force Cyber Range experiences, clients often request that we make deepfakes of their executive teams to validate how their staff react and to observe how they protect themselves and their business from this threat. We do this only with the individual’s explicit consent: they must approve the script and the footage used, and sign a consent form, following a process we created in partnership with the IBM Tech Ethics board. Threat actors, by contrast, leverage whatever images or videos of the victim they can find online.

If you have good footage of someone, meaning the audio is clear and the person is positioned properly within the frame, and a decent model, creating a deepfake can be quick and easy. All it takes to produce a good output is time, energy, patience and computing power. When we create deepfakes for our clients, we typically run the engine only two or three times, which for a threat actor is not a lot of effort.

How to prepare your organization to recognize deepfake attacks

While innovative solutions like Reality Defender can offer a crucial line of defense, the most important element of preventing deepfake fraud is awareness. The second most significant is vigilance. So, how can you recognize a deepfake? And more importantly, how can you get your organization to collectively recognize deepfakes and prevent compromise?

  1. Identify: Typical giveaways are anatomical, sociocultural or functional implausibilities, such as extra fingers, unusual positions or activities, unlikely interactions or improbable fashion choices (see the late Pope Francis in a floor-length puffer jacket). In videos, movements will typically be unnatural, with mouth or eye movements that do not coincide with the audio. Stylistically, AI-generated imagery and video are often hyper-realistic in both the foreground and background, leaving the image with a flat, dimensionless quality. (A toy image-screening heuristic is sketched after this list.)

  2. Validate: In the case of a live deepfake, or an impersonation of someone on a video call, the element of surprise is what makes these attacks so successful. Validating individuals (especially if it is an executive on the other end of the camera) can feel intimidating and awkward, but other attendees can always ask a question that only that individual would know. Think of a distinct question whose answer could not be easily pulled from a published interview or an online search. Teams can also agree on an out-of-band check ahead of time (see the verification sketch after this list).

  3. Educate: To best educate your organization’s employees, generative AI needs to be a regular part of the conversation. That means covering deepfakes (and how to recognize them) in routine security conversations and training, making AI policies known and encouraging employees to slow down. Most security mistakes happen when employees act too quickly and forget to validate. Remind teams to follow their gut instincts: if something seems too good to be true, or there is too much pressure on an ask, question it.
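
To make the first step concrete, here is one toy technical signal that complements the visual checks above: the share of an image’s energy in high spatial frequencies, which early GAN-generated images were known to distort. This is a minimal illustrative sketch in Python (assuming NumPy and Pillow are installed), not how any production detector works; the cutoff value and the high_freq_ratio helper are hypothetical choices for demonstration, and a score is only meaningful relative to a baseline of known-real photos.

    import numpy as np
    from PIL import Image

    def high_freq_ratio(path: str, cutoff: float = 0.25) -> float:
        """Fraction of spectral energy above a radial frequency cutoff.

        Some generated images carry unusual high-frequency fingerprints
        compared to camera photos, so an outlier score versus a baseline
        of known-real images can serve as one (weak) warning sign.
        """
        # Load as grayscale and compute the centered 2D power spectrum.
        img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
        h, w = spectrum.shape
        yy, xx = np.ogrid[:h, :w]
        # Normalized radial distance of each frequency bin from the center.
        radius = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
        return spectrum[radius > cutoff].sum() / spectrum.sum()

    # Usage sketch: compare a suspect image against a trusted baseline.
    # baseline = [high_freq_ratio(p) for p in known_real_photos]
    print(round(high_freq_ratio("suspect.jpg"), 4))

A heuristic like this is far too weak on its own; dedicated detection tools train models on many such signals at once.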
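
The validation step can likewise be operationalized in advance. One simple pattern, sketched below under the assumption that a secret has been shared through a separate channel (for example, in person), is a rotating one-time code: before a sensitive request is honored on a call, the requester reads out the current code and attendees verify it. This minimal RFC 6238 (TOTP) sketch uses only the Python standard library; the SHARED_SECRET value is a placeholder, and in practice an established authenticator app is a better choice than rolling your own.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        """Time-based one-time password per RFC 6238 (HMAC-SHA1)."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval  # current 30-second time step
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F              # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Usage sketch: the caller reads out totp(SHARED_SECRET); attendees
    # run the same function and confirm the codes match.
    SHARED_SECRET = "JBSWY3DPEHPK3PXP"  # placeholder; distribute out of band
    print(totp(SHARED_SECRET))

Because the code changes every 30 seconds and depends on a secret never shown on camera, a deepfaked participant cannot produce it, no matter how convincing the audio and video are.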

Generative AI deepfakes will continuously improve, but if we are all a bit more attentive, we can prevent the spread of misinformation and fraud.

The IBM X-Force Cyber Range simulates cyber crises for executive and technical teams. Deepfakes can be embedded into the scenarios to educate participants on the dangers of AI-generated social engineering. Learn more about the X-Force Cyber Range.