Companies are turning to surprisingly simple tactics to combat sophisticated AI impersonation scams. These low-tech checks are proving highly effective against deepfake audio and video calls, and the approach is gaining traction across industries.
According to The Wall Street Journal, the methods include asking a caller to draw a smiley face on paper, or instructing employees to ask the caller to pan their webcam around the room. Live, physical actions like these disrupt pre-recorded or AI-generated deepfake content.
Why Analog Defenses Are Outsmarting Digital Fakes
Security experts say these simple checks work because they force a real-time, unpredictable interaction that AI cannot easily fake. A request for a physical action, such as drawing, creates a major hurdle for fraudsters.
This human layer of defense matters because deepfake technology has become remarkably convincing. Losses from these scams have surged into the hundreds of millions of dollars globally this year.
The goal is to break the scammer's script. Asking a curveball question that only the real colleague could answer is another powerful tool: it shifts the interaction from passive listening to active verification. That one step can prevent massive financial damage.
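The logic behind these challenges is that they must be unpredictable: a scammer who knows the check in advance can prepare for it. A minimal sketch of that idea, using a hypothetical challenge pool (the specific prompts and the function name are illustrative, not part of any company's documented protocol):

```python
import secrets

# Illustrative pool of live-action challenges. A real organization
# would maintain and rotate its own list so callers can't prepare.
CHALLENGES = [
    "Draw a smiley face on paper and hold it up to the camera.",
    "Pan your webcam slowly around the room.",
    "Hold up three fingers, then touch your left ear.",
]

def pick_challenge() -> str:
    """Choose a challenge unpredictably, so a pre-recorded or
    scripted deepfake cannot anticipate which action to fake."""
    return secrets.choice(CHALLENGES)
```

Using `secrets` rather than `random` makes the choice cryptographically unpredictable, which is the whole point of the exercise.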
New Tech and Protocols Strengthen the Fight
Technology is also evolving to support these efforts. Google is integrating C2PA Content Credentials into its Pixel cameras. This adds a cryptographic “nutrition label” to authentic images and videos, proving their origin.
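The core idea behind such provenance schemes is to sign a hash of the content at capture time so any later edit is detectable. The sketch below illustrates that principle only; real C2PA Content Credentials use per-device X.509 certificates, public-key signatures, and CBOR manifests, not the shared-secret HMAC and JSON used here for brevity (all names are hypothetical):

```python
import hashlib
import hmac
import json

# Hypothetical device signing key. Real C2PA uses public-key
# cryptography with certificate chains, not a shared secret.
DEVICE_KEY = b"demo-camera-signing-key"

def sign_capture(image_bytes: bytes, metadata: dict) -> dict:
    """Attach a simplified 'content credential' to a capture."""
    manifest = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(
        DEVICE_KEY, payload, hashlib.sha256
    ).hexdigest()
    return manifest

def verify_capture(image_bytes: bytes, manifest: dict) -> bool:
    """Check that the image still matches its signed manifest."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and hashlib.sha256(image_bytes).hexdigest() == claimed["sha256"]
    )

photo = b"raw sensor data"
credential = sign_capture(photo, {"device": "camera", "when": "2025"})
assert verify_capture(photo, credential)                   # untouched image passes
assert not verify_capture(photo + b"edit", credential)     # any edit breaks it
```

The "nutrition label" metaphor maps onto the manifest: it travels with the image and anyone holding the key material can confirm the content has not changed since capture.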
Standards bodies such as NIST emphasize a layered approach, recommending a combination of technical authentication, source verification, and human evaluation rather than any single control.
Law enforcement is also improving its response. Italian police recently froze nearly €1 million from an AI-voice scam. The scam impersonated a high-level government official to extort business leaders.
The fight against AI impostors isn’t about complex software alone. It is about blending smart policy with simple human intuition. These low-tech checks provide a critical and effective layer of defense against digital deception.
Info at your fingertips
What is an AI impostor scam?
It is a fraud where criminals use AI-generated deepfake audio or video to impersonate a trusted executive or colleague. They often aim to trick an employee into authorizing a fraudulent wire transfer.
How does drawing a picture stop a deepfake?
A deepfake is typically generated in advance or rendered from a trained model, so it cannot respond in real time to a unique request for a physical action, such as drawing on paper and showing it to the camera. The failure to comply quickly exposes the fraud.
Are businesses losing money to these scams?
Yes, losses are significant. Reports indicate global losses exceeded $200 million in just the first quarter of 2025. This figure is likely underreported as many companies avoid public disclosure.
What is the “call back” verification method?
If a video call request seems suspicious, employees are trained to hang up. They then immediately call the person back on a known, pre-verified phone number from company records to confirm the request’s legitimacy.
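The key detail in the call-back method is that the incoming caller ID is never trusted: the call-back always goes to the number on record. A minimal sketch, assuming a hypothetical company directory (the roles, numbers, and function name are illustrative):

```python
# Hypothetical directory of pre-verified numbers from company records.
VERIFIED_DIRECTORY = {
    "cfo": "+1-555-0100",
    "it-helpdesk": "+1-555-0101",
}

def callback_number(role: str, incoming_number: str) -> str:
    """Return the number to call back, never the incoming one.

    Caller ID is trivially spoofable, so the incoming number is
    ignored: the call-back must go to the directory entry on record.
    """
    known = VERIFIED_DIRECTORY.get(role)
    if known is None:
        raise LookupError(f"no verified number on file for {role!r}")
    return known

# Even if the suspicious call arrived from an unknown number,
# the employee is routed back to the verified one.
assert callback_number("cfo", "+1-555-9999") == "+1-555-0100"
```

Raising an error when no verified number exists mirrors the training advice: if the request cannot be independently confirmed, it is not acted on.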
Is this only a problem for large corporations?
No, small and medium-sized businesses are also prime targets. They often have fewer security protocols. Any organization that conducts wire transfers is potentially at risk.
Trusted Sources: The Wall Street Journal, Reuters, Associated Press, National Institute of Standards and Technology (NIST)