Slam dance of phishing, deepfakes, and synthetic identities.
Old tricks and new tools are slamming together, creating a witches’ brew that affects billions of people.
Phishing never really went away; it has been evolving. Now generative AI, voice cloning, and synthetic-identity farms are taking old-school social-engineering tactics and making them faster, cheaper, and far more convincing. It has become a slam dance of old and new: punk phishing re-imagined with faster compute and cheaper tools, combined with an ocean of data about almost all of us just sitting out there on the internet. The result is a recipe for military-grade phishing against regular people and high-profile targets alike.
“AI is making old new again, presenting a double set of challenges — you’re fighting old-style tricks and new-style, AI-supercharged social engineering at the same time.” — Alan W Silberberg, Founder of Digijaks Group.
The last few years have shown a clear pattern: adversaries are combining classic pressure-and-trust tactics with AI-generated audio, video, and hyper-personalized text. I picked three major, well-documented incidents that illustrate why synthetic identities and deepfakes together create a dangerously amplified threat.

Three major AI-assisted phishing/deepfake incidents:
1) Arup — HK$200m / ~£20 million deepfake video conference fraud (reported 2024)
In early 2024, engineers at the global firm Arup were tricked during what was presented as a video conference but was in fact populated with AI-generated video and voice impersonating senior officers; the result was a series of large transfers to fraudsters’ accounts, with reported losses of around HK$200m / £20m. The case demonstrates how AI video plus human pressure can bypass normal financial controls when verification procedures aren’t strictly enforced. It should serve as a warning for every boardroom, big or small, for-profit or not.
2) WPP — attempted deepfake executive impersonation (May 2024)
WPP was targeted by fraudsters who set up fake WhatsApp accounts and used voice cloning and video snippets to impersonate executives in virtual meetings. The attempt was detected and stopped, but the episode confirmed that attackers now orchestrate multi-channel social-engineering campaigns, combining WhatsApp, voice, and virtual meetings to impersonate senior leadership.
3) LastPass — thwarted AI voice phishing attempt (April 2024)
LastPass publicly disclosed an attempted voice-cloning (“vishing”) attack in which an employee received WhatsApp messages, missed calls, and a voicemail featuring a cloned voice of the CEO. The employee recognized the anomalies and reported them, and the incident had no operational impact; still, it is a clear example of how accessible voice-cloning tools let attackers scale targeted, business-email-compromise (BEC)-style phishing against enterprises.
Why is this worse than “just another phishing wave”?
These incidents share a combination of dangerous trends:
- Multichannel impersonation — attackers no longer rely on email alone. They combine SMS/WhatsApp, voicemail, video, and email in the same social-engineering flow, increasing credibility.
- AI reduces the craft barrier — models and tools turn publicly available audio and video into convincing clones fast, and script-writing and personalization are automated at scale. This is a key reason agencies and enterprises are seeing more attempted attacks reported.
- Synthetic identities + deepfakes = compounded risk — synthetic identities let attackers create plausible personas (email history, social presence, credit footprint). Deepfakes provide the persuasive veneer (voice/video). Together they let attackers bypass complex enterprise and personal security alike. Industry reporting and government advisories have flagged this combined threat over the last few years.
A “witches’ brew” of synthetic identities + deepfakes.
Think of synthetic identity as the long con: fake names, email history, social profiles, and payment rails that look legitimate to identity checks. Pair that with a real-time deepfake video or voice message that establishes authority and urgency — and you get a scenario where humans, controls and automated systems all fail simultaneously.
- Automated detectors may flag one piece (anomalous email), while a convincing audio clip delivered on a trusted messaging app defeats heuristic checks.
- Humans rely on social cues (a familiar voice, a known executive), and that mental shortcut is exactly what deepfakes exploit.
- Financial controls without strict, out-of-band verification are particularly vulnerable.
Immediate, practical defenses (what Digijaks recommends)
You can’t stop attackers from using new tools — but you can make the intersection of human trust and technical controls much narrower.
- Out-of-band verification for any financial or credential change — always validate critical requests over a second channel (a voice call to a previously verified number, a signed request, or in-person approval). Train staff that “the CEO asked” is never sufficient. This belongs in every policy and playbook; a minimal sketch of such a gate follows this list.
- Assume an always-on, multichannel attack surface — phishing simulations and awareness training must include SMS, WhatsApp, voice, and video scenarios, not just email. Run tabletop exercises with this threat model.
- Stricter channel controls — prohibit business-critical decisions via personal messaging apps; block or monitor attachments and links from non-corporate messaging channels.
- Identity hardening — use multi-factor authentication tied to software and hardware keys for finance and privileged systems; increase friction for account changes; deploy synthetic-identity detection for new-account onboarding (a toy scoring sketch also appears after this list).
- Deepfake detection + human training — layered detection (AI detectors + metadata checks) combined with continuous, realistic training where employees learn to spot pressure tactics and odd channels.
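To make the out-of-band rule concrete, here is a minimal Python sketch of a payment-release gate. Everything in it is hypothetical and illustrative: the VERIFIED_CALLBACKS directory, the TransferRequest fields, and the release_funds check stand in for whatever approval system you actually run. The one load-bearing idea is that the callback number comes from records captured at onboarding, never from the request itself.

```python
from dataclasses import dataclass

# Hypothetical directory of callback numbers verified at onboarding.
# The key point: the number comes from our own records, never from the request.
VERIFIED_CALLBACKS = {
    "cfo@example.com": "+1-555-0100",
}

@dataclass
class TransferRequest:
    claimed_requester: str        # who the inbound message says it is from
    inbound_channel: str          # "email", "whatsapp", "video_call", ...
    amount_usd: float
    out_of_band_confirmed: bool = False  # set by a human only after calling back

def release_funds(req: TransferRequest) -> bool:
    """Refuse any transfer that has not been confirmed on a second channel.

    A cloned voice on the inbound call, or a deepfake face in the meeting,
    never satisfies this check; only a callback to a number we already
    hold on file for the claimed requester does.
    """
    if req.claimed_requester not in VERIFIED_CALLBACKS:
        print(f"REJECT: no verified callback on file for {req.claimed_requester}")
        return False
    if not req.out_of_band_confirmed:
        number = VERIFIED_CALLBACKS[req.claimed_requester]
        print(f"HOLD: call {number} to confirm before releasing ${req.amount_usd:,.2f}")
        return False
    print(f"RELEASE: ${req.amount_usd:,.2f} confirmed out of band")
    return True

# A convincing "CEO" on a video call is not, by itself, enough to move money.
demo = TransferRequest("cfo@example.com", "video_call", 250_000.0)
release_funds(demo)                 # -> HOLD, awaiting callback
demo.out_of_band_confirmed = True   # flipped only after the callback succeeds
release_funds(demo)                 # -> RELEASE
```

Note the design choice: the inbound channel can open a request but can never close it. Only the human callback flips the confirmation flag, which is exactly the step a deepfake cannot fake.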
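And for the synthetic-identity detection point, a toy, uncalibrated risk score, again a sketch under stated assumptions: the Applicant fields presume you have already run WHOIS, carrier, and credit-file lookups, and the weights and cutoffs are invented for illustration, not taken from any vendor.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    email_domain_age_days: int   # from a WHOIS lookup, assumed pre-fetched
    phone_is_voip: bool          # from a carrier lookup
    credit_file_age_days: int    # thin, new credit files mark synthetic identities
    address_shared_count: int    # applicants already on file at this address

def synthetic_identity_score(a: Applicant) -> int:
    """Additive toy risk score: higher means more manual review before approval.
    Weights and cutoffs are illustrative, not calibrated against real fraud data."""
    score = 0
    if a.email_domain_age_days < 90:
        score += 2   # freshly registered email domain
    if a.phone_is_voip:
        score += 2   # disposable VoIP numbers are common in identity farms
    if a.credit_file_age_days < 180:
        score += 3   # synthetic personas rarely have aged credit histories
    if a.address_shared_count > 3:
        score += 2   # many personas clustered on one address
    return score

applicant = Applicant(email_domain_age_days=30, phone_is_voip=True,
                      credit_file_age_days=60, address_shared_count=5)
if synthetic_identity_score(applicant) >= 5:
    print("Route to manual identity review")   # this example scores 9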
A final thought.
AI hasn’t invented social engineering — it has industrialized it. That means organizations must combine better tech with smarter human processes. Treat every unexpected channel and every “urgent” ask as suspect, and design your policies to force attackers to fail more often than they succeed.

