Deepfakes and Cyber Security
Looking at this year's news, I was struck to learn that, according to multiple cyber security reports, 2025 has already seen hundreds of deepfake incidents across the globe, with losses running into hundreds of millions of dollars. Shared deepfake files are projected to grow from 500,000 in 2023 to 8 million by 2025 (DeepStrike). While deepfakes started as entertaining experiments in AI, today they are one of the most disruptive forces in cyber security.
Among these incidents, two stand out:
The Most Shocking Case: Arup’s $25 Million Scam in Hong Kong
In early 2024, an employee at the multinational engineering giant Arup attended what seemed like a routine video call with senior executives, including the CFO. During the call, he was instructed to transfer over US$25.6 million to external accounts. The horrifying twist: every other participant on that call was a deepfake avatar, their faces and voices generated by AI (CNN). This case shattered assumptions about digital trust, proving that even video conferencing is no longer safe ground.
The Biggest Systemic Impact: $200M+ Global Corporate Fraud Wave
Beyond one-off cases, deepfake-driven corporate fraud has become a global cybercrime epidemic. In just the first quarter of 2025, businesses collectively lost more than US$200 million to deepfake scams. Attackers impersonated CEOs, cloned voices, and faked entire video calls to authorize fraudulent transfers (eSecurityPlanet). Analysts have called it the fastest-growing financial cyberattack trend ever, with attempts surging by over 1,700% compared to 2022.
Why Deepfakes Are So Dangerous
- Hyper-Realism: AI now creates videos and voices that are nearly indistinguishable from those of a real person.
- Accessibility: Tools are now available online, often at no cost or a low cost.
- Exploiting Trust: We're conditioned to believe what we see and hear, especially from familiar faces or voices.
- Scalable Attacks: One scammer can now target thousands of victims with convincing fake media.
Causes Behind the Deepfake Explosion
- AI democratisation: Open-source models and tools are widely available.
- Weak digital verification: Few companies have robust systems to verify voices or videos.
- Slow regulation: Laws have only recently begun to catch up.
- High ROI for criminals: The potential payoff from one successful deepfake scam is massive.
How to Minimise the Losses
1. For Businesses
- Multi-Channel Verification: Never authorize financial transfers based solely on video or voice instructions. Confirm through a separate channel, such as a callback to a known number.
- Employee Training: Train all employees to recognise deepfake-enabled fraud, using real cases like those above.
- Deepfake Detection Tools: Deploy AI-driven detection systems such as Microsoft Video Authenticator or enterprise-grade forensic tools.
- Incident Response Plans: Prepare protocols specifically for synthetic media threats.
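To make the multi-channel verification idea concrete, here is a minimal sketch of an approval check that refuses a transfer unless it has been confirmed on at least two independent, pre-registered channels. All names, channel labels, and thresholds are illustrative assumptions, not any particular company's policy.

```python
# Hypothetical policy: a transfer is approved only when it has been
# confirmed over at least two independent channels (e.g. a video call
# plus a callback to a number on file). Channel names are illustrative.

REQUIRED_CHANNELS = {"video_call", "phone_callback", "signed_email"}
MIN_INDEPENDENT_CONFIRMATIONS = 2

def approve_transfer(request: dict) -> bool:
    """Return True only if the request was confirmed on enough
    independent, pre-registered channels."""
    confirmed = {c for c in request.get("confirmations", [])
                 if c in REQUIRED_CHANNELS}
    return len(confirmed) >= MIN_INDEPENDENT_CONFIRMATIONS

# A video call alone -- exactly the Arup scenario -- is not enough:
print(approve_transfer({"confirmations": ["video_call"]}))                    # False
print(approve_transfer({"confirmations": ["video_call", "phone_callback"]}))  # True
```

The point of the design is that a deepfake only compromises one channel; the attacker would also have to control the callback number or signing key registered in advance.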
2. For Individuals
- Don't trust viral videos or voice notes without context.
- Limit oversharing of personal images and voice recordings online.
- If approached with suspicious "celebrity" endorsements or investment schemes, verify through official sources.
3. For Governments & Platforms
- Regulatory Frameworks: Enforce laws against non-consensual deepfakes and fraud.
- Content Moderation: Social platforms must invest in real-time detection and quick takedowns.
- Awareness Campaigns: Public education is key to preventing further losses.
Can the Problem Ever Be Fixed?
The reality is that deepfakes aren't going away. The technology behind them is also powering real innovation across many fields. The challenge isn't to "ban" deepfakes but to:
- Detect them faster
- Educate users better
- Build layered defenses that combine human analysis with AI-driven verification
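The layered-defense idea above can be sketched as a simple triage rule: combine an automated detector's score with the out-of-band verification status, and escalate to a human analyst whenever the signals disagree. The function name, score scale, and thresholds are assumptions for illustration, not a specific product's behaviour.

```python
# Hypothetical layered triage: an automated detector score plus an
# out-of-band verification flag decide the outcome; ambiguous cases
# are escalated to a human reviewer. Thresholds are illustrative.

def triage(media_score: float, verified_out_of_band: bool) -> str:
    """media_score: 0.0 (likely authentic) .. 1.0 (likely synthetic)."""
    if verified_out_of_band and media_score < 0.3:
        return "allow"
    if media_score > 0.8 and not verified_out_of_band:
        return "block"
    return "human_review"  # disagreement or middling score -> analyst

print(triage(0.1, True))    # allow
print(triage(0.9, False))   # block
print(triage(0.5, False))   # human_review
```

No single layer is trusted on its own: the detector can be fooled, and so can a human, but an attacker has to beat both plus the verification channel simultaneously.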
Just as spam email and phishing never disappeared but became manageable through filters and training, deepfakes will demand a similar balance of technology, policy, and awareness.