CeFPro Connect

AI Deepfakes Top Key Financial Crime Threat Poll
A new industry survey shows artificial intelligence and deepfakes are overwhelming traditional financial crime controls. Compliance teams warn that outdated systems, fragmented regulation, and fast-evolving criminal tactics are eroding trust and stretching defenses beyond their limits.
Feb 09, 2026
Tags: Financial Crime Industry News
The views and opinions expressed in this content are those of the thought leader as an individual and are not attributed to CeFPro or any other organization.
  • AI and deepfakes are driving a trust crisis in financial crime prevention
  • Three-quarters of compliance professionals rate AI as a high or very high risk
  • Fraud-as-a-service is lowering the barrier to entry for sophisticated criminal activity
  • Traditional identity checks are no longer reliable on their own
  • Legacy data and IT systems are weakening AI-driven defenses
  • Fragmented regulation is creating opportunities for criminal exploitation

A new report published this week reveals a growing crisis of trust at the heart of the global financial system, as artificial intelligence and deepfake technology rapidly outpace the defenses designed to prevent fraud, money laundering and other financial crimes.

According to the Global AFC Threats Report 2026 from the Association of Certified Anti-Money Laundering Specialists, investigators are facing an increasingly hostile operating environment in which criminals are exploiting advanced technology, geopolitical instability and regulatory fragmentation to stay ahead of compliance teams.

The report is based on a global survey of more than 1,400 compliance professionals, most of them working in banks, insurers and other financial institutions.

It paints a picture of anti-financial crime teams struggling to adapt as baseline expectations for detection and prevention rise sharply.

“What people were doing two years ago is not going to be sufficient for what they need to be doing in two years’ time,” said Justine Walker, executive vice president of thought leadership at ACAMS, who warned that the pace of change is forcing organizations to rethink long established controls.

For the third year in a row, respondents ranked the malicious use of generative AI as the single most significant risk facing financial crime teams.

Three-quarters of those surveyed said AI now poses a high or very high risk, as criminals use the technology to automate scams, impersonate individuals, and create convincing false identities at scale.

The report highlights the emergence of fraud-as-a-service platforms, where advanced AI tools are packaged and sold to less sophisticated criminals.

This has lowered the barrier to entry for complex scams and made it harder for institutions to distinguish genuine customer activity from carefully engineered deception.

As a result, anti-financial crime resources are being redirected. Eighty-four percent of respondents said their primary focus is now on fighting scams and fraud against individuals, reflecting the surge in consumer-level attacks enabled by AI-driven impersonation.

AI-powered identity fraud was ranked as the second-biggest threat. Deepfakes, holograms, and synthetic media are steadily eroding confidence in traditional identity verification methods.

Biometric checks and document security features that once formed the backbone of due diligence are no longer reliable on their own.

Walker said that checking documents such as passports or bank statements is increasingly ineffective, describing the deepfake challenge as one that makes traditional verification processes “essentially useless” without additional layers of intelligence.

In response, institutions are experimenting with new detection techniques, including behavioral pattern analysis, device intelligence, and contextual risk signals.

Many are also turning to AI to fight AI-driven crime, with more than half of organizations saying they are already piloting AI-based tools.

However, the report warns that legacy infrastructure remains a critical weakness. Without unified, high-quality datasets, AI models are prone to bias and false positives, placing further strain on already stretched compliance teams.

More than half of respondents said outdated data and legacy IT systems represent a high or very high risk to their anti-financial crime programs.

The challenges are compounded by a lack of regulatory cohesion. Most respondents expect significant changes to AI and cryptocurrency regulation within the next year, creating uncertainty at the same time as criminal networks exploit gaps between jurisdictions.

ACAMS also points to the rapid digitization of underground banking networks, including technologically advanced hawala systems using encrypted messaging and crypto assets.

These networks have adapted faster than many financial institutions, exploiting regulatory weaknesses and skills shortages.

“There is no global agreed approach for how to respond,” Walker said, adding that resilience, rooted in modern data architecture and adaptive systems, will define which institutions can withstand the disruption ahead.
