- Regulators worldwide move to shape AI use in financial services
- EU leads with AI Act, setting strict compliance framework
- US and China weigh innovation against tighter controls
- UK pursues flexible model compared to EU rules-heavy approach
- Southeast Asia, Australia, Italy, and Canada adopt hybrid strategies
- US state-level initiatives add to complexity
- Regulators demand validation and auditability of AI-driven systems
- RegTechs help bridge trust gap with transparency tools
- Education and collaboration key to responsible AI adoption
- Firms must balance innovation with compliance in global markets
Financial regulators across the world are accelerating efforts to shape the use of artificial intelligence in banking, aiming to capture the benefits of digital transformation while addressing rising risks in compliance and fraud.
Charmian Simmons, FinCrime and Compliance Expert at SymphonyAI, said financial institutions are under growing pressure to keep pace with diverging regulatory frameworks that are emerging across major jurisdictions.
The European Union has taken the lead with its landmark AI Act, the most comprehensive framework yet for governing AI in financial services.
The legislation sets strict requirements for high-risk applications of the technology, including areas such as fraud detection, customer onboarding, and risk management.
Other markets are adopting different strategies. The United States and China are weighing how to balance innovation against stricter controls, while the United Kingdom is pursuing a more flexible model than the EU’s rules-heavy approach.
This divergence is creating uncertainty for financial firms as they attempt to deploy AI responsibly across multiple regions.
Simmons pointed out that the variety of approaches reflects deeper questions about governance. Some regulators are leaning toward principles-based models, while others emphasize prescriptive rulebooks.
In Southeast Asia, Australia, Italy, and Canada, regulators are adopting hybrid frameworks, and in the United States, state-level initiatives are beginning to emerge as federal authorities debate broader oversight.
The global conversation also highlights differing levels of AI adoption within financial services. Simmons noted that regulators are closely monitoring how AI is being applied to fraud detection, compliance checks, and client monitoring.
Growing attention is being paid to the need for proactive validation and auditability of AI-driven systems, ensuring that decisions remain explainable and accountable.
Regulatory expectations are not confined to rule-making alone. Simmons emphasized the importance of education, collaboration, and co-creation between regulators, financial institutions, and technology providers.
RegTech firms, she said, play a crucial role in bridging the trust gap by offering tools that make AI applications more transparent and auditable.
Responsible AI is becoming the guiding principle across jurisdictions. Regulators are increasingly focused on ensuring AI is deployed in ways that protect customers, maintain fair outcomes, and reinforce trust in financial markets.
For firms, that means embedding ethical standards into AI systems, alongside the technical safeguards needed for compliance.
As regulators move at different speeds, financial firms are left navigating a patchwork of expectations. Simmons warned that the challenge will only intensify as global oversight expands.
For institutions, the race is no longer just to innovate with AI, but to prove they can do so responsibly under the scrutiny of regulators worldwide.