- Explores the problems financial institutions are trying to solve by introducing AI into risk modelling.
- Examines how firms are addressing explainability and governance challenges when deploying AI within model risk frameworks.
- Highlights how supervisory expectations around AI are evolving.
- Discusses how risk teams use automation and predictive AI without weakening human oversight.
Ahead of his extended interactive session at Risk Evolve 2026, we spoke with Ratul Ahmed to explore AI-driven risk modelling and supervisory alignment.
What problem are financial institutions most urgently trying to solve by introducing AI into risk modelling today?
Financial institutions are racing to use AI to drive efficiency and innovation, but our immediate struggle is how to scale safely and quickly under tightening and fragmented rules. It's all about balancing speed, trust, and compliance in a rapidly evolving regulatory landscape, especially as AI permeates processes beyond traditional risk and pricing models.
How are firms practically addressing explainability and governance challenges when deploying AI within model risk frameworks?
In several ways:
- Reviewing the 3LoD (three lines of defence) framework to assess whether it still functions at today's operational velocity
- Deploying risk-based tiering
- Applying enterprise gating, ensuring shared services scale with control
- Providing a clear definition of AI across the organisation and ensuring that AI systems can be differentiated from AI use cases and AI models (a simplified sketch of how these ideas combine follows this list)
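To make risk-based tiering and the system/use-case/model distinction concrete, here is a minimal illustrative sketch in Python. It is an editorial illustration rather than the speaker's implementation; every name, field, and tiering rule below is a hypothetical assumption.

```python
from dataclasses import dataclass
from enum import Enum

class ArtefactKind(Enum):
    AI_SYSTEM = "ai_system"      # deployed platform, e.g. a document-processing pipeline
    AI_USE_CASE = "ai_use_case"  # business application, e.g. credit pre-screening
    AI_MODEL = "ai_model"        # quantitative component, e.g. a default-probability classifier

class RiskTier(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class InventoryRecord:
    name: str
    kind: ArtefactKind
    affects_customers: bool         # drives decisions about individual customers
    capital_or_credit_impact: bool  # feeds capital, credit, or pricing decisions
    autonomous: bool                # acts without a human reviewing each output

def assign_tier(record: InventoryRecord) -> RiskTier:
    """Hypothetical risk-based tiering rule: the more decision impact
    and autonomy an artefact has, the higher its tier (and the heavier
    the controls applied to it)."""
    if record.capital_or_credit_impact:
        return RiskTier.HIGH
    if record.affects_customers or record.autonomous:
        return RiskTier.MEDIUM
    return RiskTier.LOW

# Example: a credit-scoring model lands in the highest tier.
scorer = InventoryRecord("pd_classifier_v3", ArtefactKind.AI_MODEL,
                         affects_customers=True,
                         capital_or_credit_impact=True,
                         autonomous=False)
print(assign_tier(scorer))  # RiskTier.HIGH
```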
From your perspective, how are supervisory expectations around AI evolving — and where do institutions most commonly misjudge them?
Regulatory expectations differ across geographies, and there is fragmentation across regimes. Institutions most commonly misjudge them by:
- Conflating AI systems with models
- Underestimating explainability requirements, especially for capital and credit decisions
- Leaving inventory and traceability gaps
How can risk teams use automation and predictive AI without weakening human oversight or accountability in decision-making?
Risk teams can use automation safely when AI accelerates decisions but never obscures who is accountable, supported by dual-loop governance, explainability, explicit human decision rights, and a traceable control architecture.
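As one editorial illustration of explicit human decision rights and a traceable control architecture (not the speaker's implementation), the sketch below binds each AI-assisted decision to a named human owner before it can be finalised; all field names and checks are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """Traceable record binding an AI-assisted decision to a human owner."""
    decision_id: str
    model_output: str       # what the AI recommended
    explanation: str        # why, in terms a reviewer can challenge
    accountable_owner: str  # named human who owns the outcome
    human_approved: bool = False
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def finalise(record: AIDecisionRecord) -> AIDecisionRecord:
    """Gate: no decision is finalised without a named, approving human."""
    if not record.accountable_owner:
        raise ValueError("no accountable human owner recorded")
    if not record.human_approved:
        raise ValueError("human decision right not exercised")
    return record

# Example: the AI recommends, the named credit officer decides.
rec = AIDecisionRecord(
    decision_id="LOAN-2026-0042",
    model_output="approve",
    explanation="low predicted default risk; stable income history",
    accountable_owner="credit.officer@bank.example",
)
rec.human_approved = True
finalise(rec)  # passes only because a human has signed off
```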
What one insight or practical takeaway do you hope participants will leave this workshop with?
If there’s one thing I want participants to leave this workshop with, it is this:
Clarity of intent is the real foundation of safe and effective AI.
Before choosing tools, architectures, or controls, teams must be able to articulate — with precision — what decision is being supported, what risk is being introduced, and what accountability must remain human.
AI doesn’t fail because the technology is immature; it fails when the purpose is unclear, the governance boundary is undefined, or accountability becomes blurred.
When organisations start with clarity of intent:
- AI becomes easier to govern, audit, and scale
- Human decision rights remain protected
- Risk and value can be balanced without friction
- Regulatory expectations become far easier to meet
- Teams can innovate confidently rather than defensively
In short: define the “why” before the “how”, and remember that we as model risk practitioners can be part and parcel of providing that clarity.