Why Explainability is Not the Problem in Machine Learning Risk
Joe Breeden challenges the widespread focus on explainability in AI risk management, arguing that the real issue lies in predictive uncertainty. Explainability is only valuable when model outputs are reliable, yet most institutions fail to measure confidence in predictions. This can lead to misplaced trust in both outputs and explanations. The article advocates for a shift toward embedding uncertainty into decision-making frameworks and implementing fallback mechanisms when predictions are weak. Ultimately, effective risk management depends on understanding when models should be trusted, not just how they work.
Apr 24, 2026
Joe Breeden, Founder and Chief Executive, Deep Future Analytics
Tags:
AI and Technology (including Fintech)
The views and opinions expressed in this content are those of the thought leader as an individual and are not attributed to CeFPro or any other organization.
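The summary's point that institutions rarely measure confidence in their predictions can be made concrete. Below is a minimal sketch, assuming a generic scoring problem, of one common way to attach a per-case uncertainty estimate to a model: refit the same model class on bootstrap resamples and read the spread of the ensemble's predictions as the confidence signal. The data, model class, and ensemble size are illustrative placeholders, not Breeden's method.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.utils import resample

rng = np.random.default_rng(0)

# Toy data standing in for a generic scoring problem: 3 features, noisy target.
X = rng.normal(size=(500, 3))
y = X @ np.array([0.8, -0.5, 0.3]) + rng.normal(scale=0.5, size=500)

# Refit the same model class on 20 bootstrap resamples of the training set.
ensemble = [
    GradientBoostingRegressor(random_state=seed).fit(*resample(X, y, random_state=seed))
    for seed in range(20)
]

def predict_with_uncertainty(x_new, models=ensemble):
    """Return (mean prediction, ensemble spread) for a single case."""
    preds = np.array([m.predict(x_new.reshape(1, -1))[0] for m in models])
    return preds.mean(), preds.std()

score, spread = predict_with_uncertainty(rng.normal(size=3))
print(f"score={score:.2f}  uncertainty={spread:.2f}")
```

A bootstrap ensemble is only one route to a spread; Bayesian posteriors, quantile models, or conformal intervals fill the same role of quantifying how much an individual output should be trusted.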
Key takeaways:
- Explainability is not the core issue
- Predictive uncertainty is often ignored
- Confidence in outputs is critical
- Fallback mechanisms are necessary (see the sketch after this list)
- Governance must shift focus
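The fallback takeaway can likewise be sketched as a decision gate: act on the model only when its uncertainty is within tolerance, and route weak predictions to a conservative path instead. The threshold, action labels, and sign convention below are hypothetical assumptions for illustration, not a prescribed policy; `ensemble` is any list of fitted models, such as the one built in the previous sketch.

```python
import numpy as np

UNCERTAINTY_LIMIT = 0.4  # hypothetical tolerance, calibrated on validation data

def decide(x_new, ensemble):
    """Act on the model only when its prediction is reliable enough."""
    preds = np.array([m.predict(x_new.reshape(1, -1))[0] for m in ensemble])
    score, spread = preds.mean(), preds.std()
    if spread > UNCERTAINTY_LIMIT:
        # Weak prediction: neither the output nor any explanation of it
        # should be trusted, so fall back to a conservative route.
        return {"action": "refer_to_manual_review", "score": score, "spread": spread}
    # Confident prediction: the automated decision rests on an output
    # we have measured reason to trust.
    return {"action": "approve" if score > 0 else "decline",
            "score": score, "spread": spread}

# Usage with the ensemble from the previous sketch:
#   decide(rng.normal(size=3), ensemble)
```

The governance shift the article calls for amounts to making a gate like this part of the decision framework, rather than treating explainability tooling as the whole of model risk management.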