Why Explainability is Not the Problem in Machine Learning Risk
Joe Breeden challenges the widespread focus on explainability in AI risk management, arguing that the real issue lies in predictive uncertainty. Explainability is only valuable when model outputs are reliable, yet most institutions fail to measure confidence in predictions. This can lead to misplaced trust in both outputs and explanations. The article advocates for a shift toward embedding uncertainty into decision-making frameworks and implementing fallback mechanisms when predictions are weak. Ultimately, effective risk management depends on understanding when models should be trusted, not just how they work.
Apr 24, 2026
Joe Breeden, Founder and Chief Executive, Deep Future Analytics
Tags: AI and Technology (including Fintech)
The views and opinions expressed in this content are those of the thought leader as an individual and are not attributed to CeFPro or any other organization
  • Explainability is not the core issue
  • Predictive uncertainty is often ignored
  • Confidence in outputs is critical
  • Fallback mechanisms are necessary (see the sketch after this list)
  • Governance must shift focus
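The shift the article calls for, measuring confidence in predictions and falling back when it is low, can be made concrete with a small sketch. The Python example below is not Breeden's method; it uses disagreement across a random-forest ensemble as one rough proxy for predictive uncertainty and routes high-uncertainty cases to a hypothetical fallback such as manual review. The data, threshold, and fallback rule are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a binary risk dataset (e.g., default / no default).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Spread of per-tree probabilities as a rough proxy for predictive uncertainty:
# when the trees disagree, confidence in the pooled prediction is low.
tree_probs = np.stack([t.predict_proba(X_test)[:, 1] for t in model.estimators_])
mean_prob = tree_probs.mean(axis=0)
uncertainty = tree_probs.std(axis=0)

# Hypothetical cutoff; in practice it would be calibrated on validation data.
UNCERTAINTY_THRESHOLD = 0.30

for p, u in zip(mean_prob[:5], uncertainty[:5]):
    if u > UNCERTAINTY_THRESHOLD:
        decision = "fallback: manual review / conservative default"
    else:
        decision = "accept model prediction"
    print(f"p={p:.2f}  uncertainty={u:.2f}  -> {decision}")
```

Ensemble disagreement is only one possible stand-in for predictive uncertainty (conformal prediction or Bayesian methods are alternatives); the point of the sketch is the decision rule itself: the model's output is acted on only when its confidence can be demonstrated.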