- Examines the practical impact of the OECD AI Principles on MRM functions
- Explores how firms can integrate AI and traditional models within one governance framework
- Discusses metrics for explainability, fairness, bias, and safety
- Highlights the shift toward continuous validation and lifecycle oversight
- Looks ahead to evolving regulatory expectations and cross-functional governance
Ahead of Advanced Model Risk Europe, we spoke with Odile Hounkpatin about how responsible AI principles are reshaping model risk management. As AI and generative AI models become embedded in critical decision-making, institutions are being challenged to expand governance frameworks, enhance validation practices, and embed fairness, explainability, and safety into everyday model oversight.
The OECD AI Principles are increasingly referenced in discussions around responsible AI. From a model risk perspective, what do these principles practically mean for how MRM functions operate today?
The OECD AI Principles are increasingly being used as a guiding framework for responsible AI risk management, and are driving several meaningful changes in how model risk management (MRM) functions operate today.
One of the most notable shifts is that MRM is becoming increasingly cross-functional. Alongside traditional model risk teams, legal, procurement, IT, and compliance functions are now actively involved in model oversight, reflecting the broader reality that AI/GenAI models have impacts that extend well beyond their immediate outputs.
Model inventories are also evolving. Organisations must now track not just the models they use, but critically, the context in which those models are deployed and the decisions they inform.
Validation and monitoring practices are similarly transforming, moving away from a linear approach of pre-deployment validation followed by post-deployment monitoring, towards a model of continuous validation and monitoring throughout the full model lifecycle.
Finally, traditional actors such as model validators and control functions are operating with an extended remit, which now includes assessing compliance with the Principles and providing assurance that goes beyond classical performance metrics.
In summary, the OECD AI Principles are reshaping MRM from a siloed, model-focused activity into a broader, enterprise-wide oversight function, ensuring that AI adoption is safe, ethical, and aligned with both regulatory expectations and the responsible safeguarding of societal welfare.
As AI and GenAI models become more embedded in decision-making, how can firms build an integrated MRM framework that consistently covers both traditional models and newer AI-driven approaches?
The key pillars of MRM - risk identification, governance, development, validation, and monitoring - remain foundational, meaning firms can build an integrated framework by extending existing structures to accommodate new challenges such as generative outputs, and organisational and broader societal risk considerations.
Framework scope, policies, and processes must be expanded to explicitly cover AI/GenAI models alongside traditional quantitative models. To support this, firms should consider establishing a dedicated cross-functional AI governance committee with clearly defined accountability, ensuring that decision-making, risk ownership, and escalation paths are transparent and enforceable.
The model inventory can similarly be extended to capture not only model type and owner, but also context, intended use, and operational dependencies, ensuring that both traditional and AI models are accounted for, risk-assessed, and appropriately governed.
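As a minimal sketch of what such an extended inventory record might capture, the following dataclass is purely illustrative: the field names and example values are assumptions for this article, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class InventoryRecord:
    """Illustrative extended model-inventory entry (field names are assumptions)."""
    model_id: str
    owner: str
    model_type: str            # e.g. "traditional", "ML", "GenAI"
    intended_use: str          # the decision the model informs
    deployment_context: str    # where and how its outputs are consumed
    dependencies: list = field(default_factory=list)  # upstream data and systems

# Hypothetical record for a GenAI model used with human-in-the-loop review.
record = InventoryRecord(
    model_id="MDL-001",
    owner="Credit Risk",
    model_type="GenAI",
    intended_use="Drafting customer correspondence",
    deployment_context="Human-in-the-loop review before send",
    dependencies=["foundation model API", "CRM data feed"],
)
```

Capturing context and dependencies alongside type and owner is what allows both traditional and AI models to be risk-assessed against how they are actually used.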
Validation metrics and monitoring tools should also be embedded throughout the model lifecycle to enable continuous oversight of performance, fairness, robustness, and alignment with intended use.
In summary, an integrated MRM framework should blend traditional rigour with AI/GenAI specific oversight, enabling organisations to manage risk consistently, maintain accountability, and build trust across all decision-making processes, whether driven by conventional or AI-based models.
Explainability, fairness, bias, and safety are central to responsible AI. What metrics or indicators do you see as most effective for assessing these dimensions within AI and GenAI models?
Explainability, fairness, bias, and safety are indeed central to responsible AI, and assessing these dimensions effectively requires a combination of quantitative metrics, qualitative evaluation, and ongoing monitoring.
Explainability can be measured using tools such as SHAP and LIME, which quantify the contribution of each input to a model output, helping stakeholders understand model behaviour and decision logic. Attention maps and saliency scores are also useful in this space, particularly for deep learning models, as they highlight which features or data points most influenced a given prediction.
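For intuition on what attribution tools of this kind report, the linear case has a closed form: a feature's contribution is its coefficient times the feature's deviation from the training mean, and contributions sum to the gap between the prediction and the average prediction. The weights and means below are hypothetical, not from any model discussed in the interview.

```python
import numpy as np

# Hypothetical fitted coefficients and training-set feature means.
weights = np.array([0.8, -0.5, 0.3])
X_train_mean = np.array([0.2, 1.0, -0.4])

def linear_attributions(x, weights, baseline_mean):
    """Per-feature contribution to the deviation from the average prediction.

    For a linear model with independent features this matches the additive
    attributions that tools such as SHAP report.
    """
    return weights * (x - baseline_mean)

x = np.array([1.2, 0.5, -0.4])
contrib = linear_attributions(x, weights, X_train_mean)
# Contributions sum to (prediction minus average prediction).
```

The additivity property is what makes such scores useful for stakeholders: every unit of deviation in the output is assigned to a specific input.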
Fairness and bias can be assessed through metrics such as equal opportunity and equalised odds, which evaluate whether error rates are consistent across demographic groups, ensuring that AI models do not systematically disadvantage specific populations. Demographic parity, which checks whether positive-outcome rates are proportionally consistent across groups, and disaggregated performance analysis across subgroups are also valuable tools for surfacing hidden disparities that aggregate metrics might obscure.
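The two gaps mentioned above can be computed directly from predictions, labels, and group membership. The data below is a synthetic illustration, not drawn from the interview.

```python
import numpy as np

# Illustrative data: 8 predictions across two demographic groups (0 and 1).
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 0, 1, 1, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-outcome rates between the two groups."""
    r0 = y_pred[group == 0].mean()
    r1 = y_pred[group == 1].mean()
    return float(abs(r0 - r1))

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates (recall) between the groups."""
    tpr0 = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr1 = y_pred[(group == 1) & (y_true == 1)].mean()
    return float(abs(tpr0 - tpr1))
```

A gap of zero on both metrics indicates parity on these two dimensions; in practice firms set tolerance thresholds rather than demanding exact equality.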
Safety can be evaluated through adversarial testing, which assesses model robustness against manipulated or intentionally harmful inputs, ensuring reliable behaviour under real-world conditions. Stress testing and out-of-distribution detection further complement this by probing model performance under extreme or unexpected scenarios. Uncertainty quantification, i.e. measuring how confidently a model makes predictions, is also increasingly recognised as a key safety indicator, as overconfident models in high-stakes settings can pose significant risks.
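One simple uncertainty-quantification indicator of the kind described is the entropy of a classifier's predicted distribution: a sharply peaked distribution is confident, a near-uniform one is not. This is a generic sketch, not a method attributed to the interviewee.

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy (in nats) of a predicted class distribution.

    Higher entropy means the model is less certain; flagging high-entropy
    predictions for human review is one basic safety control.
    """
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

confident = predictive_entropy([0.98, 0.01, 0.01])  # sharply peaked
uncertain = predictive_entropy([0.34, 0.33, 0.33])  # near-uniform
```

Routing predictions above an entropy threshold to human review is one way to operationalise the point that overconfident models in high-stakes settings pose risks.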
Ultimately, effective assessment of responsible AI must combine these quantitative indicators with qualitative evaluation and human oversight, ensuring that as models evolve, organisations maintain trust, compliance, and ethical standards.
Looking ahead, how do you expect model validation activities and governance processes to evolve as regulatory expectations around ethical and responsible AI continue to mature?
Regulatory expectations around responsible AI are evolving rapidly, and what was once guidance is increasingly becoming structured obligation. This shift will fundamentally change how model validation and AI governance operate within institutions.
The scale and sensitivity of AI and generative AI risks demand cross-functional governance. Organisations are responding by establishing dedicated AI governance committees that bring together all relevant stakeholders, from risk and compliance to legal, technology, and ethics functions. Lifecycle oversight is also deepening, with continuous monitoring, ongoing bias and drift assessment, real-time oversight for harmful outputs, and formal incident response mechanisms now becoming standard expectations. Governance, in short, is becoming enterprise-wide, more dynamic, and more adaptive.
Validation is similarly evolving, moving from retrospective testing to forward-looking assurance. Beyond checking performance after deployment, validation will increasingly involve scenario-based testing, bias analysis, robustness assessments, and documented evidence that ethical risks were identified, considered, and mitigated at the design and development stage.
A further important shift is the move from accuracy metrics to impact metrics. Error rates are no longer sufficient. Fairness, explainability, robustness, and real-world impact are becoming measurable and monitored criteria, complemented by drift detection, ongoing performance monitoring, documented change management, and traceable audit trails.
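Drift detection of the kind mentioned is often operationalised with the Population Stability Index (PSI), which compares a feature's current distribution against a reference sample. The implementation and thresholds below are a common-practice sketch, not a prescription from the interview.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference sample and a current sample.

    A frequently cited rule of thumb: <0.1 stable, 0.1-0.25 moderate shift,
    >0.25 significant shift warranting investigation.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # floor to avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Synthetic illustration: a stable feed vs. one whose mean has drifted.
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 10_000)
stable    = rng.normal(0.0, 1.0, 10_000)
drifted   = rng.normal(0.5, 1.0, 10_000)
```

Logging the PSI of key inputs on each monitoring cycle gives exactly the kind of traceable, ongoing evidence trail the shift to impact metrics demands.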
The broader direction of travel is clear: the ethical and responsible AI dimension is extending and deepening the scope of MRM, transforming governance from a technical control function into a broader accountability framework, and model validation from a technical checkpoint into a strategic assurance capability that is central to trust, compliance, and the sustainable scaling of AI.
Why are industry forums like Advanced Model Risk Europe important for helping practitioners tackle these emerging AI governance and model risk challenges, and for translating high-level principles into practical execution?
Industry forums such as Advanced Model Risk Europe play a critical role in helping practitioners navigate the emerging challenges of AI governance and model risk management.
First, they provide a platform for the exchange of best practices, enabling participants to learn directly from each other's experiences. Practitioners can explore how peers are translating high-level AI principles into practical MRM frameworks, policies, and controls, bridging the gap between regulatory intent and operational reality.
Second, they create a space where new ideas can be assessed, challenged, and refined through constructive dialogue. Presentations, case studies, and open discussions give participants the opportunity to critically examine different approaches, identify gaps in their own frameworks, and take away lessons that are immediately applicable.
Third, the diversity of participants - practitioners, regulators, researchers, vendors, and independent consultants - ensures a richness of perspective that few other settings can replicate. This complementary expertise deepens discussions, accelerates learning, and drives meaningful improvements in risk management practice. When insights from multiple viewpoints are brought together, the industry as a whole benefits from more robust, effective, and grounded approaches to AI and model risk.
In short, forums like these are far more than networking events: they are engines for collective learning and innovation, and play an important role in accelerating the translation of high-level principles into actionable, industry-wide standards.
Odile Hounkpatin has over 25 years' experience in the financial industry, during which she has held various leadership positions including Head of Model Development and Model Validation. Her expertise spans Complex Derivatives Pricing, Traded and Banking Book Valuation and Risk models, Stress Testing, and Capital, as well as AI/ML/GenAI models. She has worked with various institutions in Paris and London, including Natixis, ABN AMRO, Nomura, and Santander UK.
