CeFPro Connect

Event Q&A
AI in Model Risk: From Documentation Burden to Evidence-Native Governance
As AI becomes embedded in model risk management, firms are focusing on practical applications that improve inventory oversight, documentation review, and risk monitoring. With regulatory expectations rising around transparency and accountability, this Q&A explores how AI can enhance governance efficiency while maintaining strong controls, traceability, and second-line independence.
Feb 19, 2026
Suresh Sankaran, Prudential Regulation Lead, NatWest
Tags: Model risk
The views and opinions expressed in this content are those of the thought leader as an individual and are not attributed to CeFPro or any other organization
  • Explores AI applications in model inventory management and risk mapping
  • Discusses responsible automation in documentation and audit processes
  • Examines how AI-driven monitoring can strengthen drift detection and governance
  • Highlights the importance of traceability, citations, and workflow gating
  • Looks ahead to “evidence-native” governance tools that balance efficiency with explainability

Ahead of Advanced Model Risk Europe, we spoke with Suresh Sankaran, Prudential Regulation Lead at NatWest, about the practical realities of embedding AI into model risk governance. As firms look to automate documentation, monitoring, and portfolio oversight, the focus is shifting from theoretical capability to operational control - ensuring that efficiency gains are matched by transparency, traceability, and regulator-ready evidence. 


AI is often discussed in abstract terms, but your session focuses on practical applications.  Where are you seeing AI deliver the most immediate value today in areas like model inventory management and portfolio-wide risk mapping?

The most immediate value is in inventory hygiene and evidence linking: using AI to classify model artefacts, extract consistent metadata, and connect models to requirements and risk themes so you get a living map of what matters, where, and why.


In practice, inventory work fails in two predictable ways: it becomes out of date, or it becomes performative.  AI helps by doing the dull but essential reading at scale: pulling key fields from documentation, flagging gaps, and creating a consistent structure so that owners and second line can spend time on judgement rather than transcription.  In our own work, we already apply AI to document classification, automated extraction and summarisation, and risk-identification insights.  That maps neatly onto model inventory management, where the core job is to keep a central view of purpose, ownership, limitations, and materiality up to date.

At portfolio level, the value is in risk mapping: clustering models by shared drivers, data dependencies, regulatory touchpoints, or known limitations, then surfacing concentrations.  The objective is not to pretend AI can replace governance.  It is to let governance see the full landscape without needing to read several tonnes of PDFs first.  


Practical proof points:

  • AI-assisted document review and gap analysis that compares internal documentation to requirements and highlights inconsistencies or omissions.

  • A model inventory should be centralised, dynamic, and metadata-rich, including classification by materiality and complexity.  AI accelerates population and refresh of that metadata, while humans retain accountability.

AI is excellent at reading.  This is fortunate, because our industry produces reading material at an industrial scale!
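The inventory pattern described above (consistent metadata extraction plus gap flagging, with humans retaining accountability) can be sketched in a few lines.  This is a minimal illustration only; the record structure and field names are hypothetical, not a description of any firm's actual inventory schema.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical, minimal inventory record; field names are illustrative only.
@dataclass
class ModelRecord:
    model_id: str
    purpose: Optional[str] = None
    owner: Optional[str] = None
    limitations: Optional[str] = None
    materiality: Optional[str] = None  # e.g. "high" / "medium" / "low"

REQUIRED_FIELDS = ("purpose", "owner", "limitations", "materiality")

def metadata_gaps(record: ModelRecord) -> list[str]:
    """Flag required metadata fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not getattr(record, f)]

# Example: an extraction pipeline populated purpose and owner, but the
# remaining fields still need human attention.
rec = ModelRecord(model_id="IRB-PD-001", purpose="PD estimation", owner="Credit Risk")
print(metadata_gaps(rec))  # ['limitations', 'materiality']
```

The point of the sketch is the division of labour: the machine populates and flags; an accountable owner decides what the gap means and fills it.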

Automation through NLP and machine learning is transforming documentation and audit processes.  How can firms responsibly use these tools to reduce manual effort while still maintaining strong regulatory confidence and oversight?

Responsible automation in documentation and audit means reducing manual effort without compromising regulatory confidence. NLP and ML should be deployed as controlled assistants – operating within a human-in-the-loop framework with full traceability, strong change control, and a clearly documented boundary between what the tool proposes and what the firm approves.


Responsible automation starts with being explicit about the risk: regulators do not object to efficiency; they object to unverifiable efficiency.  The pattern that works is staged adoption.  In the short term, AI supports reviewers by checking consistency between regulatory text, commentary, and evidence, while the accountable team signs off.  The near-term construct is exactly that: populate commentary and evidence, then ensure alignment and consistency with the regulatory text.

Over time, you can increase automation, but you keep regulatory confidence by embedding four controls:

  1. Citations and provenance: every AI suggestion links back to the underlying artefact it used.

  2. Workflow gating: no automated output becomes official until an accountable role approves it.

  3. Versioning and change management: treat prompt changes, model updates, and taxonomy changes like any other controlled change.

  4. Independent challenge: second line oversight remains second line oversight, even if the first draft is machine-generated.  The longer-term vision should explicitly include second-line review and governance sign-off outputs, which is the right shape.
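Controls 1 and 2 in the list above (provenance and workflow gating) can be made concrete in code.  The sketch below is a deliberately simplified illustration under assumed names, not a real workflow system: a machine-generated suggestion must carry citations back to source artefacts, and nothing becomes official until an accountable role approves it.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of citation-carrying suggestions plus approval gating.
# All class and field names are hypothetical.
@dataclass
class Suggestion:
    text: str
    citations: list[str]              # links back to the artefacts used
    approved_by: Optional[str] = None

    def approve(self, role: str) -> None:
        """Workflow gate: provenance is a precondition for approval."""
        if not self.citations:
            raise ValueError("No provenance: suggestion cannot be approved")
        self.approved_by = role

    @property
    def is_official(self) -> bool:
        return self.approved_by is not None

s = Suggestion("Limitation X applies under stress scenarios",
               citations=["validation_report_2025.pdf#p12"])
assert not s.is_official           # machine draft only, not yet official
s.approve("Head of Model Risk")    # accountable human sign-off
assert s.is_official
```

Versioning and independent challenge (controls 3 and 4) sit around this object in change management and second-line review, not inside it.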

Practical proof points:

  • AI can reduce manual error by highlighting inconsistencies and omissions during document review and gap analysis.

  • A mature MRM framework expects clear roles across lines of defence, documentation standards, and change control.  Automation must sit inside that, not beside it.

We are not outsourcing accountability to a robot.  We are outsourcing the first draft, which is what most of us were doing anyway.


AI-driven monitoring tools can now detect model drift, anomalies, and performance shifts in near real time.  What are the key considerations when embedding these capabilities into existing governance and validation frameworks?

AI monitoring should be treated as an early warning system that feeds existing governance: clear thresholds, ownership, escalation routes, and an auditable link from alert to decision to action.

The main consideration is avoiding “dashboard theatre”.  Monitoring only adds value if it changes outcomes.  To embed it properly, align it with three governance elements already familiar to model risk:

  • Defined monitoring metrics and trigger thresholds (what constitutes drift, instability, or anomaly).

  • Accountability and escalation (who investigates, who approves remediation, and by when).

  • Link to change management and validation (when an alert triggers a model change, a limitation, a compensating control, or a re-validation).

We already apply real-time monitoring and alerts to regulatory change impacts and compliance status.  The same control logic applies to model monitoring: alerts should be explainable, logged, and tied to a governed response path, not simply observed.
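As a concrete illustration of a defined metric with a trigger threshold feeding a governed response, the sketch below uses the Population Stability Index (PSI), a common drift measure over binned score distributions.  The bin shares, the 0.25 threshold, and the function names are illustrative assumptions, not a house standard.

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index over pre-binned distribution shares."""
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

def drift_alert(expected: list[float], actual: list[float],
                threshold: float = 0.25) -> dict:
    """Turn a metric breach into a governed action, not just a dashboard light."""
    value = round(psi(expected, actual), 4)
    if value > threshold:
        # In a governed framework this would open a logged case with an
        # owner, an escalation route, and a deadline.
        return {"metric": "PSI", "value": value, "action": "escalate"}
    return {"metric": "PSI", "value": value, "action": "none"}

baseline = [0.25, 0.25, 0.25, 0.25]   # development-time bin shares
current  = [0.05, 0.15, 0.30, 0.50]   # monitored bin shares
print(drift_alert(baseline, current))
# {'metric': 'PSI', 'value': 0.5554, 'action': 'escalate'}
```

The design choice worth copying is the return value: every alert is an explainable record (metric, value, action) that can be logged and audited, which is what separates monitoring from wallpaper.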


Practical proof points:

  • Intended AI use cases include model validation, challenger models, and self-assessments, which naturally complement monitoring by providing challenge when drift appears.

  • Strong MRM expects clear documentation of objectives, assumptions, limitations, and controlled change management.  Monitoring outputs should update those artefacts, not live in a separate universe.

If an alert triggers no action, it is not monitoring.  It is modern wallpaper.


Looking ahead, how do you see AI-enabled governance tools evolving over the next few years - particularly in balancing efficiency gains with growing expectations around transparency and explainability?

The next phase is evidence-native governance: tools that do not just automate tasks, but automatically produce audit-ready trails, citations, and explainable summaries that fit into existing committees and controls.

Over the next few years, I expect three shifts.

  • First, governance tooling will become more traceable by design.  The direction of travel is already toward better citations and constraints on what tools can ingest and how they justify outputs.  The focus will be on practical limitations (input structures) and the mechanics needed for reliable outputs, including attention to citations.  That is precisely the route to explainability: less mystique, more provenance.

  • Second, we will see more workflow-integrated agents that sit inside existing governance steps rather than creating a parallel process.  Automation will increasingly be framed around compliance checking, summaries, and monitoring: governance-adjacent capabilities that will plug straight into model lifecycle controls.

  • Third, efficiency will be tolerated only where transparency keeps pace.  The winning tools will be those that make it easier to answer the question: “How did you reach that conclusion, and what did you use to do it?”  That is a much more durable story than “the model said so”.

In governance, the future is not faster decisions.  It is faster evidence for the same decisions.


As AI becomes more embedded in model risk oversight, why are industry forums like Advanced Model Risk Europe important for helping practitioners share real-world case studies and move from theory to practical implementation?

Forums like Advanced Model Risk Europe matter for moving from theory to implementation because the hardest part is not the algorithm.  It is operationalising it under scrutiny.  Forums let practitioners swap what actually worked, what failed, and how they built regulator-ready controls around the technology.

Industry forums accelerate maturity in two ways: shared patterns and shared language.  First, they let firms compare practical approaches to issues like evidence trails, human oversight, validation, and monitoring, so we stop reinventing the same control set in slightly different fonts.  Second, they help build a consistent narrative about responsible use, which matters when expectations on transparency and explainability are rising.

Our engagements with external peer forums like Advanced Model Risk Europe show the value of convening practitioners across institutions to discuss governance in real operating conditions.  It is a venue where case studies and implementation detail can move the conversation from “interesting idea” to “repeatable method”.

Theory is elegant.  Implementation is where the paperwork lives.
