Ahead of Advanced Model Risk Europe, we spoke with Arpit Jain about how model risk management is evolving as AI adoption accelerates and regulatory expectations rise. He shares insights on closing the skills gap, strengthening collaboration between lines of defence, and expanding MRM’s role beyond validation to support responsible innovation across the model lifecycle.
The widening skills gap in model risk management is often talked about. From your perspective, which combinations of skills - quant, AI/ML, domain expertise, or governance - are proving hardest to find today?
I can best answer this by sharing a snippet from a conversation I had with a senior manager in MRM at the beginning of my career. She told me that the future of MRM belongs to those who are effectively bilingual: both technically fluent and risk-aware. Professionals tend to fall into one of two categories, either highly technical and analytical specialists, or domain experts with strong governance and judgement. It is very rare to find holistic thinkers who can genuinely bridge the gap between advanced technical analysis and robust risk management practices.
It is comparatively easy to find strong mathematicians and statisticians, but very few of them would understand why MRM exists in the first place. Many candidates are proficient in AI, machine learning techniques, open-source tools, and experimental modelling approaches, yet struggle to translate these capabilities into credit risk, market risk, or financial crime contexts in a way that aligns with regulatory expectations. Conversely, many professionals with deep domain expertise can readily understand validation theory, apply it to complex models, and clearly articulate model limitations to senior management, risk committees, and regulators. However, they often lack the technical depth required to substantiate their theoretical understanding.
Model risk management therefore increasingly requires validators who can move seamlessly between code, controls, and committees. Being able to challenge a complex model on regulatory grounds is only half the job; articulating that challenge and supporting it with technical analysis is equally critical. This blend of analytical rigour and professional judgement remains rare, particularly among young talent. It typically develops only through sustained exposure and experience that cannot be easily accelerated. Cultivating this balance requires time, curiosity, and humility. But it is precisely what makes this profession both challenging and deeply rewarding.
Financial institutions are experimenting with different ways to build cross-functional talent. What have you seen actually work in practice - whether it’s structured training, certification pathways, or rotational programmes?
In my experience, what works in practice is a combination of collaborative structured training sessions and rotational programmes. Together, these build a development ecosystem in which model risk capabilities can steadily improve.
Structured training sessions play a critical role in establishing foundational knowledge, particularly early in a career. Training in areas such as statistics, econometrics, artificial intelligence, machine learning, and risk management frameworks is essential to creating a common technical and regulatory language, as well as a shared baseline of rigour. However, training alone rarely produces strong model risk validators. The most impactful sessions I have seen are collaborative and problem-led, rather than syllabus-driven sessions delivered by an individual expert. When people come together to discuss real models, real data limitations, the interpretation of governance standards, and practical validation challenges, it fosters a learning environment that supports the development of both young talent and experienced MRM professionals.
Rotational programmes are most effective when they are long enough for individuals to experience the real consequences of their decisions and to understand the importance of each process, rather than simply observing it. A modeller who has had to defend their work to validation, a validator who has experienced the pressures of model delivery, or a model user who understands the mathematical foundations behind model construction will inevitably develop a more balanced, credible, and pragmatic perspective on MRM. My own career began in a graduate programme that allowed me to rotate between model development and model validation. Experiencing both sides of the lifecycle not only deepened my understanding of the overall process, but also enabled me to make a more informed decision about where my skills were best aligned.
Ultimately, excellence in MRM does not come from isolated training or narrow specialisation. It emerges from shared problem-solving and exposure across the model lifecycle. Organisations that invest in both collaborative learning and meaningful rotations are far more likely to build a pool of cross-functional talent.
Collaboration models are changing as AI adoption grows. How are organisations practically enabling first-line teams while still ensuring strong second-line oversight and independence?
What I have seen work best at NatWest is not a dilution of independence, but a clearer articulation of roles, supported by earlier involvement and more informed engagement from the second line. NatWest has been embedding model risk validators earlier in the model lifecycle, particularly during the design and data selection stages. This early engagement helps first-line developers better understand regulatory expectations around data governance, control standards, and model risk considerations.
At NatWest, this collaboration is facilitated through regular meetings between model users and developers, known as Model User Groups (MUGs), which are also attended by MRM experts. Their involvement is explicitly advisory rather than permissive. Independence is preserved because formal validation opinions are formed later, based on documented evidence and with a clear separation of accountability.
Both lines maintain their own guidance: the first line has development guidelines, while the second line maintains validation standards. Although both are derived from PRA regulations, each reflects the distinct perspective and responsibilities of its line of defence. A strong second-line function should invest in clear, reusable artefacts, such as modelling standards and pre-defined validation guidelines for specific model suites (for example, separate standards for IFRS 9 and IRB models). This shifts the dynamic from case-by-case debate to a shared understanding of what is best for the organisation, enabling first-line teams to move faster while reducing downstream friction.
Independence is only meaningful when the second line is technically credible. Organisations that have successfully enabled collaboration ensure their MRM teams can engage deeply with modelling methods, not just to replicate development work, but also to challenge assumptions, stress limitations, and assess control adequacy with confidence. This technical credibility reduces defensiveness and improves the robustness of challenge.
Finally, collaboration is effective when it is underpinned by explicit governance. Disagreements are inevitable; what matters is that escalation paths are clear, decision rights are transparent, and outcomes are properly documented. This clarity enables the first line to operate efficiently while allowing the second line to maintain independence.
Looking ahead, as MRM shifts from “checking models” to co-creating controls and guiding responsible AI innovation, how do you see the mandate of the function evolving over the next five years?
Over the next few years, I expect the authority and scope of MRM duties to expand materially, driven by several structural shifts in how models are developed, deployed, monitored and governed.
Traditionally, model risk functions have been strongest at point-in-time validation, assessing models only once they are built and deciding whether they meet approval standards. While this will remain a core responsibility, I anticipate much deeper involvement across the full model lifecycle. MRM will be involved at every stage, from data sourcing and model design through ongoing monitoring, change management, and eventual decommissioning. This will not be about owning models, but about stewarding the control environment within which models operate.
As models become more complex and less transparent, especially with the growing use of advanced AI and machine learning techniques, the idea that compliance can be reduced to checklist validation becomes increasingly untenable. The future mandate will place greater emphasis on professional judgement: assessing limitations, understanding trade-offs between predictive performance and business intuition, and clearly articulating residual risks to decision-makers. This shift elevates MRM from a primarily technical gatekeeper to a trusted risk advisory function.
In this digital age, regulators and customers are increasingly concerned with how automated decisions are made and governed. Model risk functions will play a vital role in translating complex technical model behaviour into narratives that non-technical stakeholders can understand. This interpretive role will be as crucial as traditional regulatory validation.
The future of MRM is neither narrow nor mechanical; it is broader, judgement-driven, and influential. Effective MRM in the AI era will not be defined by saying “no” more often, nor by saying “yes” more easily. Rather, it will be defined by enabling responsible progress through clarity, competence, and independence that is actively exercised rather than passively asserted.
