- Models struggle with scarce or imperfect data.
- Illiquid markets can suddenly disrupt pricing.
- Back-testing is less reliable with thin data.
- Advanced analytics reveal dependencies without excess complexity.
- Adjustments require careful governance and documentation.
- Too many overrides may indicate model issues.
Ahead of Risk Evolve 2026, we spoke to Danny Dieleman about how, as markets evolve amid uncertainty, traditional valuation frameworks face mounting pressure from scarce or imperfect data. Danny Dieleman, Head of Wholesale Banking Capital Treasury at ING, explores how firms can adapt risk and valuation practices, leverage advanced analytics, and ensure defensible pricing.
Why are traditional valuation frameworks coming under pressure in today’s uncertain and data-constrained market environment?
There is probably more data available than ever before in human history. Not only is the amount of available data almost incomprehensible, the timeliness of that data is also astonishing: in many cases we can obtain it in real time, or otherwise several times a day. At the same time, we rely more and more on this data for key processes, and hence we are more vulnerable to its availability and quality. This certainly holds for financial models: they are used to mark transactions to market, for capital and risk management purposes, and for other key processes in the bank. All models are in essence a simplification of reality and should therefore be treated with care, certainly in areas where historical patterns are shifting, or where data is by nature scarce, skewed, or imperfect.
Where are you seeing the most critical vulnerabilities in model assumptions as market structures continue to shift?
Despite the availability of data, there are still certain asset classes that remain relatively illiquid and where data is scarce. But even liquid markets can turn illiquid in the event of a market disruption, and then instruments that can normally be priced with a simple look at a Bloomberg terminal can, in the blink of an eye, become difficult to price. The risk you run when data is difficult to come by is that your pricing can be way off; or, alternatively, your price is correct, but you have a hard time explaining it to auditors or your regulator.
How should back-testing evolve to remain credible when historical data is less reliable?
Back-testing will always remain an important tool to verify whether the model makes sense and whether the current calibration is still in line with the market. But if observable prices are thin, or not representative of the portfolio that needs to be valued, the information obtained from back-testing will be less robust. As a consequence, it cannot be ruled out that there is more “model risk”, as neither the underlying assumptions of the model nor the calibration can be judged adequate. This may have consequences for any model risk assessment that needs to be made as part of the model development and/or monitoring process.
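
To make the point concrete, below is a minimal back-testing sketch in Python. It is purely illustrative and not a description of any firm's process: the function name, tolerance, and minimum sample size are assumptions. The key idea is simply that a thin set of observable prices should downgrade the conclusiveness of the test rather than produce a false pass or fail.

```python
# Illustrative back-test sketch; thresholds and names are assumptions.
import statistics

def backtest_prices(model_prices, observed_prices,
                    min_observations=30, tolerance_bps=25.0):
    """Compare paired model and observed prices for the same instruments/dates."""
    pairs = [(m, o) for m, o in zip(model_prices, observed_prices) if o is not None]
    if len(pairs) < min_observations:
        # Thin data: report the back-test as inconclusive instead of pass/fail.
        return {"status": "inconclusive", "n": len(pairs),
                "note": "too few observable prices for a robust back-test"}
    errors_bps = [(m - o) / o * 1e4 for m, o in pairs]
    mean_err = statistics.mean(errors_bps)
    max_abs_err = max(abs(e) for e in errors_bps)
    status = "pass" if abs(mean_err) <= tolerance_bps else "review calibration"
    return {"status": status, "n": len(pairs),
            "mean_error_bps": round(mean_err, 1),
            "max_abs_error_bps": round(max_abs_err, 1)}
```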
What role can advanced analytics play in improving pricing transparency without adding unnecessary model complexity?
This is a tricky question, as advanced analytics carries the risk of adding unexplained complexity to the model, and that is something that regulators, but also management, are usually not keen on. There is, however, certainly a role for advanced analytics, as these tools can bring hidden dependencies to light, or they can help in areas where data is incomplete. This can be embedded in the model development process in different ways.
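
As an illustration of what such analytics can look like in their simplest form, the sketch below ranks candidate risk factors by rank correlation with model pricing errors. It is a generic diagnostic, not a technique attributed to the interviewee; the factor names and data are invented, and in practice any dependency surfaced this way would be reviewed with model developers and validators before touching the model itself.

```python
# Generic dependency scan, purely illustrative; factor names and data are invented.
import numpy as np
from scipy.stats import spearmanr

def rank_factor_dependencies(pricing_errors, candidate_factors):
    """Rank candidate risk factors by rank correlation with model pricing errors."""
    results = []
    for name, series in candidate_factors.items():
        rho, p_value = spearmanr(pricing_errors, series)
        results.append((name, rho, p_value))
    # Strongest dependencies first: candidates for investigation,
    # not automatic new model features.
    return sorted(results, key=lambda r: abs(r[1]), reverse=True)

rng = np.random.default_rng(0)
errors = rng.normal(size=250)
factors = {
    "credit_spread": 0.6 * errors + rng.normal(size=250),  # embedded dependence
    "fx_basis": rng.normal(size=250),                       # pure noise
}
for name, rho, p in rank_factor_dependencies(errors, factors):
    print(f"{name}: rho={rho:+.2f}, p={p:.3f}")
```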
How are firms embedding adjustments to the modelled fair value into risk and valuation processes while maintaining consistency and defensibility?
This needs to be addressed in the governance of the valuation process. It is an important issue, as any adjustments or overrides to a model may come across as arbitrary, or as cherry-picking. So a careful analysis, with input from all relevant stakeholders, and a well-documented process are some of the key elements for avoiding misconceptions. Also, too many overrides of model outcomes may lead to the conclusion that the model is not performing well and needs to be redeveloped. This may or may not be desirable, but one needs to keep it in mind during the decision-making process.
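
One simple way to make that consideration measurable is to track the override rate per valuation run and escalate when it drifts above an agreed threshold. The sketch below is an assumption-laden illustration, not a prescribed control: the record format, field names, and the 5% alert threshold are all invented for the example.

```python
# Hedged illustration of tracking override usage; record format, field names,
# and the alert threshold are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ValuationRecord:
    instrument_id: str
    model_value: float
    final_value: float
    override_reason: Optional[str] = None  # documented justification, if overridden

def override_summary(records, alert_rate=0.05):
    """Summarise how often, and how well documented, model values were overridden."""
    total = len(records)
    overridden = [r for r in records if r.override_reason is not None]
    rate = len(overridden) / total if total else 0.0
    return {
        "total_positions": total,
        "overrides": len(overridden),
        "override_rate": round(rate, 4),
        "undocumented_overrides": sum(1 for r in overridden if not r.override_reason.strip()),
        "escalate_to_model_owner": rate > alert_rate,
    }
```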
