Video
- AI is expected to be increasingly used to make critical business decisions, so early adopters will likely see efficiency gains.
- Early adopters will be able to improve the speed and throughput of their work and will probably realize a return on their investment.
- Risk should be considered in two ways: the risks inherent in the model itself, and the risks of actually implementing it.
- Drift is one of the key risks: will your AI model be able to adapt quickly enough to changing environments or changing data? Will it develop or learn some form of bias as it learns new rules? Organizations need to ensure they have a very clear use case for AI.
- Organizations must be mindful of privacy and security concerns when training AI or deploying it in contexts involving personally identifiable information.
- Because of their probabilistic nature, even the most state-of-the-art models will make mistakes, no matter how well they are trained.
- Organizations need to embrace the fact that AI will make mistakes; those mistakes can be used to retrain AI or enhance its training.
- People value and prioritize human interaction, but we need to recognize that AI is genuinely good at fulfilling certain functions and improving people's interactions with a bank.
- Managing risk and mitigating potential exposure doesn't have to be a zero-sum game; there is a lot to gain from proactively mitigating risk.
- Collaboration between people who are passionate about AI performance and those focused on risk mitigation could help reduce exposures caused by AI.
Alexandra is an Underwriter of AI performance risks at Munich Re. She partners with AI providers to de-risk their customers' operations and boost sales, working across the full value chain of their solutions, from technical due diligence on AI capabilities to go-to-market strategies that improve adoption of their offerings. She spearheads the development of a new insurance product for liabilities arising from discrimination and bias in AI, and moderates the podcast AI – Safe and Sound, which explores AI in business. Alexandra holds a Juris Doctor (a master's-level law degree) from the University of Western Australia.