AI Regulation Blind Spots Put Banks at Risk
Banks are being warned not to rely solely on AI regulatory definitions when managing risk. Experts argue that real exposure lies in how systems behave in practice, pushing firms to strengthen testing, governance, and oversight beyond what current laws explicitly require.
Apr 07, 2026
Tags: AI and Technology (including Fintech), Industry News
The views and opinions expressed in this content are those of the thought leader as an individual and are not attributed to CeFPro or any other organization
  • Banks warned not to equate regulatory scope with actual AI risk
  • Many AI tools fall outside high-risk categories but still impact outcomes
  • Regulators increasing focus on testing, governance, and real-world performance
  • QA teams becoming central to AI risk management and validation
  • Poor testing can lead to operational, reputational, and compliance failures
  • Structured assurance models emerging across risk tiers
  • Synthetic data still requires strong governance and validation
  • Documentation and auditability critical for regulatory confidence
  • Fairness extends beyond legal discrimination definitions
  • Human oversight must be accountable and meaningful 

Banks and financial institutions navigating the rapidly evolving landscape of artificial intelligence regulation are being cautioned against equating legal definitions of risk with the realities of operational exposure.

As firms map their obligations under frameworks such as the EU AI Act and the Colorado AI Act, a growing concern is emerging that many are focusing too narrowly on whether systems fall into formally defined categories.

According to Jey Kumarasamy, legal director of the AI Division at ZwillGen, this approach risks overlooking more significant vulnerabilities.

“If our system isn’t ‘high risk’ under the EU Artificial Intelligence Act or the Colorado AI Act, why do I need to do anything about it?” Kumarasamy asked, highlighting what he sees as a fundamental misunderstanding among firms deploying AI into critical workflows.

He argues that relying on statutory definitions alone is insufficient. “The assumption is that risk is whatever statutes say it is, and only that. But that framing is narrow, unsafe for consumers and bad for business,” he said.

For banks, the implications are significant. Many AI tools used across operations, customer service, fraud detection, and internal decision-making may fall outside formal high-risk classifications but still influence outcomes in ways that carry material risk.

Regulators are increasingly reflecting this broader perspective. In the United Kingdom, both the Bank of England and the Prudential Regulation Authority have intensified scrutiny of model governance, validation, explainability, and oversight of third-party AI systems.

At the same time, the Financial Conduct Authority has emphasized the importance of live testing, synthetic data controls, and ongoing assurance after deployment.

These developments signal a shift in expectations. Compliance is no longer limited to classification and documentation. Instead, firms are being required to demonstrate that AI systems are controlled, testable, and accountable under real-world conditions.

Kumarasamy warns that firms should not wait for regulators to define every risk scenario. “The laws create a focus on categories that regulators prioritise and enforce. They do not attempt to say that these categories are the extent of the risks businesses may face by using other AI systems,” he said.

In practice, this means that testing and quality assurance functions are becoming central to governance. Their role is expanding beyond technical validation to include assessing robustness, explainability, drift, and the effectiveness of controls.

“If a system has not been ensured via sufficient and appropriate testing to perform its expected function in a reliable, understandable, and predictable way, then why would it be embedded into business operations?” Kumarasamy said.
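To make this concrete, the sketch below computes a Population Stability Index, a drift metric long used in bank model validation, to flag when a model's live inputs or scores have shifted away from their training-time baseline. It is a minimal illustration under assumed data, not any regulator's prescribed method, and the thresholds quoted are rules of thumb rather than supervisory standards.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a live distribution against its baseline using PSI.

    Minimal illustration for drift monitoring; not a
    regulatory-grade implementation.
    """
    # Bin edges come from the baseline (training-time) distribution
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Guard empty bins so the log term is defined
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct))

# Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 investigate, > 0.25 drifted
baseline = np.random.normal(0.0, 1.0, 10_000)  # stand-in for training scores
live = np.random.normal(0.3, 1.1, 10_000)      # stand-in for production scores
print(f"PSI: {population_stability_index(baseline, live):.3f}")
```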

The challenge is particularly acute in areas where AI systems influence decision-making but do not meet formal high-risk thresholds. Examples include employee performance tools, customer sentiment analysis, and chatbots.

While these applications may appear low risk from a regulatory perspective, failures can still lead to reputational damage, customer complaints, and supervisory attention.

“If a tool shapes decisions, such as who gets attention, how people are judged, or how customers are responded to, then zero oversight is rarely the right answer,” he said.

As a result, banks are being pushed toward more structured and evidence-based assurance models.

Lower-risk systems may require only basic functional testing and documentation, while medium-risk tools demand performance monitoring, fairness checks, and contingency planning.

For higher-risk applications, the expectations are far more stringent, including extensive validation, human oversight, continuous monitoring, and independent review.
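One way a firm might encode such a tiered model internally is sketched below. The tier names and control lists simply paraphrase the expectations described above; they are illustrative, not drawn from any statute or supervisory rulebook.

```python
# Illustrative tiered assurance map; controls paraphrase the article,
# not any statutory classification.
ASSURANCE_TIERS = {
    "low": ["functional testing", "basic documentation"],
    "medium": ["performance monitoring", "fairness checks", "contingency planning"],
    "high": ["extensive validation", "human oversight",
             "continuous monitoring", "independent review"],
}

def required_controls(tier: str) -> list[str]:
    """Return cumulative controls for a risk tier; higher tiers
    inherit everything required of the tiers below them."""
    order = ["low", "medium", "high"]
    if tier not in order:
        raise ValueError(f"unknown tier: {tier}")
    controls: list[str] = []
    for t in order[: order.index(tier) + 1]:
        controls.extend(ASSURANCE_TIERS[t])
    return controls

print(required_controls("medium"))
# ['functional testing', 'basic documentation', 'performance monitoring',
#  'fairness checks', 'contingency planning']
```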

Data governance is also becoming a critical component of this framework. The growing use of synthetic data to test AI systems does not eliminate risk. Regulators and industry experts emphasize that such data must still be subject to auditability, privacy assessment, and rigorous testing.
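One simple screen of that kind, sketched below under assumed data, compares a synthetic feature's distribution against its production counterpart with a two-sample Kolmogorov-Smirnov test. This checks only statistical resemblance; real governance would layer privacy assessment and audit trails on top, as the article notes.

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical check: does a synthetic column track the real one closely
# enough to serve as a test substitute? Distributions here are simulated.
real = np.random.lognormal(mean=8.0, sigma=1.0, size=5_000)       # e.g. real txn amounts
synthetic = np.random.lognormal(mean=8.05, sigma=1.1, size=5_000) # generated data

stat, p_value = ks_2samp(real, synthetic)
print(f"KS statistic={stat:.3f}, p={p_value:.3f}")
if stat > 0.1:  # illustrative threshold, tuned per use case in practice
    print("Synthetic distribution diverges from production; investigate")
```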

Kumarasamy stresses that documentation is essential to this process. Firms must be able to demonstrate how systems were built, tested, and monitored, including the datasets used, performance metrics, and known limitations.

“Document what datasets were used, test protocols, metrics, cohort-level results, known limitations, and mitigations accepted or deferred, with rationale,” he said.
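A hedged sketch of what such a record could look like as a structured artifact follows. Every field name is hypothetical, chosen only to mirror the items Kumarasamy lists, not taken from any documentation standard.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelTestRecord:
    """Hypothetical schema mirroring the documentation items quoted above."""
    model_name: str
    datasets_used: list[str]
    test_protocols: list[str]
    metrics: dict[str, float]                     # e.g. {"accuracy": 0.94}
    cohort_results: dict[str, dict[str, float]]   # per-group metrics
    known_limitations: list[str]
    mitigations: list[str]                        # accepted or deferred
    rationale: str                                # why mitigations were accepted/deferred
    recorded_on: date = field(default_factory=date.today)

record = ModelTestRecord(
    model_name="chatbot-intent-router",
    datasets_used=["support_tickets_2025Q3"],
    test_protocols=["holdout evaluation", "adversarial prompts"],
    metrics={"accuracy": 0.94},
    cohort_results={"en": {"accuracy": 0.95}, "non-en": {"accuracy": 0.88}},
    known_limitations=["degrades on non-English queries"],
    mitigations=["route low-confidence queries to human agents"],
    rationale="non-English gap accepted pending retraining",
)
```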

The question of fairness further complicates the landscape. Kumarasamy notes that fairness cannot be reduced to traditional legal definitions of discrimination.

Instead, firms must assess how outcomes vary across different groups and ensure that these differences are understood and justified.
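In code, the first step of that assessment can be as simple as comparing outcome rates by group, as in the toy sketch below (the data and column names are hypothetical). The harder work, per the article, is justifying any gap found, not computing it.

```python
import pandas as pd

# Toy example: compare approval rates across groups, not only across
# legally protected categories. Data is invented for illustration.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

rates = df.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()
print(rates)
print(f"Outcome-rate gap across groups: {gap:.2f}")
# No single threshold makes a gap acceptable: differences must be
# understood and justified, not merely measured.
```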

Human oversight remains an important safeguard, but it must be meaningful rather than symbolic. “This means assigning clear responsibility and making the reviewer accountable, not a rubber stamp,” he said.

The broader message for financial institutions is clear. Regulatory thresholds provide a starting point, but they do not define the full scope of risk associated with AI.

As these systems become more embedded in core operations, the burden is shifting toward firms to prove that they function safely and reliably.

Kumarasamy’s conclusion underlines this shift. “Artificial intelligence governance laws create certain categories where assessment, testing, and documentation are mandatory. They certainly matter,” he said.

“However, these thresholds cannot define the total risk posture for a company building, buying or operating AI in ways integral to its operations.”

For banks, the implication is that governance is no longer a compliance exercise alone. It is a core component of resilience, ensuring that systems perform as intended and that risks are identified and managed before they escalate.
