UK Lawmakers Demand Tougher Oversight of AI in Finance
British lawmakers warned that regulators are moving too slowly to address risks from artificial intelligence in financial services, urging AI-specific stress tests and clearer consumer protection rules as automated systems spread across banking, insurance, and markets.
Jan 26, 2026
Tags: AI and Technology (including Fintech), Industry News
  • Lawmakers say regulators are moving too slowly on AI risks
  • Treasury Committee urges AI-specific stress tests for finance
  • FCA called on to clarify consumer protection rules by 2026
  • Concerns raised over opaque credit decisions and exclusion risks
  • Experts warn of financial stability threats from AI-driven trading
  • Reliance on major US tech firms seen as a concentration risk
  • Regulators say they will review and respond to recommendations

Britain’s financial regulators are not doing enough to prevent artificial intelligence from harming consumers or destabilizing markets, lawmakers said on Tuesday, calling on watchdogs to abandon a cautious wait-and-see approach as AI becomes embedded across financial services.

In a report on AI in finance, Parliament’s Treasury Committee said the Financial Conduct Authority and the Bank of England should begin running AI-specific stress tests to help firms prepare for market shocks triggered by automated decision-making systems.

The committee said the rapid deployment of AI across core financial functions has outpaced regulatory preparedness, leaving consumers and the wider financial system exposed to new forms of risk.

Lawmakers urged the FCA to publish detailed guidance by the end of 2026 clarifying how existing consumer protection rules apply to AI systems and how much senior managers must understand about the tools they oversee.

“Based on the evidence I’ve seen, I do not feel confident that our financial system is prepared if there was a major AI related incident and that is worrying,” committee chair Meg Hillier said in a statement.

Regulators and industry leaders have increasingly highlighted the risks associated with agentic AI: systems capable of making autonomous decisions rather than simply generating responses.

The FCA previously warned that competition among banks to adopt such tools could heighten risks for retail customers, particularly where automated decisions affect lending, pricing, or access to financial products.

Around three quarters of British financial firms are now using AI across core activities such as insurance claims processing, credit assessments, and customer interactions, according to evidence reviewed by the committee.

While lawmakers acknowledged that AI has delivered efficiency gains and improved services, they warned that the technology also introduces significant dangers.

The report flagged concerns over opaque credit decisions, the exclusion of vulnerable consumers through algorithmic profiling, increased fraud risks, and the spread of unregulated financial advice through AI-powered chatbots.

Lawmakers say these issues could undermine trust in the financial system if left unchecked.

Beyond consumer harm, witnesses to the inquiry raised alarms about threats to financial stability.

Experts told the committee that heavy reliance on a small group of US technology companies for AI models and cloud infrastructure could create concentration risks similar to those seen in other critical financial services.

Some also warned that AI-driven trading systems could amplify herding behavior in markets, where algorithms react to the same signals at the same time, potentially increasing volatility and raising the likelihood of sudden market dislocations.

The FCA said it would review the committee’s findings. The regulator has previously argued against introducing AI-specific rules, citing the fast pace of technological change and the need for flexible, principles-based oversight.

A Bank of England spokesperson said the central bank has already taken steps to assess AI related risks and strengthen the resilience of the financial system.

The spokesperson added that the bank would consider the committee’s recommendations and respond in due course.

Hillier said increasingly sophisticated forms of AI are already influencing financial decisions across the economy, heightening the stakes for regulators.

“If something has gone wrong in the system, that could have a very big impact on the consumer,” she said.

Separately, the UK finance ministry has moved to strengthen coordination on AI policy by appointing senior technology leaders from major banks to help guide the adoption of AI across financial services.

Lawmakers said such efforts must be matched by clearer regulatory expectations and stronger supervisory tools to ensure innovation does not come at the expense of stability or consumer protection.
