UK Finance Unprepared for AI Shocks, MPs Warn

A powerful parliamentary committee has issued a stark warning that the UK's financial system may be dangerously unprepared for a major incident triggered by artificial intelligence.

Worrying Evidence of Systemic Risk

The Treasury Select Committee said it had received a "significant volume of evidence" detailing the threats AI poses to financial services consumers and to overall financial stability. Dame Meg Hillier, the committee's chairwoman, expressed deep concern: "Based on the evidence I’ve seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident."

The cross-party group of MPs highlighted that while AI offers benefits like faster services, their inquiry uncovered substantial risks that could "reverse any potential gains." They criticised a prevailing "wait and see" approach by regulators, arguing it risks exposing people and the financial system to serious harm.

Key Risks Identified by MPs

The committee's report, published on Tuesday 20 January 2026, outlined a series of critical vulnerabilities:

  • Lack of transparency in AI-driven decisions for credit and insurance, potentially leading to unfair outcomes.
  • Rising financial exclusion for the most disadvantaged customers.
  • Consumers being misled by unregulated advice from AI search engines.
  • An increase in the volume and sophistication of fraud and cyber-attacks.
  • AI-driven market trading amplifying herding behaviour, which could precipitate a financial crisis in a worst-case scenario.

Furthermore, the report revealed that over 75% of UK financial services firms now use AI, with the highest adoption among insurers and international banks. This reliance is compounded by a concerning dependency on a small number of US tech firms for both AI and cloud services.

Calls for Proactive Regulation and Testing

The committee urged immediate action to bolster resilience. Its key recommendations include:

  • The Financial Conduct Authority (FCA) and the Bank of England should conduct AI-specific stress tests to ensure firms are ready for potential market shocks.
  • The FCA should publish practical guidance for firms by the end of 2026, clarifying how consumer protection rules apply to AI and defining clear accountability for harms caused.
  • The Government should use its new Critical Third Parties Regime to designate major AI and cloud providers, giving regulators greater oversight and improving sector-wide resilience.

The report criticised the current regulatory framework as reactive, leaving firms with "little practical clarity" on applying existing rules to AI. This uncertainty, it argued, increases risks to both consumers and market integrity.

Government and Regulatory Response

In response to the growing challenge, the Government announced on Tuesday the appointment of two industry "AI champions" for financial services: Harriet Rees of Starling Bank and Dr Rohit Dhawan of Lloyds Banking Group. Their unpaid roles, effective immediately, will focus on accelerating safe, large-scale AI adoption.

Economic Secretary to the Treasury, Lucy Rigby, said the appointments would help "unlock growth while keeping our financial system secure and resilient." A Treasury spokesperson added that the Government would "not wait around" and was actively working with regulators to strengthen its approach.

The Bank of England welcomed the report, stating it had already taken steps to assess AI risks and would consider the recommendations carefully. The FCA also welcomed the focus, pointing to its existing work, including its AI live testing service launched in April 2025 and its "supercharged sandbox" for experimentation.

However, the committee noted that many stakeholders viewed the FCA's current supervision of AI as too reactive, underscoring the urgent need for the proactive measures it has recommended to safeguard the UK's financial future.