A powerful group of MPs has issued a stark warning that the UK's financial system and its consumers are being left exposed to 'serious harm' because regulators and the government have failed to properly tackle the risks posed by artificial intelligence.
A 'Wait-and-See' Approach to a Burgeoning Threat
In a damning new report, the Treasury committee criticised the government, the Bank of England, and the Financial Conduct Authority (FCA) for adopting a passive 'wait-and-see' stance towards AI's rapid adoption across the financial sector. The criticism comes despite the technology's widespread use: more than 75% of City firms now deploy AI in some form.
The report highlights that insurers and international banks are among the most significant adopters. AI is being utilised for tasks ranging from administrative automation to core operations like processing insurance claims and assessing customer credit scores.
However, the UK has not developed any specific laws to govern this use. Regulators have argued that existing general rules are sufficient, leaving businesses to interpret how old guidelines apply to new technology. The committee fears this regulatory gap is putting both consumer protection and overall financial stability in jeopardy.
Specific Risks to Consumers and Market Stability
The MPs identified several clear and present dangers. A primary concern is the lack of transparency in how AI models make financial decisions, which could unfairly disadvantage vulnerable consumers seeking loans or insurance. The report also notes it is unclear who would be held accountable—data providers, tech developers, or the financial firms themselves—if an AI system causes harm.
Furthermore, the committee warned that AI increases the threat of fraud and the spread of unregulated, misleading financial advice online. From a systemic perspective, the rising dependence on AI heightens cybersecurity risks and creates an over-reliance on a handful of large US tech firms for essential services.
Perhaps most alarmingly, the report suggests AI could amplify 'herd behaviour' in the markets. If many firms rely on similar AI models, they could make the same financial decisions simultaneously during an economic shock, potentially triggering or worsening a financial crisis.
Calls for Urgent Regulatory Action
Meg Hillier MP, chair of the Treasury committee, expressed deep concern, stating: "Based on the evidence I've seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident and that is worrying." She emphasised that it is the duty of the authorities to ensure safety mechanisms keep pace with technological change.
The committee is now urging immediate action. Its recommendations include the development of new stress tests to assess the City's resilience against AI-driven market shocks. It has also called on the FCA to publish practical guidance by year's end, clarifying how consumer protection rules apply to AI and defining lines of accountability.
In response, an FCA spokesperson said the regulator had already done "extensive work" on AI safety and would review the report carefully. The Treasury said it aimed to "strike the right balance" between risk and opportunity, and that it was working with regulators and appointing new AI champions for financial services. The Bank of England said it had taken active steps to assess AI risks and would consider the recommendations in full.
Despite these assurances, the committee's central message remains clear: "By taking a wait-and-see approach to AI in financial services, the three authorities are exposing consumers and the financial system to potentially serious harm."