Fresh warnings have been issued that the UK's vital financial services sector is not adequately protected against the dangers posed by artificial intelligence. The stark assessment comes from a parliamentary inquiry led by Dame Meg Hillier, Chair of the Treasury Select Committee.
Embracing AI Without Understanding the Dangers
The committee launched its investigation to answer three pivotal questions: who is using AI in finance, for what purposes, and what safeguards exist if the technology fails. Their findings reveal a sector rapidly adopting AI, with three-quarters of firms now using it in some form.
Initially deployed for administrative efficiency, AI is now taking on a broader role. It is used to analyse complex data and guide significant financial decisions, from processing insurance claims to conducting credit assessments. While this drive for innovation is celebrated, the report highlights a critical lack of parallel preparedness for worst-case scenarios.
A Fragile System in Need of Stronger Guardrails
The report concludes that current protective measures are insufficient. Dame Meg Hillier stated she is not confident the financial system is ready for a major AI-related incident. The core vulnerabilities identified are the known propensity of AI tools to "hallucinate", generating incorrect outputs, and the systemic risk posed by over-reliance on a small number of key technology providers.
A major outage at Amazon Web Services in late 2025, which disrupted operations at HMRC and Lloyds Banking Group, was cited as a real-world example of this fragility. Such events underscore how embedded a handful of tech firms have become in the nation's economic infrastructure.
Urgent Calls for Regulatory Action
The Treasury Committee has issued several key recommendations to build resilience. Firstly, it demands that the Bank of England and the Financial Conduct Authority (FCA) conduct AI-specific stress tests to gauge firms' ability to withstand widespread AI failures.
Secondly, regulators must provide clearer guidance on how AI use interacts with consumer protection rules and establish firm principles on accountability. Businesses need to know exactly who is responsible when AI systems go wrong.
The report also criticises the government's delay in using existing powers under the Critical Third Parties Regime. A year after its introduction, not a single company has been formally designated, denying regulators crucial oversight of the tech giants underpinning the financial network.
While acknowledging AI's potential to bolster the City of London's competitive edge, the committee's message is unequivocal: The Bank of England, the FCA, and the Treasury must act proactively. Without swift action to install robust guardrails, the stability of the UK's entire financial system is being exposed to significant and unnecessary risk.