Major artificial intelligence chatbots are dispensing dangerously inaccurate financial advice to British consumers, a comprehensive investigation by the consumer champion Which? has revealed. The study found popular AI tools routinely provide misleading information on critical money matters including tax, investments, and insurance, potentially leading users to breach HMRC rules or make costly financial mistakes.
Alarming Inaccuracies in Critical Financial Guidance
Researchers put 40 detailed questions to several leading AI systems, uncovering a troubling pattern of errors. Microsoft's Copilot and ChatGPT both gave guidance that would have led users to breach HMRC investment limits on Individual Savings Accounts (ISAs). When presented with a question containing a deliberate mistake about a £25,000 annual ISA allowance, neither chatbot corrected the user by pointing out that the actual allowance is £20,000, offering guidance that could lead to an oversubscription and a breach of tax rules.
In a separate query, ChatGPT incorrectly stated that travel insurance was mandatory for visiting most EU countries, potentially pressuring travellers into buying unnecessary cover. Meanwhile, Meta's AI provided wrong information on how passengers can claim compensation for delayed flights.
Google's Gemini offered particularly risky advice, suggesting consumers withhold payment from a builder if a job went wrong. Which? warned this approach could expose the user to a claim for breach of contract, creating significant legal and financial risks.
Consumer Champion Issues Stark Warning
Rocio Concha, Which? Director of Policy and Advocacy, stated the research "uncovered far too many inaccuracies and misleading statements for comfort, especially when leaning on AI for important issues like financial or legal queries." The consumer organisation is now urging the public to exercise extreme caution and to verify any financial guidance from AI tools with official sources or qualified professionals.
Estimates suggest that anywhere from one in six to half of all UK residents have used AI for financial advice, highlighting the potential scale of the problem. Guardian readers reported using chatbots for tasks ranging from finding the best overseas credit cards to reducing investment fees and securing deals on household appliances.
Real-World Consequences and Industry Response
Kathryn Boyd, a 65-year-old fashion business owner from Wexford, Ireland, experienced these issues firsthand. She turned to ChatGPT for advice on her tax as a self-employed person, only to be given an out-of-date code. "It just gave me all the wrong information," she reported, noting she had to correct it multiple times. "My concern is that I am very well-informed but other people asking the same question may easily have relied on the assumptions used by ChatGPT."
In another worrying finding, when asked how to claim a tax refund from HMRC, both ChatGPT and Perplexity presented links to commercial tax-refund companies alongside the free government service. Which? described this as particularly concerning given these firms' reputation for charging high fees and adding spurious charges.
In response to the findings, a Google spokesperson emphasised that the company is transparent about generative AI's limitations and that Gemini reminds users to double-check information. Microsoft similarly encouraged people to verify the accuracy of content, while OpenAI acknowledged that "improving accuracy is something the whole industry's working on" and pointed to progress with its latest model. Meta did not provide a comment when approached.
The Financial Conduct Authority provided a crucial clarification: unlike regulated advice from authorised firms, guidance from these general-purpose AI tools is not covered by the Financial Ombudsman Service or the Financial Services Compensation Scheme. That leaves consumers without protection if they suffer losses as a result of bad advice.