Brits Embrace AI but Distrust Full Autonomy, EY Study Finds

Britons are rapidly integrating artificial intelligence into daily life, but remain wary of allowing it to make decisions independently, according to new research from EY that underscores a trust problem at the core of the UK's AI push.

AI Adoption vs. Trust

EY's AI sentiment index found that 74 percent of UK consumers have used AI in the past six months, spanning customer support, route planning, health queries, and financial services. However, only 14 percent said they would be comfortable relying on fully autonomous AI systems, revealing a clear divide between using AI as a tool and trusting it to act without human oversight.

The findings come as ministers and businesses invest heavily in AI adoption, with UK firms striving to demonstrate that the technology can deliver productivity gains rather than just experimental projects. Matthew Ringelheim, EY UK and Ireland AI leader, commented: “AI adoption in the UK is rapidly advancing, but trust is not keeping pace with technological capability. Whilst consumers are engaging with AI every day, many still want greater clarity about who is accountable when decisions are made on their behalf.”


Trust Lags Behind Usage

The survey shows that AI is already part of routine life for millions: 35 percent use it for customer support, 31 percent for travel routes, and 26 percent to help identify possible medical symptoms. Half of UK respondents had used AI in health or wellness contexts in the last six months, and 35 percent had used it for financial activities. Yet only 43 percent trust companies to manage AI-related data responsibly, and just 41 percent say the same of governments. Nearly three-quarters worry that AI systems could be hacked or breached.

Britain's AI sector has strong momentum, with startups valued at over £45 billion and billions flowing into the market. However, adoption inside large firms is increasingly slowed by governance, data quality, and accountability concerns. OneStream research recently found that nearly half of senior executives had made a material business decision using inaccurate or outdated data in the past year.

Building Trust Through Oversight

Ringelheim emphasized: “Trust must be embedded through strong data foundations, clear accountability and visible human oversight. Organisations that can clearly demonstrate how autonomy is governed, and how people retain meaningful control, will be best positioned to scale AI responsibly.”

The EY report also highlights a skills gap: only 23 percent of UK consumers said they had received significant AI training or education. Ringelheim added: “Training also better equips users to spot errors, challenge outputs and make more informed decisions on when to rely on AI and when to escalate human judgement.”
