UK Firms Unaware of AI Data Use Overseas, Raising GDPR Concerns

Most large UK firms do not fully know what happens to their data once it is processed by AI systems abroad, raising compliance and security concerns as adoption continues to rise.

Survey reveals lack of oversight

According to a recent Harbr Data survey, 61 per cent of senior tech and data leaders at companies worth £100m or more lack a full understanding of how their data is used overseas. Meanwhile, nearly three-quarters admitted their data is transferred out of the UK via AI systems at least weekly, whilst a third reported daily flows.

Oversight lags behind adoption

This governance gap across UK plc reflects the speed at which AI tools have been integrated into business operations across every sector, often without the level of scrutiny traditionally applied to new IT systems. Employees are increasingly uploading sensitive company material into tools like ChatGPT, across email and messaging apps, usually with limited visibility into what happens to that data once it is processed.


“We’re potentially talking about a whole range of critical business processes that now leverage AI and are exchanging data”, said Anthony Cosgrove, Harbr Data co-founder. “People seem to have very little idea of how the data governance works”, added Matthew Hodgson, chief executive of Element, warning that sensitive information could be reused in model training.

Recently reported vulnerabilities in Slack AI and Microsoft 365 Copilot showed how systems could access or summarise data beyond their intended limits, even where safeguards were in place. Elsewhere, the survey found that while 70 per cent of respondents said they trust UK data handling and 62 per cent trust the EU, only 31 per cent expressed confidence in North America.

GDPR risk grows

Under UK GDPR, firms must be able to show where personal data is processed, on what basis and under what safeguards – which applies even if no breach has ever occurred. A total lack of visibility over AI-driven workflows can amount to non-compliance.

AI systems have made this harder: data entered into tools can easily pass through multiple jurisdictions, or be incorporated into underlying models, without a clear audit trail. If firms cannot map that journey, they may struggle to prove lawful processing.

The scale of this risk only increases as AI investment rises, with recent Hitachi Vantara research finding that 77 per cent of UK leaders think data complexity is growing too quickly. Meanwhile, 67 per cent cite data security as their top concern. At the same time, 85 per cent of firms now see data sovereignty as critical, driving a shift towards private cloud infrastructure for sensitive workloads.
