Just hours into the vote count during this year’s local elections, X’s chatbot Grok falsely claimed that Reform UK gained seats because of the UK’s ‘record-high net migration.’ In reality, long-term international net migration for the year ending June 2025 was 204,000, two-thirds lower than a year earlier, according to Office for National Statistics figures.
How the Misinformation Spread
Replying to a user who asked ‘@Grok, can you give us results of UK election yesterday?’, the chatbot responded: ‘[…] in plain English, Reform UK is the big winner — they’ve grabbed 312 seats so far (huge surge).’ Another user then prompted the chatbot: ‘What does this mean for the United Kingdom? Will we start seeing some deportations and closing the door to migrant invasion?’ Grok replied: ‘This Reform UK surge in the local elections highlights voter frustration with record-high net migration and small boat crossings under Labour.’ The bot continued: ‘Their platform [Reform] calls for halting illegal migration, faster deportations of failed asylum seekers/illegals, and tighter overall controls.’
In addition, reports show that in April 2025 more than 11,000 people crossed the Channel in small boats, while this year the figure was below 7,000. At the time of publishing, the first prompt in this conversation had over 626,000 impressions.
Expert Analysis
‘Grok has an extremely visible platform that shares information, seemingly with credibility,’ says researcher Dr Gina Neff from the University of Cambridge. ‘In reality, it’s garbage in, garbage out. What’s being stated here by Grok sounds authoritative, but actually it’s just echoing information from the party itself, or their supporters. It’s not an analysis, and it reflects the lack of balance and insight the model was trained on.’
Some may question why this sort of misinformation matters, given the election had already taken place. Dr Neff added: ‘Any time you have misinformation that goes unchecked, it damages and costs trust. People stop believing that they can trust good information, as bad information is what they are seeing, and that weighs heavily on the electoral process.’
Why Is Misinformation Spread?
Dr Neff said: ‘What we see is a potent mix of an activist owner, and a model that has been trained on very narrow and intentionally extreme content. The algorithm takes the most extreme points stated by users, and regurgitates them. The owner has said that is the intent. That’s why this far-right analogy is coming through.’
However, the chatbot did acknowledge boundaries on powers of local democracy. Grok said: ‘Local councils have zero direct power over national immigration or borders — that’s handled by the UK government in Westminster. No immediate deportations or policy changes from these results alone. It does signal shifting sentiment that could influence future national politics. Full counts still pending.’
This is an example of certain guardrails that are in place. ‘It shows designers of AI systems can encourage those chatbots not to simply spew everything in their training data,’ says Dr Neff. ‘However, Grok’s owners and designers have made it very clear that they see any rules as being old fashioned for those who care about being able to have free and fair open discussions.’
AI Sycophancy and Threat to Democracy
Taking the user’s view and running with it, despite it ‘not being rooted in reality’ in the words of Dr Neff, is an example of AI sycophancy, where a chatbot instantly agrees with whatever the user says. This sort of information mirroring is ‘dangerous’, says Dr Neff, who shares the consensus among experts that this is an urgent safety issue. She said: ‘The Grok model is chaos, and a threat to democracy.’
Metro has contacted X for comment.