Eurostar's AI Chatbot Exposed Security Flaws, Ignored Warnings
Eurostar AI Chatbot Security Flaws Left Customers Exposed

Eurostar's much-publicised new AI chatbot, launched to offer smarter customer assistance, was shipped with significant security weaknesses that could have exposed users, an investigation by City AM reveals. The flaws, discovered by security experts, were reported to the company in June 2025 but were met with silence and procedural confusion for weeks before being fixed.

Guardrails Easily Bypassed in AI System

The most critical issue identified by researchers at Pen Test Partners was a fundamental flaw in the chatbot's guardrails. While the system appeared to enforce strict content controls, only the most recent message in any conversation was properly validated on the server side. All previous messages in the chat history could be altered on the user's device and then fed back into the underlying large language model (LLM) as trusted context.

This vulnerability meant a malicious actor could send a harmless final message to pass security checks, while having smuggled a manipulative or malicious prompt earlier in the exchange. Once these digital guardrails were bypassed, the chatbot could potentially be steered into revealing internal system details, such as its core instructions or underlying operational information.

A Troubling Disclosure Process

If the technical flaws were alarming, the subsequent process of reporting them proved equally concerning. The vulnerabilities were first responsibly disclosed to Eurostar via its official vulnerability disclosure email address on 11 June 2025. After receiving no acknowledgement, researchers followed up on 18 June, again without response.

Following nearly a month of silence, the issue was escalated privately via LinkedIn to Eurostar's head of security, who directed the researchers to use the company's official disclosure programme—the very channel they had already used. During this period, Eurostar changed or outsourced its disclosure process, causing the original reports to be lost. At one point, the company even levelled accusations of blackmail against the persistent researchers.

Old Security Problems in a New AI Wrapper

Despite its cutting-edge AI front end, the chatbot's weaknesses were rooted in familiar web and API security failures. Other issues included conversation and message IDs that weren't properly verified, and an HTML injection flaw that allowed JavaScript to run within the chat window. While this was deemed low-risk initially, it presented a plausible path to more serious problems if chat logs were ever replayed or shared.

Eurostar has stressed that no customer data was ever at risk, asserting the chatbot was an experimental service not connected to customer accounts or internal platforms. A spokesperson stated, "All data is protected by a customer login." The company claims any issues identified during early testing were addressed promptly and that it maintains a robust cybersecurity framework.

However, this incident underscores a wider risk: as businesses rush to embed generative AI into consumer-facing products, a shiny new interface can mask old-fashioned, yet critical, security flaws, creating a dangerous false sense of security. The flaws in Eurostar's system have now been fixed, but the episode serves as a cautionary tale for the industry.