AI's Unseen Influence: How Artificial Intelligence is Redefining Corporate Reputation
Artificial intelligence is fundamentally altering how organizations are perceived in the public eye. Despite its reliance on patchy primary sources, opaque information processing, and a tendency to fill gaps with creative but often inaccurate content, AI is becoming a significant force in shaping brand reputations. This shift is occurring silently, with companies now defined by systems they neither control nor fully understand.
The Growing Dependence on AI for Information
More than a quarter of adults now use generative AI weekly to seek information, a figure that rises to 40 percent among younger demographics. Each interaction represents a moment where an AI model makes judgments about a brand based on sources that can be narrow, outdated, unexpected, or simply incorrect. Large Language Models (LLMs) depend heavily on licensed news, corporate websites, and public platforms like Wikipedia and Reddit. However, they also surface older, unlicensed, or obscure material with surprising ease.
For instance, when queried about Marks & Spencer, ChatGPT cited a 17-year-old article from The Guardian alongside reporting from the Scottish Sun. It is a reminder that out-of-date reporting can harden into authoritative fact if it sits in the right corner of the internet. To an LLM, page 196 of a company's annual report from years ago is just as retrievable, and just as salient, as the chairman's letter on page three of the latest one.
The Risks of AI-Mediated Reputation
The opacity of AI systems makes reputation management risky. LLMs work by predicting the statistically likeliest next word; factual accuracy is not part of the objective. If a model draws on stale content or a badly edited Wikipedia page, it can present a false version of a company with complete confidence. As Anna Fishlock, head of digital at H/Advisors, noted, that false impression then becomes the first point of contact for journalists on deadline, investors scanning a sector, or job candidates weighing a career decision.
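To make that mechanism concrete, the toy sketch below (illustrative only, nothing like a production model) learns word-to-word frequencies from a three-sentence corpus and always emits the statistically likeliest continuation. Nothing in the procedure checks whether the output is true.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": learn which word most often follows each
# word in a tiny corpus, then generate by always picking the likeliest
# continuation. Factual truth never enters the objective.
corpus = (
    "the company reported record profits . "
    "the company reported a data breach . "
    "the company reported record profits ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word: str, steps: int = 4) -> str:
    out = [word]
    for _ in range(steps):
        if word not in follows:
            break
        # Greedy decoding: emit the most frequent successor, true or not.
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

# "record profits" outnumbers "a data breach" two to one, so the model
# confidently repeats the majority continuation regardless of reality.
print(generate("company"))  # -> company reported record profits .
```

Real models are vastly larger, but the same logic applies: whatever dominates the sources, accurate or not, dominates the answer.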
Because these systems do not always cite their sources clearly, brands may never know which materials are shaping public perceptions. The risks are not merely theoretical. Matt Rogerson, head of public policy at the Financial Times, described a case in which an LLM stitched together material from multiple sources into a share "buy" recommendation falsely attributed to the Investors Chronicle. No such recommendation existed, but it looked plausible to an uncritical reader. Rogerson regularly sees investment views assembled from published commentary and wrongly attributed to named FT journalists. LLMs can also amplify deepfake scams featuring well-known columnists, with the reputational damage flowing to the individuals and brands involved rather than to the AI companies.
The Regulatory Gap and Information Ecosystem
On regulation, the outlook is bleak. Former City minister Andrew Griffith compared the situation to social media, where lawmakers are still grappling with the issues two decades after the platforms emerged. Given the pace of AI development, he argued, regulators will not provide meaningful day-to-day protection for organizations any time soon. Institutions must manage the risks themselves rather than wait for government intervention, which will likely come only after significant AI-fuelled crises have already harmed people.
The fragility of the information ecosystem complicates matters further. Many news organizations, including the BBC, are blocking AI scrapers while payment questions remain unresolved. As Roa Powell of the IPPR pointed out, neither Copilot nor Google Gemini draws on the BBC, while The Guardian is ChatGPT's preferred source by a wide margin. As these barriers rise, models rely on fewer sources, defaulting to outlets with licensing deals or those still open to scraping. A handful of publishers thus become disproportionately influential while models go blind to quality journalism elsewhere, a vacuum that propagandists can exploit by seeding narratives designed to surface in AI outputs.
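This blocking is visible in public robots.txt files. As a minimal sketch, the snippet below uses Python's standard library to check whether a publisher currently permits well-known AI crawler user agents such as OpenAI's GPTBot and Common Crawl's CCBot; the site URLs are examples, and policies change as licensing deals are struck.

```python
from urllib import robotparser

# Publicly documented AI crawler user-agent tokens (subject to change).
AI_CRAWLERS = ["GPTBot", "CCBot", "Google-Extended", "*"]

def crawler_access(site: str) -> dict[str, bool]:
    """Report which AI crawlers a site's robots.txt currently permits."""
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{site.rstrip('/')}/robots.txt")
    rp.read()  # fetch and parse the live robots.txt
    return {ua: rp.can_fetch(ua, site) for ua in AI_CRAWLERS}

# Policies shift with licensing deals, so the only reliable answer is a
# live check; results will vary over time.
for site in ("https://www.bbc.co.uk", "https://www.theguardian.com"):
    print(site, crawler_access(site))
```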
Strategies for Organizations to Mitigate Risks
Organizations must proactively understand how they appear in AI systems today. Small inaccuracies, especially on high-visibility pages like Wikipedia, can snowball into systemic misrepresentation. Keeping corporate websites updated, investing in credible media coverage, and ensuring accurate, fresh information is available in structured formats all increase the likelihood that AI tools will surface the correct material.
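As one hypothetical illustration of a "structured format", the snippet below generates schema.org Organization markup of the kind many crawlers and AI pipelines can parse unambiguously; all the company details shown are placeholders.

```python
import json

# Hypothetical schema.org Organization record; every value below is a
# placeholder. Embedded in a page as <script type="application/ld+json">,
# it gives crawlers an unambiguous, machine-readable statement of basics.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Holdings plc",
    "url": "https://www.example.com",
    "description": "Current one-sentence description of the business.",
    "foundingDate": "1998",
    "sameAs": [  # links to authoritative profiles, e.g. Wikipedia
        "https://en.wikipedia.org/wiki/Example_Holdings",
        "https://www.linkedin.com/company/example-holdings",
    ],
}

print(json.dumps(org, indent=2))
```

Keeping a record like this current costs little and gives AI systems a fresh, authoritative anchor to draw on instead of page 196 of an old annual report.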
From executives to marketing teams, everyone must understand how AI sources information, how it distorts it, and how those distortions can be identified and corrected. As economist Roger Bootle put it, the right response is not to predict where AI will land but to build the capability to monitor, interrogate, and adapt to it: in his phrase, "investing in radars." Bootle was also optimistic about AI, noting that while it is imperfect and disruptive to employment, it will create new areas of activity and wealth, much as spreadsheets were followed by a surge in the number of accountants in the US.
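What "investing in radars" might look like in practice, sketched under stated assumptions: the snippet below uses the OpenAI Python SDK purely for illustration (any model API would serve), asking a model about a placeholder brand and flagging answers that repeat claims the company knows to be out of date.

```python
from openai import OpenAI  # assumption: OpenAI SDK; any model API would do

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BRAND = "Example Holdings plc"  # placeholder brand name
STALE_CLAIMS = ["record profits", "jane smith"]  # known-outdated facts

def radar_check() -> list[str]:
    """Ask the model about the brand; flag stale claims it repeats."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user",
                   "content": f"What do you know about {BRAND}?"}],
    )
    answer = (resp.choices[0].message.content or "").lower()
    return [claim for claim in STALE_CLAIMS if claim in answer]

if __name__ == "__main__":
    flags = radar_check()
    if flags:
        print("Stale claims surfaced; route to comms for correction:", flags)
    else:
        print("No known-stale claims detected this run.")
```

Run on a schedule across several models, a radar like this turns an invisible risk into a reviewable log.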
Andrew Griffith offered a reminder of past panics: ITV's launch in 1955 was expected to send broadcast news spiralling out of control, yet the system adjusted and worked well. Organizations that invest in understanding their AI presence today will be better positioned than those that wait. Regulation will eventually arrive, but the organizations that fare best will not be the ones waiting for it.
