Police Continue Using AI Tool Despite Inaccurate Evidence in Football Fan Ban

An exclusive investigation has revealed that at least 21 police forces across England are continuing to use the Copilot AI tool, even after West Midlands Police (WMP) disabled access following a significant error. WMP's decision came after an AI hallucination was used to justify banning fans of Maccabi Tel Aviv from attending a match against Aston Villa in November, based on evidence of an event that never occurred.

Controversial Use of AI in Policing Decisions

The controversy escalated when West Midlands Police admitted, after initial denials, that a Copilot-generated hallucination had been included in an intelligence document. This document was used to exclude Israeli football fans, leading to one of the biggest policing controversies of the past year. The incident forced the resignation of WMP's chief constable, Craig Guildford, under government pressure.

MPs on the Home Affairs Select Committee have expressed fresh concerns, noting that Copilot produced inaccurate claims about past disorder around a contentious Maccabi match in Amsterdam in 2024. This highlights broader issues with the technology's reliability and the lack of robust testing before deployment in critical policing operations.

Disjointed Approach and Lack of Coordination

Despite the high-profile case, many police forces maintain a disjointed approach to AI usage. Only eight forces across the UK, including those in Scotland and Northern Ireland, have explicitly banned Copilot from investigations. This inconsistency underscores a lack of coordinated national policing strategy regarding AI tools.

West Midlands Police and Crime Commissioner Simon Foster voiced significant concerns, stating, "I am concerned about the way in which WMP was utilising AI... there were some significant concerns, shortcomings, and failures around ensuring there was a proper regulatory management of the use of AI." He emphasized the need for lawful, reasonable, and ethical use with a proper regulatory regime to prevent misuse and rogue results.

Defense and Cautious Approaches from Other Forces

In contrast, some police forces defend the use of AI. Greater Manchester Police, England's second-largest force, stated they have a robust AI policy to speed up processes and allow officers more time on the streets. West Yorkshire Police added that they provide education and guidance to ensure responsible use.

However, others adopt a more cautious stance. Cleveland Police does not block Copilot but insists it is not used for intelligence or investigations. Police Scotland has been conducting a limited trial since October, focusing on corporate processes such as HR information retrieval rather than operational policing, to balance ethical and human rights considerations.

Industry and Regulatory Responses

Microsoft, the developer of Copilot, defended its software, highlighting differences between the workplace-focused Microsoft 365 Copilot and the free consumer chat service. A spokesperson said, "Microsoft 365 Copilot is grounded in an organisation's own data, security, and access controls... We continuously evaluate and improve our services and encourage organisations to use Copilot within their own governance and review practices."

The National Police Chiefs' Council expressed confidence that AI's benefits outweigh its risks if the technology is used correctly and securely, with AI experts advising forces on the most appropriate ways to use Copilot. Chris Todd, chair of the National Police Data and Analytics Board, echoed this, stating that AI should support human decisions, not make them, and that it is benefiting communities by joining up data and reducing delays.

This ongoing situation raises critical questions about the integration of AI in law enforcement, emphasizing the need for stricter oversight, uniform policies, and thorough testing to prevent future errors and ensure public trust.