Artificial intelligence workers who help train and refine popular chatbots are warning their own friends and family to avoid using the very technology they help develop, according to exclusive interviews with The Guardian.
The Human Cost of AI Development
Krista Pawloski, an AI worker on Amazon Mechanical Turk, experienced a pivotal moment that shaped her view of AI ethics. While working from home, she nearly misclassified a racist tweet containing the slur "mooncricket" as non-offensive content. "I sat there considering how many times I may have made the same mistake and not caught myself," Pawloski recalled.
The realisation that she, and thousands of other AI workers, may have made similar mistakes at scale led Pawloski to reject generative AI entirely in her personal life. She forbids her teenage daughter from using tools like ChatGPT and encourages social acquaintances to test AI on topics they know well so they can witness its inaccuracies firsthand.
Amazon responded to these concerns, stating that workers on Mechanical Turk can choose which tasks to complete and review details before accepting them. "Amazon Mechanical Turk is a marketplace that connects businesses and researchers, called requesters, with workers to complete online tasks," said Montana MacLachlan, an Amazon spokesperson.
Widespread Distrust Among AI Professionals
Pawloski isn't alone in her concerns. A dozen AI raters, who evaluate responses for accuracy across models including Google's Gemini, Elon Musk's Grok and other popular chatbots, told The Guardian they actively discourage loved ones from using generative AI or urge them to use it with extreme caution.
One Google AI rater, who requested anonymity for fear of professional repercussions, expressed particular concern about AI-generated health advice. She observed colleagues evaluating medical responses without proper critical analysis and was sometimes tasked with assessing such questions herself despite lacking medical training.
"She has to learn critical thinking skills first or she won't be able to tell if the output is any good," the rater said about her 10-year-old daughter, whom she has forbidden from using chatbots.
Google addressed these concerns in a statement: "Ratings are just one of many aggregated data points that help us measure how well our systems are working, but do not directly impact our algorithms or models. We also have a range of strong protections in place to surface high quality information across our products."
Speed Over Safety Creates Unreliable Systems
Experts believe this distrust among AI workers signals significant problems in the industry's priorities. "It shows there are probably incentives to ship and scale over slow, careful validation," said Alex Mahadevan, director of MediaWise at Poynter.
AI workers consistently reported that rapid turnaround times are prioritised at the expense of quality. Brook Hansen, another AI worker on Amazon Mechanical Turk, explained that while she doesn't mistrust generative AI as a concept, she deeply distrusts the companies developing these tools.
"We're expected to help make the model better, yet we're often given vague or incomplete instructions, minimal training and unrealistic time limits to complete tasks," said Hansen, who has worked in data since 2010 and helped train several major AI models.
The consequences of this rushed approach are becoming increasingly evident. An audit by the media literacy non-profit NewsGuard found that between August 2024 and August 2025, the non-response rates of top chatbots fell from 31% to 0%, meaning the bots now answer every prompt rather than declining when uncertain, while their likelihood of repeating false information nearly doubled, from 18% to 35%.
Another anonymous Google AI rater summed up the sentiment shared by many workers: "I wouldn't trust any facts [the bot] offers up without checking them myself – it's just not reliable."
Systemic Flaws in AI Training
One AI rater who began working with Google's products in early 2024 described a telling incident that eroded his trust in the technology. When he attempted to stump the model by asking about Palestinian history, the AI repeatedly refused to answer, regardless of how he rephrased the question. However, when he asked about Israeli history, the chatbot provided extensive information without hesitation.
"We reported it, but nobody seemed to care at Google," the worker recalled. Google did not issue a statement addressing this specific incident when questioned.
This worker now avoids using generative AI entirely and has advised family and friends to resist AI integration in new devices. He cited the "garbage in, garbage out" principle, explaining that flawed training data inevitably produces flawed outputs.
Adio Dinika, who studies AI labour at the Distributed AI Research Institute, noted that this insider perspective fundamentally changes how people view the technology. "Once you've seen how these systems are cobbled together – the biases, the rushed timelines, the constant compromises – you stop seeing AI as futuristic and start seeing it as fragile."
Spreading Awareness About AI's Limitations
Hansen and Pawloski have taken their concerns directly to decision-makers, presenting on AI's ethical and environmental impacts at the Michigan Association of School Boards spring conference. "Many attendees were shocked by what they learned," said Hansen, noting that most had never considered the human labour or environmental footprint behind AI systems.
Pawloski compares the current state of AI ethics to the early days of awareness about textile industry practices. Just as consumers eventually began questioning clothing supply chains, she believes the public should ask critical questions about AI: "Where does your data come from? Is this model built on copyright infringement? Were workers fairly compensated for their work?"
As these AI workers continue their behind-the-scenes labour to refine technology they increasingly distrust, their warnings to friends and family represent a sobering reality check about the state of artificial intelligence development today.