AI Workers Warn Public to Avoid AI Tools They Help Train
AI workers distrust models they help create

Artificial intelligence workers are sounding the alarm about the very systems they help create, with many actively warning friends and family to avoid using popular AI tools due to serious concerns about accuracy and safety.

The Human Cost of AI Development

Krista Pawloski, an AI worker on Amazon Mechanical Turk, experienced a defining moment that shaped her scepticism about AI. While working from home classifying tweets as racist or not, she encountered one containing the term "mooncricket" - a word she nearly marked as non-offensive before discovering it was a racial slur against Black Americans.

"I sat there considering how many times I may have made the same mistake and not caught myself," Pawloski recalled. The realisation of how many offensive materials might have slipped through due to human error made her reconsider the entire AI development process.

After years working with AI models, Pawloski now completely avoids generative AI products personally and has banned her teenage daughter from using tools like ChatGPT. "It's an absolute no in my house," she stated firmly.

Widespread Distrust Among AI Professionals

Pawloski isn't alone in her concerns. A dozen AI raters - workers who check AI responses for accuracy - told investigators they actively discourage loved ones from using generative AI after witnessing how frequently these systems produce incorrect or harmful information.

These workers help train various AI models, including Google's Gemini, Elon Musk's Grok and other popular chatbots. One Google AI rater, who requested anonymity fearing professional repercussions, expressed particular concern about AI-generated health advice.

"She has to learn critical thinking skills first or she won't be able to tell if the output is any good," the rater said about forbidding her 10-year-old daughter from using chatbots.

Another worker described the fundamental problem as "garbage in, garbage out" - explaining that if flawed or incomplete data trains AI systems, the outputs will inevitably reflect those same flaws.

Speed Over Safety

Experts believe this insider distrust signals a much larger systemic issue. Alex Mahadevan, director of MediaWise at Poynter, noted that when the people creating AI are the most sceptical, it suggests companies prioritise rapid deployment over careful validation.

"It shows there are probably incentives to ship and scale over slow, careful validation, and that the feedback raters give is getting ignored," Mahadevan explained.

Brook Hansen, another Amazon Mechanical Turk worker, emphasised that the problem isn't necessarily with AI as a concept, but with how companies develop and deploy these tools. "We're expected to help make the model better, yet we're often given vague or incomplete instructions, minimal training and unrealistic time limits," Hansen revealed.

Recent data supports these concerns. An audit by the media literacy non-profit NewsGuard found that while chatbots' non-response rates dropped from 31% in August 2024 to 0% in August 2025, their likelihood of repeating false information nearly doubled over the same period, from 18% to 35%.

Creating Public Awareness

AI workers are now taking matters into their own hands by educating the public about the technology's limitations. Hansen and Pawloski recently presented at the Michigan Association of School Boards spring conference, shocking attendees with revelations about AI's human labour and environmental impacts.

Pawloski compares the current state of AI ethics to the early days of the textile industry, before consumers learned about sweatshop conditions. "Where does your data come from? Is this model built on copyright infringement? Were workers fairly compensated for their work?" she asks, encouraging people to demand transparency.

As one anonymous AI tutor summarised the industry's inside joke: "We joke that [chatbots] would be great if we could get them to stop lying."