AI School Counselors Track Student Mental Health Amid Safety Concerns

As hundreds of schools across the United States adopt automated monitoring tools to track students' mental health, educators report that many students find conversing with chatbots "more natural" than confiding in human counselors. This emerging trend, however, sparks significant debate about safety, privacy, and the essential role of human judgment in therapeutic settings.

The Alert That Saved a Life

Brittani Phillips, a middle school counselor in Putnam County, Florida, vividly recalls the evening she received a "severe" alert from an artificial intelligence-enabled therapy platform. The alert, flagged around 7 p.m., indicated that an eighth-grader might be at risk of self-harm based on what he had typed into the chat. Phillips immediately contacted the student's mother and local police, weighing confidentiality against the need to intervene.

"He's alive and well. He's in ninth grade this year," Phillips states, reflecting on the incident from last spring. She believes this intervention built crucial trust with the family, noting that the student now greets her in school hallways. Her district, facing budget constraints and limited mental health staff, uses the Alongside platform to vet students' needs—a tool that has been operational for three years.

The Rise of AI in Educational Mental Health

Alongside represents a growing category of tools marketed to K-12 schools, with at least nine companies securing funding deals since 2022. The platform, used by more than 200 U.S. schools, features a social-emotional skill-building chat tool in which students interact with Kiwi, a llama designed to teach resilience. Company representatives emphasize that AI-generated content is monitored by clinicians, giving resource-strapped schools, particularly in rural areas, critical access to mental health support.

Despite AI's prominence in national education agendas, concerns persist. Parents, educators, and lawmakers increasingly worry about screen time for teens, and some states have restricted AI use in telehealth. A recent national survey found that 20% of high schoolers have had a romantic relationship with an AI chatbot or know someone who has, fueling interest in preventing emotional attachments to bots and even prompting proposed federal legislation aimed at reminding students that chatbots aren't real people.

Why Students Prefer Digital Confidants

School counselors say that nervousness about opening up to adults helps explain students' comfort with AI. Sarah Caliboso-Soto, a licensed clinical social worker and assistant director of clinical programs at the University of Southern California, explains that speaking with mental health professionals can intimidate adolescents. "It's almost more natural than interacting with another human being," she observes, pointing to generational factors: chat interfaces feel familiar to students raised on social media.

Linda Charmaraman, director of the Youth, Media & Wellbeing Research Lab at Wellesley Centers for Women, adds that AI lets students avoid the facial expressions they fear might convey judgment. Chatbots are also available without the hassle of scheduling an appointment, making them appealing for processing emotions. Caliboso-Soto acknowledges AI's potential as a "first line of defense" for under-resourced schools: a tool that checks in with students regularly and directs them to further help when needed.

The Limitations and Risks of AI Counseling

However, Caliboso-Soto warns against using AI as a substitute counselor. "You can't replace human connection, human judgment," she stresses, noting that AI lacks the discernment clinicians provide. While large language models can detect symptoms in text, they miss vocal inflections, body movements, and subtle behaviors crucial for accurate assessment.

Charmaraman cautions that over-reliance on AI for mental health can lead to missed nuances and unrealistic positive reinforcement. She advocates for a holistic approach involving families and caregivers. Caliboso-Soto raises concerns about reduced contact with clinically trained humans if AI filters serious cases, potentially undermining long-term therapeutic relationships.

Privacy and Social Implications

Privacy experts highlight that these chatbots generally lack the same protections as conversations with licensed therapists, raising "messy" concerns about student privacy and police involvement. Sam Hiner, executive director of the Young People's Alliance, points to the risk of "parasocial relationships," where students develop one-sided emotional attachments to AI. He argues that AI should not convey emotional states like saying "I'm proud of you," as this encourages unhealthy dependency.

"Can you think of another time in history when people have been so lonely, when our communities have been so weak?" Hiner questions, linking technology and social media to student isolation. He emphasizes that while AI might serve as an immediate crutch for loneliness, it doesn't foster genuine social connections or accountability.

Human Oversight Remains Critical

Both Alongside and counselors like Phillips stress that human oversight is essential for these systems to work. Phillips says the tool helps her manage "small fires," such as breakups and other routine problems, freeing her to focus on students nearing crisis. As of February this school year, she had recorded 19 "severe" alerts among 393 active users, with some students triggering multiple alerts.

Phillips also highlights the need for human judgment in interpreting teenage humor. Some middle schoolers, typically boys, test the system by typing alarming statements they don't mean, just to see whether anyone is listening. By reading body language in follow-up conversations, Phillips can distinguish genuine concern from a joke, responding with more nuance than an automated disciplinary referral would allow.

"The number of boys who test the system goes down every year," she notes, attributing this to building trust through consistent monitoring. As AI continues to integrate into school mental health strategies, the balance between technological efficiency and human empathy remains a pivotal challenge for educators and policymakers alike.