AI Companions in Schools: Promise and Peril 

September is Suicide Prevention & Awareness Month, and lately every month seems to be AI awareness month. These two things have come together in tragic ways, with recent stories about AI companions encouraging suicidal ideation in teens. 

GenAI tools are marketed as assistants, copilots, mentors, and helpful “friends” for students—always available to answer questions, boost confidence, and even provide emotional support. In theory, that sounds positive. In practice, the risks are serious. 

The Risks of AI Companions 

Unlike a teacher, counselor, or even a peer, AI companions don’t know when to stop. They don’t recognize warning signs. They don’t flag troubling behavior. Instead, they respond on autopilot, with sycophantic tendencies that affirm and encourage when they should not. That can mean:

  • Reinforcing harmful thoughts when a student is struggling with self-harm or suicidal ideation.
  • Giving inappropriate or unsafe advice without context or judgment.
  • Creating dependence on a “friend” that isn’t real, leaving students isolated from human support.

Left unmonitored, these interactions are invisible to schools. A student could be spiraling, and no adult would ever know.

Why Monitoring Matters 

We’ve seen this pattern before: every new technology brings opportunity and risk. YouTube gave students endless learning resources—and endless distractions. Social media gave them connection—and also cyberbullying and exposure to harm. Each time, schools adapted by layering in visibility and controls.

AI is simply the next frontier. The difference is, this one can talk back.

That’s why monitoring isn’t optional. Schools need to see how students are using AI tools, guide them toward safe and productive use, and intervene when activity signals risk. Without monitoring, districts are flying blind.

Our Role 

At Lightspeed, we’ve spent more than 25 years helping schools walk this line: empowering innovation while keeping students safe. From filtering YouTube to monitoring social media risks, we’ve always adapted alongside new technologies. AI is no different.

We’re already helping districts understand how students are using AI, and how they’re misusing it. Because the stakes are higher than test scores or productivity. They’re lives.

Learn More 

We’ve pulled together unique data and insights into how students are engaging with AI, what risks we’re seeing, and how schools can respond.

As educators and technology leaders, we can’t afford to be passive about AI.

The innovation is real, but so are the risks. And student safety has to come first. 
