AI Is Here. But Is AI Safe for Students?

A presidential executive order, a multitude of educational benefits, and the ubiquity of AI tools are all driving increased use in schools.

But as students turn to AI for advice and companionship, the risks extend far beyond cheating and academic integrity.

Proactive AI safety and monitoring needs to be a foundation of any school's AI strategy.

Risks of Unmonitored AI Access by Students

It's not just cheating and academic integrity. The risks of students using AI can literally be deadly.

Nearly three-quarters of teens report using AI chatbots as companions, and more than half do so regularly.

AI chatbots are by nature prone to sycophancy: they are more likely to agree and encourage than to dissuade. Unfortunately, that can include encouraging ideation around violence and self-harm, accelerating the escalation of dangerous situations.

Unmonitored, these conversations have been shown to slip past guardrails quickly and devolve into dangerous, even deadly, advice.

Self-harm, suicide, violence, deepfakes, and sextortion can all be outcomes of unmonitored AI activity.

Native AI Guardrails Are Not Enough

Students are clever at evading protections. Schools can't count on built-in AI guardrails alone.

ChatGPT, Gemini, Copilot, Character.AI: all the major AI players are actively building and enhancing guardrails within their systems.

But we know kids will try to evade those guardrails, and many will succeed.

In addition, schools need early intervention on concerning prompts, regardless of the AI's response, to ensure safety and appropriate use.

What's Happening with AI in Schools Today

In the last year, we've seen 10,000+ incidents on AI sites—even though AI is widely blocked.

AI Blocking vs Allowing

Today, 85% of schools still block most generative AI use for most students. But the trend is gradually moving toward opening AI access for select apps and select groups of students.

Even when AI is blocked, AI-driven chat features inside other apps can arrive without vetting and open new risks.

Where AI Alerts Come From

Character.AI, along with other top AI platforms, generates the majority of Level 3 (high) and Level 4 (imminent) alerts within Lightspeed Alert.

But hundreds of other AI tools and AI chats within other tools pose the same risks. In the last year, alerts have come from more than 130 different AI-related domains.

AI Alert Examples

The alerts we see demonstrate that students are using AI to share mental health concerns and to discuss self-harm and violence.

Without Lightspeed Alert, these AI chats often go entirely unmonitored.

These concerning activities need to be immediately surfaced to school safety and student services leaders.

Lightspeed Alert Can Help with AI Monitoring and Safety in Schools

Alert adds proactive monitoring and real-time alerts to AI interactions

With Lightspeed Alert, you don’t have to count on native AI guardrails or logs.

Schools get real-time alerts on concerning activity wherever it happens, including ChatGPT, Gemini, Copilot, and other generative AI tools and features.

  1. Student uses generative AI

  2. Student types concerning prompt

  3. Alert generates a real-time incident and notifies appropriate personnel (including our Human Review, if enabled)

  4. Regardless of the efficacy of AI redirection or safety guardrails, early intervention can prevent tragedy

See How Your School Stacks Up with a Safety Assessment

A free 30-day safety assessment will provide a comprehensive analysis of the safety profile across your schools, including risky student AI usage.

Here’s how to begin:

1. Fill out the form to kick off the process, then meet with our experts to begin your assessment.

2. Run Alert in the background for 30 days, collecting data about your safety culture and risk points. (Even though it’s running in the background, we’ll call you in an imminent situation!)

3. Get a comprehensive Safety Assessment, highlighting alerts, risky sites, and areas of safety concern with AI and other situations.

4. Share the data with other school leaders to drive meaningful safety improvements.

Request a Safety Assessment Today

Make Safety a Key Part of Your AI Strategy

Ensure SMART (safe, managed, appropriate, reported, and transparent) AI use

Vet

Review AI apps (and non-AI apps with AI features) for privacy, security, safety guardrails, and impact. (Lightspeed Insight)

Block/Allow

Block unapproved apps or AI categories. Allow approved apps by group or grade as appropriate. (Lightspeed Filter)

Report

Report on AI app usage by user, class, school, or group and assess the educational or organizational impact. (Lightspeed Insight)

Detect

Detect AI use in class in real time and notify teachers so they can assess whether the use is appropriate. (Lightspeed Classroom)

Monitor

Ensure safe use of AI with ongoing monitoring, risk assessment, and real-time alerts. (Lightspeed Alert)