Key takeaways:
- AI-enabled harm is an emerging safeguarding risk in Scottish schools, with deepfakes, automated bullying, self-harm content and AI-driven grooming increasing the scale and speed at which harm can occur.
- AI literacy, proportionate filtering and proactive monitoring are now essential components of effective digital safeguarding and child protection frameworks.
- In an AI-enabled world, early visibility, strong governance and confident safeguarding leadership are critical to ensuring technology supports learning while protecting student wellbeing.
As we approach AI Week in Scotland from 9 March, it feels timely to pause and reflect on a critical question.
Not just how we use artificial intelligence in education, but how we understand and safeguard against AI-enabled harm.
From my perspective as Director of Safeguarding and Digital Transformation, AI is neither something to fear nor something to adopt without scrutiny. It is a powerful tool. Like any powerful tool, it requires boundaries, literacy, monitoring and leadership.
What Do We Mean by AI-Enabled Harm?
AI-enabled harm refers to risks that are created, amplified or accelerated through artificial intelligence systems. In a school context, this can include:
• AI-generated sexual images, including deepfake imagery of peers or staff
• The use of AI chatbots to rehearse or normalise self-harm ideation
• Automated bullying content or manipulated media
• Exposure to inaccurate or harmful advice presented as authoritative
• AI tools being used to generate violent or exploitative material
• Increased scale and speed of grooming through AI-assisted messaging
• Emotional dependency on conversational AI systems
What makes AI different is scale, realism and accessibility. Students do not need advanced technical skills. Many tools are free, anonymous and available on personal devices.
Why This Matters for Safeguarding in Scotland
Scottish schools operate within a strong child-centred framework through GIRFEC, the Children and Young People (Scotland) Act 2014 and national guidance on child protection. AI does not sit outside this framework. It sits within it.
The key safeguarding principles remain:
• Early identification
• Proportionate response
• Multi-agency collaboration
• Clear recording and governance
However, AI changes the environment in which risk presents.
For example, harm may not come from a peer in the playground but from a synthetic image generated in seconds. Disclosure may not begin with a trusted adult but through a chatbot conversation. Harmful narratives may be shaped by algorithmic systems that amplify extreme content.
This is where digital transformation and safeguarding must work together.
The Reality of AI Risks Today
• Many mainstream AI image tools can generate realistic human faces that do not belong to real people, making identification and accountability more complex.
• Convincing deepfakes can now be created from only a handful of publicly available photos.
• Some AI chatbots have previously generated harmful or self-harm related content before safety guardrails were improved.
• AI systems can unintentionally reinforce bias, which can affect how children from different backgrounds experience online spaces.
• Students are increasingly using AI for emotional support without understanding the data privacy implications.
• AI-generated misinformation spreads faster because it can be produced at scale with minimal effort.
• Many AI tools are not age-verified in the same way as regulated platforms.
• Synthetic nudification apps have become more accessible and harder to trace.
• AI can be used to automate harassment, for example by generating repeated, personalised abuse.
Awareness of these realities allows us to respond calmly and proportionately.
What Can We Do as Education Leaders?
First, strengthen digital literacy. AI literacy must now sit alongside online safety education. Students need to understand:
• How generative AI works
• What deepfakes are
• How algorithms shape what they see
• The permanence of digital content
• Data privacy and personal information risks
Second, ensure proportionate filtering. Schools should review filtering policies to ensure that known harmful AI tools are appropriately restricted while still allowing legitimate educational AI platforms to be used safely.
Third, implement proactive monitoring. Blocking alone is not safeguarding. Particularly with AI, risk often sits within typed interactions, prompts and behavioural indicators rather than simply website categories.
Fourth, support staff confidence. Teachers and safeguarding leads need training to recognise AI-related harm, respond appropriately and record concerns in line with local child protection procedures.
Fifth, engage parents and carers. Many AI risks emerge on personal devices outside school hours. Clear communication and signposting are essential.
How Lightspeed Systems Can Help
From a systems perspective, Lightspeed provides three critical layers of support.
Lightspeed Filter™ allows schools to apply age-appropriate, policy-aligned filtering across devices, helping manage access to high-risk AI tools while maintaining educational use.
Lightspeed Alert™ provides proactive monitoring of student activity across supported devices and cloud environments. It identifies indicators of self-harm, bullying, violence, drugs, sexual abuse and explicit content. In an AI context, this means schools can gain visibility into concerning language, prompts or patterns that may signal vulnerability or emerging harm.
Incidents are grouped into cases and prioritised by risk level, supporting designated safeguarding leads to apply professional judgement and intervene early.
Crucially, monitoring supports human review. It does not replace it.
In an AI-enabled world, early visibility is essential. Harm can escalate quickly. Context matters.
Key UK and Scotland Resources
For colleagues in Scotland, the following are important signposts:
• Education Scotland guidance on AI in learning and teaching
• South West Grid for Learning
• The UK Safer Internet Centre
• Childnet and Internet Matters parent resources
• The ICO guidance on children’s data and AI
Schools may also wish to review the Department for Education filtering and monitoring standards in England, as these expectations increasingly influence UK-wide practice.
A Balanced Approach
AI Week should not be about fear. It should be about informed leadership.
AI will continue to shape how young people learn, create and communicate. Our role is not to eliminate technology but to ensure it is used safely, ethically and responsibly.
That requires:
• Proportionate filtering
• Proactive monitoring
• Clear governance
• Education and awareness
• Confident safeguarding leadership
As Director of Safeguarding and Digital Transformation, I believe the most effective approach is neither restriction alone nor uncritical adoption. It is thoughtful integration, supported by visibility and guided by child-centred principles.
AI is here. Our responsibility is to ensure that children remain safe, supported and empowered within it.
If you have any questions about how Lightspeed Systems can help your school or MAT with AI-ready classrooms, contact our team to arrange a call.