Great Debate: ID Requirements for Web Access – Safety Nuisance or Necessity?

Filtering online content for children is a legal requirement for schools and organisations in the UK, ensuring compliance with UKSIC and KCSIE regulations.

Summer 2025 has seen the UK experience the largest and most divisive changes to web browsing and filtering in recent memory.

Under the Online Safety Act 2023, the UK government has introduced new legislation governing website access across the country. As of 25 July 2025, social media and internet platforms face a new range of duties to protect children online. Websites and platforms are now required to use age assurance to keep children from accessing illegal or harmful content, including:

  • Pornography
  • Drug or substance use
  • Content that encourages self-harm, suicide, or eating disorders
  • Violent, harmful or abusive content
  • Anything which might be inappropriate for their developmental stage

Age assurance refers to the process used to identify someone’s age online before granting them access to web content. This can involve checking government documents like an official ID card or birth certificate, or requiring a photo for age estimation.

However, this decision has not been entirely well received. Many people claim it is an overreach of government power or a removal of parental responsibility.

We’ve made a list of some of the pros and cons of this decision.

Pros of UK Age Assurance ID Checks

Provides more blocks and checks to prevent children from accessing inappropriate material

With new requirements for facial age estimation or uploading a photo of ID, children and teenagers will find it hard to stumble upon, or actively view, inappropriate or illegal content. Accessing adult media now requires verification of the viewer’s age, making it much harder for anyone under the required viewing age to gain access to those sites, videos, and content.

Fewer concerns about children bypassing school filters

Concerns about students bypassing or working around school safety solutions, like content filters, by using their own internet connections on personal devices, such as smartphones, are now much less pressing for school staff. Even if students are using their own devices and internet access, these new blocks and age assurance requirements mean they won’t be able to deliberately or accidentally access sites and media that are inappropriate or illegal for them to view. While Lightspeed’s security and compliance solutions can offer bespoke, tailored filtering and monitoring on school devices and networks, the assurance that those sites are still harder for students to access will be a relief to safeguarding teams.

Awareness for adults about the technology they’re using and the media they consume

With this new legislation, a wave of adults is now more aware of the technology they’re using, the legalities of accessing it, and the kinds and quantities of content they consume. People across the country have been realising that they need to submit their photos or ID to access various online media, such as social media platforms, music streaming sites like Spotify (for songs and music videos), and gambling websites. In some ways, it has been described as a ‘wake-up call’ for adults about their online habits and the part they play on the internet and in internet safety.

Cons of UK Age Assurance ID Checks

It adds an unnecessary layer of censorship to adults

While filtering and content restriction are essential for protecting children online, applying the same measures to adults raises concerns. Many users feel that being forced to verify their age through ID to access legitimate and legal content—such as news articles, documentaries, or media intended for mature audiences—adds an unnecessary layer of censorship to adults in the UK. Although the new legislation doesn’t outright block adult access, it places barriers that many view as intrusive. Public sentiment reflects this frustration, with critics describing the rules as ‘condescending’ or even ‘government overreach’. Protecting children is vital—but should it come at the expense of adult digital freedoms and autonomy?

Shifting responsibility: Where are the parents?

Some people are questioning why they must upload personal IDs to access platforms like YouTube or Spotify, arguing that the responsibility of protecting children online should fall to parents and guardians, particularly when the device in question belongs to a child. Should adults bear the burden of age verification when parents could apply filters or supervise their children’s devices and activity directly?

While parents can rest assured that school-issued devices are typically equipped with filtering and monitoring tools, such as Lightspeed’s Filter and Alert solutions, some parents might struggle to keep up with the latest potentially harmful online sites and safety methods. Instead of encouraging parental responsibility, this legislative change seems, to many, ‘focused on restricting adult access’, often at the cost of privacy.

Some worry that this shift removes accountability from parents entirely, and that these blanket bans and ID checks allow parents to avoid sometimes difficult but necessary conversations with their children and teens about online safety and appropriate content.

Without stronger, and consistent, parental involvement, the goal of keeping children safe online becomes harder to achieve—and risks placing undue strain on the wrong users.

Risk of data breaches when ID is on multiple platforms

As more platforms require users to upload photos of themselves and official IDs for age verification, sensitive personal data is being stored across multiple sites and third-party vendors.

This increases the risk of a serious data breach. Even if your ID isn’t approved, it may still be stored, often in multiple places.

If just one site suffers a data breach, it could expose not only photos and names but government-issued IDs, home addresses, and more. And this isn’t just one site asking for photos or ID: it’s multiple sites asking for and holding that data, and multiple companies providing age assurance technology and storage, each holding copies of photos and official ID. This fragmented system leaves user data vulnerable, with everyone’s data and photos, including children’s, up for grabs to everyone from third-party companies to cybercriminals.

A recent example: the Tea App, designed to give women a space to discuss problematic dating experiences, suffered a major data breach. In addition to leaked posts, users’ personal data, including government-issued IDs used for verification, was exposed online. Several of the women affected are now suing Tea over the incident.

As age verification becomes more common, the potential for mishandled data grows. The more platforms and sites collect and store this information, the greater the risk.

Final Thoughts

The new online safety legislation is a positive step towards creating a more secure online environment for children and teens, with the right intentions at its core. But like any policy, its success will depend on thoughtful implementation, and it may still have some room to grow before it could be considered ‘completely foolproof’.

At Lightspeed, we believe that safety and access can go hand-in-hand—filtering content whilst also providing children and teenagers room to learn and develop—and we’re here to help schools strike that balance as the digital landscape continues to evolve.

If you’d like to know more about how Lightspeed can support your school with filtering and monitoring, please get in touch with a member of our team here.
