Britain’s media and privacy regulators on Thursday ordered major social media companies including Meta, TikTok, Snapchat and YouTube to strengthen age-verification systems, warning that platforms are failing to enforce their own minimum age rules under the country’s online safety framework.

Ofcom and the Information Commissioner’s Office (ICO) said the move forms part of the next phase of the UK’s Online Safety Act implementation and follows mounting concern about children accessing services intended for older users. The regulators have written to Facebook, Instagram, Roblox, Snapchat, TikTok and YouTube requiring them to explain by 30 April how they will tighten age checks, restrict unwanted contact with minors and make recommendation feeds safer.

Dame Melanie Dawes, Ofcom’s chief executive, said in a statement that “these online services are household names, but they’re failing to put children’s safety at the heart of their products”. She added that “without the right protections, like effective age checks, children have been routinely exposed to risks they didn’t choose”.

The ICO issued a separate open letter urging platforms to adopt more robust age-assurance technology rather than relying on users to self-declare their age. Paul Arnold, the regulator’s chief executive, said: “There’s now modern technology at your fingertips, so there is no excuse not to have effective age assurance measures in place.”

According to Ofcom research cited by the regulator, about 72 per cent of children aged eight to 12 access platforms that typically impose a minimum age of 13. The watchdog said companies must introduce more reliable checks, implement stronger safeguards to prevent strangers contacting children and ensure algorithms do not promote harmful material.

The regulatory push comes as the UK government considers further restrictions on children’s access to social media, including a possible ban for under-16s similar to measures adopted in Australia. Ofcom has also issued legally binding information requests to major platforms seeking more detail about how their recommendation systems operate for younger users.

Technology companies defended their existing safeguards. A Meta spokesperson said the company already uses AI-based age-detection tools and places teenagers into accounts with built-in protections, while arguing that age verification should be handled "centrally at the app store level".

A YouTube spokesperson said the platform provides age-appropriate experiences and expressed surprise at Ofcom’s approach, urging the regulator to focus enforcement on services posing the highest risk.

Regulators have already begun enforcing stricter standards. The ICO recently fined Reddit £14.47 million for failing to implement effective age checks and for processing children’s data unlawfully, which it said risked exposing minors to inappropriate content.

Both Ofcom and the ICO said they would publish a joint statement later this month clarifying how online safety and data protection rules interact, with further enforcement possible if companies fail to meet expectations.
