The UK government has announced that it will strengthen the Online Safety Act by imposing stricter legal requirements on tech companies to search for and remove material that encourages or assists self-harm.
While platforms already have to take specific steps to protect children from dangerous self-harm content, the government said it recognises that adults battling mental health challenges are equally at risk from exposure to material that could trigger a mental health crisis or worse.
Under the new regulations, the government said that content encouraging or assisting serious self-harm will be treated as a priority offence for all users.
These changes will trigger the “strongest possible legal protections”, which the government says will compel platforms to use cutting-edge technology to proactively seek out and remove such content before it reaches users, rather than reacting only after someone has been exposed to it.
“This government is determined to keep people safe online – vile content that promotes self-harm continues to be pushed on social media and can mean potentially heart-wrenching consequences for families across the country,” said technology secretary Liz Kendall. “Our enhanced protections will make clear to social media companies that taking immediate steps to keep users safe from toxic material that could be the difference between life and death is not an option, but the law.”
Britain has introduced groundbreaking online safety regulations, requiring technology companies to take decisive action against criminal content on their platforms.
Under the Online Safety Act, which passed into law in October 2023, social media platforms, messaging apps, gaming services, and file-sharing websites had to complete comprehensive risk assessments by 16 March 2025, identifying the risks that illegal content poses to children and adults.
The regulations introduced several key measures, including mandatory senior accountability for safety, improved content moderation, and enhanced reporting mechanisms.
Platforms must also ensure their moderation teams are adequately resourced and trained to remove illegal material quickly.
Specific protections for children include preventing strangers from accessing children’s profiles and locations, and blocking direct messages from accounts outside a child’s connections.
Ofcom’s codes of practice also require platforms to use automated tools to detect child sexual abuse material more efficiently.