Technology Secretary Peter Kyle has described the UK’s internet laws as “very uneven” and “unsatisfactory”, the BBC has reported.

His comments follow calls from campaigners for a tightening of online safety rules.

Among them is Ian Russell, the father of Molly Russell, a 14-year-old British girl who took her own life after viewing harmful content online. In a letter to the minister, Russell said that the UK is “backtracking” on the matter.

He argued that the Online Safety Act, which aims to get tech giants to take greater responsibility for the content on their sites, must be corrected and said that a “duty of care” should be imposed on companies.

In his letter, Russell also argued that “ominous” changes in the tech industry put more pressure on the government to act, warning that Meta founder Mark Zuckerberg and X-owner Elon Musk are “at the forefront of a global recalibration of the industry”.

He also said that Zuckerberg’s recently announced plans to axe third-party fact-checking in favour of community notes, which are being rolled out first in the US, will allow the return of the kind of harmful content that his daughter Molly was exposed to.

The Conservative government originally planned to enforce the removal of legal but harmful content under the Online Safety Bill. Following harsh criticism, with several ministers arguing that the plan amounted to censorship, it was dropped. This outcome led Kyle to say, in an interview with BBC journalist Laura Kuenssberg over the weekend, that he had inherited “a landscape where we have a very uneven, unsatisfactory legislative settlement”.

While he didn’t announce any plans to change the legislation, the technology secretary said he was open on the issue, adding that in the coming months ministers will obtain powers to ensure that online platforms provide age-appropriate content.

Companies that do not comply with the rules will face “very severe” penalties, he told the news broadcaster.

Earlier this week, Zuckerberg announced the introduction of a new system that allows users to add ‘community notes’ to social media posts that they consider false. The move marks a dramatic shift from the approach Meta adopted in 2016, under which third-party moderators performed fact checks.

The social media giant said that it will allow more speech by “lifting restrictions on some topics that are part of mainstream discourse”, instead focusing its enforcement practices on “illegal and high-severity violations.”

The company claims that expert fact-checkers on the platform have been influenced by personal biases, which it says has resulted in legitimate political speech and debate being fact-checked, with its system then attaching “intrusive labels and reduced distribution”.


A Meta spokesperson told the BBC that there would be no change to how it treats content that encourages suicide, self-injury, and eating disorders and that the company would continue to use its automated systems to scan for “high-severity content.”

The Online Safety Act places new duties on social media companies and search services to reduce the risks and harm on their platforms, with key points including preventing children from accessing harmful and age-inappropriate content whilst providing clear ways to report issues.

Under the Act, providers will also have to implement systems to reduce illegal activity and remove illegal content, with independent regulator Ofcom overseeing compliance.
