Tech News Vision
Apps

Meta Reportedly Planning to Replace Human Reviewers With AI for Risk Assessment

By News Room · 2 June 2025 (Updated: 2 June 2025)

Meta is reportedly planning to shift a large portion of risk assessments for its products and features to artificial intelligence (AI). According to the report, the Menlo Park-based social media giant is considering letting AI handle approvals of features and product updates, a task that has so far been handled exclusively by human evaluators. The change would reportedly affect the addition of new algorithms and safety features, as well as how content is shared across its platforms, and is expected to speed up the rollout of new features, updates, and products.

According to an NPR report, Meta is planning to automate up to 90 percent of its internal risk assessments. The publication claimed to have obtained company documents that detail the possible shift in strategy.

So far, any new feature or update for Instagram, WhatsApp, Facebook, or Threads has had to go through a group of human experts who reviewed how the change would affect users, whether it would violate their privacy, and whether it could harm minors. These evaluations, reportedly known as privacy and integrity reviews, also assessed whether a feature could lead to a rise in misinformation or toxic content.

With AI handling the risk assessment, product teams will reportedly receive an “instant decision” after they fill out a questionnaire about the new feature. The AI system is said to either approve the feature or provide a list of requirements that need to be fulfilled before the project can go ahead. The product team then has to verify that it has met those requirements before launching the feature, the report claimed.

As per the report, the company believes shifting the review process to AI will significantly increase the release speed for features and app updates and allow product teams to work faster. However, some current and former Meta employees are reportedly concerned about whether this benefit will come at the cost of strict scrutiny.

In a statement to the publication, Meta said that human reviewers were still being used for “novel and complex issues” and AI was only allowed to handle low-risk decisions. However, based on the documents, the report claims that Meta’s planned transition includes letting AI handle potentially critical areas such as AI safety, youth risk, and integrity — an area said to handle items such as violent content and “spread of falsehood.”

An unnamed Meta employee familiar with product risk assessments told NPR that the automation process started in April and has continued throughout May. “I think it’s fairly irresponsible given the intention of why we exist. We provide the human perspective of how things can go wrong,” the employee was quoted as saying.

Notably, earlier this week, Meta released its Integrity Reports for the first quarter of 2025. In the report, the company stated, “We are beginning to see LLMs operating beyond that of human performance for select policy areas.”

The social media giant added that it has started using AI models to remove content from review queues in scenarios where it is “highly confident” that the content does not violate its policies. Justifying the move, Meta added, “This frees up capacity for our reviewers allowing them to prioritise their expertise on content that’s more likely to violate.”

© 2025 Tech News Vision. All Rights Reserved.
