Tech News Vision
Apps

Meta Reportedly Planning to Replace Human Reviewers With AI for Risk Assessment

By News Room · 2 June 2025 (Updated: 2 June 2025)

Meta is reportedly planning to shift a large portion of risk assessments for its products and features to artificial intelligence (AI). According to the report, the Menlo Park-based social media giant is considering letting AI handle approvals of features and product updates, which until now have been handled exclusively by human evaluators. The change will reportedly affect new algorithms, new safety features, and how content is shared across its platforms, and is expected to speed up the rollout of new features, updates, and products.

According to an NPR report, Meta plans to automate up to 90 percent of its internal risk assessments. The publication claims to have obtained company documents detailing the possible shift in strategy.

Until now, any new feature or update for Instagram, WhatsApp, Facebook, or Threads had to go through a group of human experts who reviewed how the change would impact users, whether it would violate their privacy, and whether it could harm minors. These evaluations, reportedly known as privacy and integrity reviews, also assessed whether a feature could fuel misinformation or toxic content.

With AI handling the risk assessment, product teams will reportedly receive an “instant decision” after they fill out a questionnaire about the new feature. The AI system is said to either approve the feature or provide a list of requirements that need to be fulfilled before the project can go ahead. The product team then has to verify that it has met those requirements before launching the feature, the report claimed.
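The flow described above — a questionnaire feeding an automated system that either approves a feature outright or returns a list of requirements the product team must certify before launch — can be sketched roughly as follows. This is a hypothetical illustration only, not Meta's actual system; every name and rule here (`Questionnaire`, `assess_risk`, the specific checks) is invented for the example.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the questionnaire-driven review flow described
# in the report. All names and rules are invented for illustration.

@dataclass
class Questionnaire:
    feature_name: str
    touches_minors: bool = False
    shares_user_data: bool = False
    changes_ranking_algorithm: bool = False

@dataclass
class Decision:
    approved: bool
    requirements: list = field(default_factory=list)

def assess_risk(q: Questionnaire) -> Decision:
    """Return an 'instant decision': approve outright, or list the
    requirements the product team must certify before launching."""
    requirements = []
    if q.touches_minors:
        requirements.append("Complete a youth-safety review")
    if q.shares_user_data:
        requirements.append("Document data flows for a privacy check")
    if q.changes_ranking_algorithm:
        requirements.append("Run an integrity/misinformation audit")
    return Decision(approved=not requirements, requirements=requirements)

decision = assess_risk(Questionnaire("new-share-sheet", shares_user_data=True))
print(decision.approved)      # → False: a requirement is outstanding
print(decision.requirements)
```

The design point the report attributes to Meta is that the questionnaire-to-decision step is synchronous, which is what makes the review "instant" compared with waiting on a human queue.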

As per the report, the company believes shifting the review process to AI will significantly increase the release speed for features and app updates and allow product teams to work faster. However, some current and former Meta employees are reportedly concerned about whether this benefit will come at the cost of strict scrutiny.

In a statement to the publication, Meta said that human reviewers were still being used for “novel and complex issues” and AI was only allowed to handle low-risk decisions. However, based on the documents, the report claims that Meta’s planned transition includes letting AI handle potentially critical areas such as AI safety, youth risk, and integrity — an area said to handle items such as violent content and “spread of falsehood.”

An unnamed Meta employee familiar with product risk assessments told NPR that the automation process started in April and has continued throughout May. “I think it’s fairly irresponsible given the intention of why we exist. We provide the human perspective of how things can go wrong,” the employee was quoted as saying.

Notably, earlier this week, Meta released its Integrity Reports for the first quarter of 2025. In the report, the company stated, “We are beginning to see LLMs operating beyond that of human performance for select policy areas.”

The social media giant added that it has started using AI models to remove content from review queues in scenarios where it is “highly confident” that the said content does not violate its policies. Justifying the move, Meta added, “This frees up capacity for our reviewers allowing them to prioritise their expertise on content that’s more likely to violate.”
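The triage Meta describes — removing items from human review queues only when the model is "highly confident" they do not violate policy — amounts to a confidence threshold over a classifier's scores. A minimal sketch, with an invented threshold value and invented scores (nothing here reflects Meta's actual models or numbers):

```python
# Hypothetical sketch of confidence-based queue triage. The threshold
# and all scores are invented for illustration.
HIGH_CONFIDENCE = 0.98

def triage(queue):
    """Split a review queue into items auto-cleared by the model and
    items kept for human reviewers.

    `queue` is a list of (item_id, p_non_violating) pairs, where
    p_non_violating is the model's confidence the item is compliant.
    """
    auto_cleared, for_humans = [], []
    for item_id, p_non_violating in queue:
        if p_non_violating >= HIGH_CONFIDENCE:
            auto_cleared.append(item_id)   # model highly confident: clear it
        else:
            for_humans.append(item_id)     # anything less certain stays human-reviewed
    return auto_cleared, for_humans

queue = [("post-1", 0.995), ("post-2", 0.70), ("post-3", 0.99)]
cleared, kept = triage(queue)
print(cleared)  # → ['post-1', 'post-3']
print(kept)     # → ['post-2']
```

Under this kind of scheme, raising the threshold keeps more items in front of human reviewers; the trade-off the company describes is freeing reviewer capacity for content that is more likely to violate.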
