Under Trump, AI Scientists Are Told to Remove ‘Ideological Bias’ From Powerful Models

By News Room · 14 March 2025

The National Institute of Standards and Technology (NIST) has issued new instructions to scientists who partner with the US Artificial Intelligence Safety Institute (AISI) that eliminate mention of “AI safety,” “responsible AI,” and “AI fairness” from the skills it expects of members and introduce a request to prioritize “reducing ideological bias, to enable human flourishing and economic competitiveness.”

The information comes as part of an updated cooperative research and development agreement for AI Safety Institute consortium members, sent in early March. Previously, that agreement encouraged researchers to contribute technical work that could help identify and fix discriminatory model behavior related to gender, race, age, or wealth inequality. Such biases are hugely important because they can directly affect end users and disproportionately harm minorities and economically disadvantaged groups.

The new agreement removes mention of developing tools “for authenticating content and tracking its provenance” as well as “labeling synthetic content,” signaling less interest in tracking misinformation and deepfakes. It also adds emphasis on putting America first, asking one working group to develop testing tools “to expand America’s global AI position.”

“The Trump administration has removed safety, fairness, misinformation, and responsibility as things it values for AI, which I think speaks for itself,” says one researcher at an organization working with the AI Safety Institute, who asked not to be named for fear of reprisal.

The researcher believes that ignoring these issues could harm regular users by possibly allowing algorithms that discriminate based on income or other demographics to go unchecked. “Unless you’re a tech billionaire, this is going to lead to a worse future for you and the people you care about. Expect AI to be unfair, discriminatory, unsafe, and deployed irresponsibly,” the researcher claims.

“It’s wild,” says another researcher who has worked with the AI Safety Institute in the past. “What does it even mean for humans to flourish?”

Elon Musk, who is currently leading a controversial effort to slash government spending and bureaucracy on behalf of President Trump, has criticized AI models built by OpenAI and Google. Last February, he posted a meme on X in which Gemini and OpenAI were labeled “racist” and “woke.” He often cites an incident where one of Google’s models debated whether it would be wrong to misgender someone even if it would prevent a nuclear apocalypse—a highly unlikely scenario. Besides Tesla and SpaceX, Musk runs xAI, an AI company that competes directly with OpenAI and Google. A researcher who advises xAI recently developed a novel technique for possibly altering the political leanings of large language models, as reported by WIRED.

A growing body of research shows that political bias in AI models can impact both liberals and conservatives. For example, a study of Twitter’s recommendation algorithm published in 2021 showed that users were more likely to be shown right-leaning perspectives on the platform.

Since January, Musk’s so-called Department of Government Efficiency (DOGE) has been sweeping through the US government, effectively firing civil servants, pausing spending, and creating an environment thought to be hostile to those who might oppose the Trump administration’s aims. Some government departments such as the Department of Education have archived and deleted documents that mention DEI. DOGE has also targeted NIST, the parent organization of AISI, in recent weeks. Dozens of employees have been fired.
