Tech News Vision
What's On

I Am Begging AI Companies to Stop Naming Features After Human Processes

By News Room · 6 May 2026

Anthropic just announced a new feature called “dreaming” at the company’s developer conference in San Francisco. It’s part of Anthropic’s recently launched AI agent infrastructure designed to help users manage and deploy tools that automate software processes. This “dreaming” aspect sorts through the transcript of what an agent recently completed and attempts to glean insights to improve the agent’s performance.

Folks using AI agents often send them on multistep journeys, like visiting a few websites or reading multiple files, to complete online tasks. This new “dreaming” feature allows agents to look for patterns in their activity log and improve their abilities based on those insights.
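To make the idea concrete, here is a minimal, purely illustrative sketch of what a "dreaming" pass over an agent's activity log might look like. None of these names come from Anthropic's actual tooling; this just shows the pattern the article describes, scanning a transcript for recurring failures and distilling them into reusable notes.

```python
from collections import Counter

def dream(transcript: list[dict]) -> list[str]:
    """Toy 'dreaming' pass: scan an agent's activity log for
    repeated failures and distill them into reusable insights.
    All names here are hypothetical, not Anthropic's real API."""
    failures = Counter(
        step["action"] for step in transcript if step.get("status") == "error"
    )
    # Turn any action that failed more than once into a learned note
    # the agent could consult in its next session.
    return [
        f"Action '{action}' failed {count} times; try an alternative approach."
        for action, count in failures.items()
        if count > 1
    ]

log = [
    {"action": "fetch_page", "status": "ok"},
    {"action": "parse_table", "status": "error"},
    {"action": "parse_table", "status": "error"},
    {"action": "save_file", "status": "ok"},
]
print(dream(log))
```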

The feature’s name immediately calls to mind Philip K. Dick’s seminal sci-fi novel, Do Androids Dream of Electric Sheep?, which explores the qualities that truly separate humans from powerful machines. While our current generative AI tools come nowhere close to the machines in the book, I’m ready to draw the line right here, right now: No more generative AI features with names that rip off human cognitive processes.

“Together, memory and dreaming form a robust memory system for self-improving agents,” reads Anthropic’s blog post about the launch of this research preview for developers. “Memory lets each agent capture what it learns as it works. Dreaming refines that memory between sessions, pulling shared learnings across agents and keeping it up-to-date.”

[Image: Courtesy of Claude]

Since the spark of the chatbot revolution in 2022, leaders at AI companies have gone full tilt into naming aspects of generative AI tools after what goes on in the human brain. OpenAI released its first “reasoning” model in 2024, one that needed “thinking” time before answering. The company described the release at the time as “a new series of AI models designed to spend more time thinking before they respond.” Numerous startups also refer to their chatbots as having “memories” about the user. Rather than the fast storage typically called a computer’s “memory,” these are much more humanlike nuggets of information: He lives in San Francisco, enjoys afternoon baseball games, and hates eating cantaloupe.

It’s a consistent marketing approach used by AI leaders, who have continued to lean into branding that blurs the line between what humans do and what machines can do. Even the way these companies develop chatbots, like Claude, with distinct “personalities,” can make users feel as if they are talking with something that has the potential for a deep inner life, something that would have dreams even when my laptop is closed.

At Anthropic, this anthropomorphizing runs deeper than just marketing strategies. “We also discuss Claude in terms normally reserved for humans (e.g., ‘virtue,’ ‘wisdom’),” reads a portion of Anthropic’s constitution describing how it wants Claude to behave. “We do this because we expect Claude’s reasoning to draw on human concepts by default, given the role of human text in Claude’s training; and we think encouraging Claude to embrace certain humanlike qualities may be actively desirable.” The company even employs a resident philosopher to try to make sense of the bot’s “values.”

© 2026 Tech News Vision. All Rights Reserved.