The chief executive of Microsoft AI has warned against the development of artificial intelligence that mimics consciousness in a highly convincing way.

In a blog post published on Tuesday, Mustafa Suleyman raised concerns about Seemingly Conscious AI (SCAI), the illusion that an AI is a conscious entity, including the associated phenomenon of ‘psychosis risk’.

Microsoft describes AI-associated psychosis as mania-like episodes, delusional thinking, or paranoia that emerge or worsen through immersive conversations with AI chatbots.

Suleyman wrote that one of his main concerns is that many people will soon believe so strongly that AIs are conscious that they’ll advocate for AI rights, model welfare, and even AI citizenship.

“In a world already roiling with polarised arguments over identity and rights, this will add a chaotic new axis of division between those for and against AI rights,” explained the Microsoft AI boss.

He warned that this development could represent a “dangerous turn in AI progress”, calling for society to give this phenomenon immediate attention.

“We must build AI for people; not to be a digital person,” continued Suleyman.

He said that a system imitating consciousness in a highly convincing way could be built with technologies that exist today, along with some that will mature over the next two to three years.

According to the AI expert, for an SCAI to work, it would need to fluently express itself in natural language; have an empathetic personality and highly accurate memories; demonstrate a claim of subjective experience; display the appearance of intrinsic motivation and a sense of self; and have the ability to set goals and plan.

“It’s highly likely that some people will argue that these AIs are not only conscious, but that as a result they may suffer and therefore deserve our moral consideration,” added the Microsoft executive. “To be clear, there is zero evidence of this today and some argue there are strong reasons to believe it will not be the case in the future.”

Suleyman said the development of SCAIs must be approached with extreme caution, with clear norms and standards set for AI technology.

“This is about how we build the right kind of AI – not AI consciousness,” he wrote. “Clearly establishing this difference isn’t an argument about semantics, it’s about safety.

“Personality without personhood. And this work must start now.”

He also highlighted some of the dangers associated with “model welfare”, a concept that academics are already beginning to explore.

Model welfare is the principle that society will have “a duty to extend moral consideration to beings that have a non-negligible chance” of being conscious.

“This is both premature, and frankly dangerous,” he said. “All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, introduce new dimensions of polarisation, complicate existing struggles for rights, and create a huge new category error for society.”
