On Thursday, Anthropic released a statement addressing problems with its flagship AI, Claude, after users complained that the latest iteration of the software was producing lower-quality responses.

The AI company acknowledged the problems and said they could be traced to three changes it had made to Claude Code, Claude Agent SDK and Claude Cowork, its premium AI offerings. By way of apology, Anthropic reset usage limits for all subscribers on 23 April.

The first, on 4 March, was changing Claude Code’s default reasoning effort from high to medium. This was in response to reports of high latency, “enough to make the UI wrapper look frozen,” in its high reasoning mode. The company said that this move was, in hindsight, the wrong tradeoff, and one that it reverted on 7 April after customer feedback.

The second change, made on 26 March, was intended to reduce latency by clearing Claude’s older thinking from sessions that had been idle for over an hour. However, a bug caused this pruning to then recur on every subsequent turn for the rest of the session, which Anthropic said made Claude appear “forgetful and repetitive”. The issue was fixed on 10 April.
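Anthropic has not published the code behind this bug, but the failure mode it describes, a one-off cleanup that latches on and repeats every turn, can be sketched in Python. Everything below (the `Session` class, the `prune_pending` flag, the message format) is a hypothetical illustration, not Anthropic's implementation:

```python
IDLE_THRESHOLD = 3600  # one hour of idleness, in seconds

class Session:
    """Hypothetical chat session that should prune old 'thinking'
    entries once, after the session has been idle for over an hour."""

    def __init__(self, now):
        self.turns = []          # full conversation history
        self.last_active = now   # timestamp of the last turn
        self.prune_pending = False

    def on_turn(self, message, now):
        if now - self.last_active > IDLE_THRESHOLD:
            # BUG: the flag is set here but never cleared below,
            # so pruning repeats on every later turn of the session.
            self.prune_pending = True
        if self.prune_pending:
            self.turns = [t for t in self.turns if t["role"] != "thinking"]
            # The fix would be to reset the latch here:
            # self.prune_pending = False
        self.turns.append({"role": "user", "content": message})
        self.last_active = now
```

With the latch never reset, the first turn after a long idle period correctly strips old thinking, but so does every turn after it, even thinking produced seconds earlier, which matches the “forgetful and repetitive” behaviour described in the statement.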

The final change took place on 16 April. The company added a system prompt that was intended to “reduce verbosity” from Claude, but the result was a drop in coding quality, and the prompt was reverted on 20 April.

Anthropic’s statement made clear that although these issues hit different user segments at different times, giving the appearance of “broad, inconsistent degradation” in Claude, this was not the case.

Model collapse, the degradation of AI models over time, is a phenomenon that has been observed in studies where newer models are trained on uncurated synthetic data produced by other artificial intelligences. IBM described it in a blog post as a challenge that “poses serious ramifications for AI development”, and one that could seriously harm a company’s user engagement.

