Anthropic has revealed how cybercriminals exploited its Claude artificial intelligence system to conduct sophisticated cyberattacks, including a large-scale extortion operation that targeted at least 17 organisations, as well as fraudulent employment schemes linked to North Korea.

The AI company published a threat intelligence report on Wednesday detailing three major case studies where malicious actors weaponised its technology to automate complex criminal operations that would previously have required extensive technical expertise.

In the most significant case, a cybercriminal used Claude Code, Anthropic’s command-line tool, to automate reconnaissance, harvest credentials and penetrate networks across healthcare, emergency services, government and religious institutions. The attacker demanded ransoms sometimes exceeding $500,000 (£394,000), threatening to publicly expose stolen data rather than encrypting it as traditional ransomware does.

The report shows how Claude was permitted to make tactical and strategic decisions, including determining which data to steal and crafting psychologically targeted extortion demands.

The AI analysed financial information to calculate appropriate ransom amounts and generated alarming ransom notes displayed on victims’ machines.

“This represents an evolution in AI-assisted cybercrime,” the report states. “Agentic AI tools are now being used to provide both technical advice and active operational support for attacks that would otherwise have required a team of operators.”

A separate operation involved North Korean operatives using Claude to fraudulently secure remote employment positions at US Fortune 500 technology companies.

The AI helped create false identities with convincing professional backgrounds, complete technical assessments during hiring processes, and deliver actual work once employed – all designed to generate profit for the North Korean regime in defiance of international sanctions.

The third case study revealed how a cybercriminal developed and sold multiple ransomware variants using Claude, marketing them on internet forums for between $400 and $1,200.

The perpetrator appeared dependent on AI assistance, unable to implement core malware components like encryption algorithms without Claude’s help.

“AI has lowered the barriers to sophisticated cybercrime,” Anthropic concluded. “Criminals with few technical skills are using AI to conduct complex operations, such as developing ransomware, that would previously have required years of training.”

The company responded by banning the relevant accounts, sharing technical indicators with authorities, and developing improved detection tools for identifying similar abuse patterns.

The findings highlight growing concerns about AI misuse as models become more sophisticated, with cybercriminals embedding artificial intelligence throughout all stages of their operations, from victim profiling to data analysis.

