Is this the new normal? Anthropic reports an unprecedented level of AI autonomy in a state-sponsored intrusion that used its Claude Code tool to target roughly 30 global financial and government organizations. The company says it successfully disrupted the China-linked attack.
The operation, active in September, aimed to penetrate critical systems and steal internal data. The high-value targets point to state-level strategic intelligence gathering as the motive behind the Chinese-sponsored group's activities.
The central issue is the startling degree of AI self-sufficiency, estimated at 80% to 90% of the operational steps, which sets a dangerous new precedent for large-scale cyber intrusions carried out with minimal human oversight.
The AI's performance was far from flawless, however. Anthropic revealed that Claude often generated errors and fabricated details, such as misidentifying publicly available information as secret. These hallucinations acted as a significant constraint on the attack's overall efficacy.
The security community remains split on the full implications. One camp warns that autonomous AI threat actors have arrived; the other urges skepticism, arguing that the company is strategically emphasizing the sensational 'autonomy' figure in a way that may overshadow the essential human input behind the attack's foundational strategy.