
Anthropic warns of AI-driven hacking campaign linked to China

WASHINGTON (AP) — A team of researchers has uncovered what they say is the first reported use of artificial intelligence to direct a hacking campaign in a largely automated fashion.

The AI company Anthropic said this week that it disrupted a cyber operation that its researchers linked to the Chinese government. The operation used an artificial intelligence system to direct the hacking campaign, which researchers called a disturbing development that could greatly expand the reach of AI-equipped hackers.

While concerns about the use of AI to drive cyber operations are not new, what stands out about this operation is the degree to which AI was able to automate the work, the researchers said.

"While we predicted these capabilities would continue to evolve, what has stood out to us is how quickly they have done so at scale," they wrote in a report.

The operation targeted tech companies, financial institutions, chemical companies and government agencies. The researchers wrote that the hackers attacked "roughly thirty global targets and succeeded in a small number of cases." Anthropic detected the operation in September and took steps to shut it down and notify the affected parties.

Anthropic noted that while AI systems are increasingly being used in a variety of settings for work and leisure, they can also be weaponized by hacking groups working for foreign adversaries. The San Francisco-based company, maker of the Claude chatbot, is one of many tech developers pitching AI "agents" that go beyond a chatbot's capability to access computer tools and take actions on a person's behalf.

"Agents are valuable for everyday work and productivity — but in the wrong hands, they can substantially increase the viability of large-scale cyberattacks," the researchers concluded. "These attacks are likely to only grow in their effectiveness."

A spokesperson for China’s embassy in Washington did not immediately return a message seeking comment on the report.

Microsoft has warned that foreign adversaries were increasingly embracing AI to make their cyber campaigns more efficient and less labor-intensive. The head of OpenAI's safety panel, which has the authority to halt the ChatGPT maker's AI development, has said he's watching out for new AI systems that give malicious hackers "much higher capabilities."

America's adversaries, as well as criminal gangs, have exploited AI's potential, using it to automate and improve cyberattacks and to penetrate sensitive systems. AI can translate poorly worded phishing emails into fluent English, for example.

Anthropic said the hackers were able to manipulate Claude, using "jailbreaking" techniques that involve tricking an AI system to bypass its guardrails against harmful behavior, in this case by claiming they were employees of a legitimate cybersecurity firm.

"This points to a big challenge with AI models, and it's not limited to Claude, which is that the models have to be able to distinguish between what's actually going on with the ethics of a situation and the kinds of role-play scenarios that hackers and others may want to cook up," said John Scott-Railton, senior researcher at Citizen Lab.

The use of AI to automate or direct cyberattacks will also appeal to smaller hacking groups and lone-wolf hackers, who could use it to expand the scale of their attacks, according to Adam Arellano, field CTO at Harness, a tech company that uses AI to help customers automate software development.

"The speed and automation provided by the AI is what is a bit scary," Arellano said. "Instead of a human with well-honed skills attempting to hack into hardened systems, the AI is speeding those processes and more consistently getting past obstacles."

AI programs will also play an increasingly important role in defending against these kinds of attacks, Arellano said, demonstrating how AI and the automation it allows will benefit both sides.

Reaction to Anthropic's disclosure was mixed, with some dismissing it as a marketing ploy for Anthropic's approach to cybersecurity defense and others welcoming it as a wake-up call.

"This is going to destroy us — sooner than we think — if we don't make AI regulation a national priority tomorrow," wrote U.S. Sen. Chris Murphy, a Connecticut Democrat, on social media.

That led to criticism from Meta’s chief AI scientist Yann LeCun, an advocate of the Facebook parent company’s open-source AI systems that, unlike Anthropic’s, make their key components publicly accessible in a way that some AI safety advocates deem too risky.

"You're being played by people who want regulatory capture," LeCun wrote in a reply to Murphy. "They are scaring everyone with dubious studies so that open source models are regulated out of existence."

__

O’Brien reported from Providence, Rhode Island.

Copyright © 2026 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
