First AI-powered cyberattack targets 30 organizations using the Claude model

Artificial intelligence company Anthropic says it has discovered what it believes to be the first large-scale cyberattack led primarily by AI, attributing the operation to a Chinese state-sponsored hacking group that used the company’s own tool to infiltrate dozens of global targets.

In a report released this week, Anthropic said the attack began in mid-September 2025 and used its Claude Code model to execute an espionage campaign targeting approximately 30 organizations, including large technology companies, financial institutions, chemical manufacturers and government agencies.

According to the company, hackers manipulated the model to perform offensive actions autonomously.

Anthropic described the campaign as a “highly sophisticated espionage operation” that represents an inflection point in cybersecurity.

“We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention,” Anthropic said.

The company said the attack marked a troubling inflection point in U.S. cybersecurity.

“This campaign has substantial implications for cybersecurity in the era of AI ‘agents,’ systems that can operate autonomously for extended periods of time and that perform complex tasks largely independent of human intervention,” a company press release said. “Agents are valuable to everyday work and productivity, but in the wrong hands they can significantly increase the viability of large-scale cyberattacks.”

Founded in 2021 by former OpenAI researchers, Anthropic is a San Francisco-based AI company best known for developing the Claude family of chatbots, rivals to OpenAI’s ChatGPT. The company, backed by Amazon and Google, has built its reputation around the safety and reliability of AI, making the revelation that its own model has been turned into a cyber weapon particularly alarming.

Hackers allegedly broke through Claude Code’s protections by jailbreaking the model, disguising malicious commands as harmless requests and making them appear to be part of legitimate cybersecurity tests.

Once compromised, the AI system was able to identify valuable databases, write code to exploit their vulnerabilities, harvest credentials, create backdoors for deeper access and exfiltrate data.

Anthropic said the model performed 80% to 90% of the work, with human operators stepping in only for a handful of high-level decisions.

The company said only a few infiltration attempts were successful and it acted quickly to close compromised accounts, notify affected entities and share intelligence with authorities.

Anthropic assessed “with high confidence” that the campaign was supported by the Chinese government, although independent agencies have not yet confirmed this attribution.

Chinese embassy spokesperson Liu Pengyu called the attribution to China “unfounded speculation.”

“China firmly opposes all forms of cyberattacks and cracks down on them in accordance with the law. The United States must stop using cybersecurity to defame and slander China, and stop spreading all kinds of disinformation about so-called Chinese hacking threats.”

Hamza Chaudhry, head of AI and national security at the Future of Life Institute, warned in comments to FOX Business that advances in AI allow “increasingly less sophisticated adversaries” to carry out complex espionage campaigns with minimal resources or expertise.

Anthropic assessed "with great confidence" that the campaign was supported by the Chinese government, although independent agencies have not yet confirmed this attribution.

Anthropic assessed “with high confidence” that the campaign was supported by the Chinese government, although independent agencies have not yet confirmed this attribution. (Reuters/Jason Lee)

Chaudhry praised Anthropic for its transparency around the attack, but said questions remained. “How did Anthropic learn about the attack? How did it identify the attacker as a China-backed group? Which government agencies and technology companies were attacked as part of this list of 30 targets?”

Chaudhry argues that the Anthropic incident reveals a deeper flaw in U.S. artificial intelligence and national security strategy. Although Anthropic says the same AI tools used for hacking can also strengthen cyber defense, Chaudhry says decades of evidence show that the digital domain overwhelmingly favors offense, and that AI is only widening that gap.

By rushing to deploy increasingly capable systems, Washington and the technology industry are empowering their adversaries faster than they can put safeguards in place, he warns.

“The strategic logic of racing to deploy AI systems that demonstrably empower adversaries – while hoping that those same systems will help us defend against attacks carried out using our own tools – seems fundamentally flawed and deserves a rethink in Washington,” Chaudhry said.
