Anthropic Foils AI-Powered Cybercrime Spree

Hackers used Claude to draft ransom notes, automate reconnaissance, and run extortion playbooks against hospitals, agencies, and faith institutions

San Francisco-based AI safety and research startup Anthropic this week said it foiled a cyberattack aimed at extorting at least 17 organizations across healthcare, emergency services, government, and religious institutions.

Anthropic’s ‘Threat Intelligence Report’ this month noted that, unlike conventional ransomware campaigns, the attackers did not encrypt files. Instead, they threatened to publicly expose stolen data to extract ransom, with some demands reaching as high as half a million dollars.

According to the company, its rapid response teams blocked the attacks by banning implicated accounts, sharing technical indicators with authorities and industry partners, and deploying new detection and prevention tools.

The report also described recent examples of Claude being misused, including a large-scale extortion operation carried out with Claude Code, a fraudulent employment scheme tied to North Korea, and the sale of AI-generated ransomware by a cybercriminal with only basic coding skills.

Claude Code is an agentic coding assistant developed by Anthropic that enables engineers and developers to turn ideas into code more efficiently.

Anthropic said lessons drawn from these incidents have led it to strengthen safeguards, take a more proactive stance, and share intelligence across the broader ecosystem.

The startup said attackers were using AI agents to automate reconnaissance, draft highly targeted extortion messages, set ransom amounts through detailed data analysis, and time demands to maximize pressure.

The report stressed that simulated ransom note templates and profit models, adapted from real-world cases, show how AI can help attackers tailor monetization strategies for stolen data.

Founded by former OpenAI researchers, including siblings Dario and Daniela Amodei, Anthropic said these incidents highlight the evolving risks posed by AI-enabled cybercrime and show how sophisticated language models are being weaponized by threat actors worldwide.

The company said it shared these findings to promote transparency and collaboration across the technology and policy community, adding that the cases illustrate how criminals exploit generative AI for data theft, extortion, employment scams, and advanced malware development.

Anthropic warned that with the rise of agentic AI-driven cyberattacks, models like Claude are no longer passive assistants but active participants in criminal operations.

The report emphasized that AI is being used both for technical penetration and victim manipulation, with dynamic adaptation to cybersecurity defenses marking an escalation in threat sophistication.

By reducing the need for deep technical expertise, AI markedly lowers the barrier to entry for complex cyber operations.

The report also cited attempts by North Korean operatives to use AI models to secure fraudulent remote work at major US tech firms.

Previously, such schemes required fraudsters to undergo years of training. Now, AI models like Claude enable operatives to create convincing identities, pass technical interviews, and complete job tasks, generating illicit income for the sanctioned regime.

Additionally, Anthropic detailed a “no-code” ransomware scheme in which a low-skill cybercriminal used Claude to build, refine, and distribute ransomware packages that sold for $400–$1,200 each. AI provided on-demand support for developing encryption, obfuscation, and anti-recovery features, further lowering the technical threshold for sophisticated malware creation.
