AI Is Turning Cloud Break-Ins Into a Race Against the Clock
A new study shows how attackers are using AI to automate break-ins on Amazon Web Services, rapidly escalating access and running up costs before defenders can react.
Attackers are using AI tools to shrink cloud break-ins from hours to minutes, sharply reducing the time companies have to detect and stop intrusions, according to new research from Sysdig’s threat research team.
In a real-world incident investigated by Sysdig last November, attackers moved from exposed credentials to full administrative control of an Amazon Web Services (AWS) environment in less than 10 minutes.
Researchers said AI tools were used to automate reconnaissance, generate malicious code and guide decisions throughout the intrusion.
“AI has fundamentally changed the economics and speed of cloud attacks,” Sysdig researchers said, noting that tasks that once required manual effort and time can now be executed almost instantly with AI assistance.
The attack began when adversaries found valid AWS credentials stored in publicly accessible Simple Storage Service (S3) buckets. The buckets held retrieval-augmented generation data for AI models and inadvertently exposed sensitive access keys.
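The report does not describe the exposed keys in detail, but long-lived IAM access key IDs follow a well-known "AKIA…" prefix. As a rough illustration of how defenders might sweep their own buckets for the same pattern, here is a minimal boto3 sketch; the bucket name and the key-matching regex are assumptions for illustration, not details from the incident.

```python
import re

import boto3

# Hypothetical bucket name, used only for illustration.
BUCKET = "example-rag-data-bucket"

# Long-lived IAM access key IDs start with "AKIA"; this is a loose heuristic,
# not an exhaustive secret detector.
ACCESS_KEY_RE = re.compile(r"AKIA[0-9A-Z]{16}")

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
        text = body.decode("utf-8", errors="ignore")
        for match in ACCESS_KEY_RE.findall(text):
            print(f"Possible exposed access key {match} in s3://{BUCKET}/{obj['Key']}")
```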
The credentials belonged to an Identity and Access Management user with read and write permissions on AWS Lambda and limited access to Amazon Bedrock.
The user was also assigned the ReadOnlyAccess policy, which by itself was sufficient for attackers to carry out extensive reconnaissance across services including Secrets Manager, Systems Manager, EC2, ECS, RDS, and CloudWatch.
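Sysdig's write-up summarizes this phase rather than listing the individual API calls, but read-only reconnaissance of this kind generally amounts to a burst of list/describe requests across services. A hedged sketch of what that burst looks like from the AWS SDK side, purely for illustration:

```python
import boto3

# Illustrative only: the kind of list/describe calls that ReadOnlyAccess permits.
# A burst of these from a single principal in a short window is a recon signal.
session = boto3.Session()

secrets = session.client("secretsmanager").list_secrets().get("SecretList", [])
params = session.client("ssm").describe_parameters().get("Parameters", [])
reservations = session.client("ec2").describe_instances().get("Reservations", [])
clusters = session.client("ecs").list_clusters().get("clusterArns", [])
databases = session.client("rds").describe_db_instances().get("DBInstances", [])
alarms = session.client("cloudwatch").describe_alarms().get("MetricAlarms", [])

print(f"{len(secrets)} secrets, {len(params)} SSM parameters, "
      f"{len(clusters)} ECS clusters, {len(databases)} RDS instances, "
      f"{len(alarms)} CloudWatch alarms enumerated")
```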
Within minutes, they injected malicious code into an existing Lambda function and escalated privileges by creating new access keys tied to an administrative account. Sysdig said the attackers then established persistence by creating a backdoor user with full administrator rights.
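Those steps map onto a handful of CloudTrail management events, notably CreateAccessKey, CreateUser and AttachUserPolicy, which defenders can hunt for directly. A minimal detection sketch, assuming an illustrative 30-minute lookback window:

```python
from datetime import datetime, timedelta, timezone

import boto3

# Hedged detection sketch: look back over recent CloudTrail events for the
# IAM calls that correspond to the escalation described above.
# The 30-minute window is an illustrative assumption, not from the report.
SUSPECT_EVENTS = ["CreateAccessKey", "CreateUser", "AttachUserPolicy"]

ct = boto3.client("cloudtrail")
start = datetime.now(timezone.utc) - timedelta(minutes=30)

for name in SUSPECT_EVENTS:
    events = ct.lookup_events(
        LookupAttributes=[{"AttributeName": "EventName", "AttributeValue": name}],
        StartTime=start,
    ).get("Events", [])
    for event in events:
        print(event["EventTime"], event["EventName"], event.get("Username", "unknown"))
```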
To complicate detection, the operation was distributed across 19 different AWS principals, including multiple IAM roles and compromised users. The attackers also rotated IP addresses on nearly every request, making activity harder to correlate.
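Because the source address changed constantly, pivoting on the access key ID, which CloudTrail records regardless of IP, is a more reliable way to stitch the activity back together. A brief sketch, with a placeholder key ID standing in for a real one:

```python
import json
from collections import Counter

import boto3

# When source IPs rotate on nearly every request, correlate on the access key
# ID instead. The key ID below is a placeholder, not one from the incident.
ct = boto3.client("cloudtrail")
resp = ct.lookup_events(
    LookupAttributes=[{"AttributeName": "AccessKeyId", "AttributeValue": "AKIAEXAMPLEKEY123456"}],
    MaxResults=50,
)

source_ips = Counter()
for event in resp.get("Events", []):
    detail = json.loads(event["CloudTrailEvent"])
    source_ips[detail.get("sourceIPAddress", "unknown")] += 1

print(f"{len(source_ips)} distinct source IPs observed for this access key")
```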
After confirming that model-invocation logging was disabled, the attackers shifted to abusing Bedrock in what Sysdig described as “LLMjacking.” They invoked multiple foundation models, including Anthropic’s Claude, DeepSeek, Llama and Amazon’s own Nova and Titan models, likely to generate code and plan further actions.
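Whether model-invocation logging is switched on can be verified with a single Bedrock control-plane call, the same property the attackers confirmed before proceeding. A short sketch of the defender-side check:

```python
import boto3

# Check whether Bedrock model-invocation logging is configured.
# An absent configuration is what left the "LLMjacking" phase unrecorded here.
bedrock = boto3.client("bedrock")
config = bedrock.get_model_invocation_logging_configuration().get("loggingConfig")

if not config:
    print("Model-invocation logging is DISABLED: Bedrock usage will not be recorded.")
else:
    print("Model-invocation logging is enabled:", config)
```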
The attackers also deployed a Terraform module to create a hidden Lambda function capable of generating Bedrock credentials and exposing them through a publicly accessible Lambda URL without authentication.
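Unauthenticated Lambda function URLs, the persistence mechanism described here, are straightforward to audit. A hedged sweep that flags any function URL whose AuthType is NONE:

```python
import boto3

# Flag any Lambda function exposed through an unauthenticated function URL.
lam = boto3.client("lambda")

for page in lam.get_paginator("list_functions").paginate():
    for fn in page["Functions"]:
        name = fn["FunctionName"]
        url_configs = lam.list_function_url_configs(FunctionName=name)
        for cfg in url_configs.get("FunctionUrlConfigs", []):
            if cfg.get("AuthType") == "NONE":
                print(f"Public, unauthenticated URL on {name}: {cfg['FunctionUrl']}")
```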
The final stage focused on resource abuse. The attackers queried more than 1,300 machine images optimized for deep learning and launched a high-end GPU instance costing more than $32 an hour. They configured the instance with CUDA, PyTorch, and a publicly accessible JupyterLab server, creating an alternative access path independent of AWS credentials.
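Abuse of this kind surfaces as unexpected GPU capacity. A rough sketch that lists running instances from expensive GPU families; the wildcard instance-type patterns are assumptions about what counts as "high-end," not figures from the report:

```python
import boto3

# Hedged sketch: list running instances whose type suggests costly GPU hardware.
# The wildcard patterns below are illustrative assumptions.
GPU_TYPE_PATTERNS = ["p3.*", "p4d.*", "p5.*", "g5.*"]

ec2 = boto3.client("ec2")
resp = ec2.describe_instances(
    Filters=[
        {"Name": "instance-type", "Values": GPU_TYPE_PATTERNS},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)

for reservation in resp.get("Reservations", []):
    for inst in reservation["Instances"]:
        print(inst["InstanceId"], inst["InstanceType"], inst["LaunchTime"])
```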
Sysdig said the attackers relied on several evasion techniques, most notably the IP rotation tool that changed the source address on nearly every request and frustrated attempts to correlate the activity.
The researchers also found strong indicators of AI-assisted development. The malicious Lambda code included detailed exception handling, precise timeout settings, and even comments in Serbian.
At the same time, the code showed signs of AI hallucinations, such as attempts to reference non-existent AWS accounts, fabricated GitHub repositories, and session names like “claude-session”.
“As LLMs become more capable, attacks like this will become faster and harder to stop ... Organizations need to assume that credential exposure can lead to compromise within minutes,” the researchers said.
The firm urged companies to tighten identity and access management, restrict Lambda modification rights, eliminate public S3 buckets containing sensitive data, enable logging for AI model usage, and rely more heavily on runtime detection rather than on preventive controls alone.
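Of those recommendations, enabling Bedrock model-invocation logging is a single API call. A minimal sketch, where the log group name and IAM role ARN are placeholders that would already need to exist in the account:

```python
import boto3

# Minimal sketch of turning on Bedrock model-invocation logging.
# The log group and role ARN are placeholders; both must already exist, and the
# role must allow Bedrock to write to CloudWatch Logs.
bedrock = boto3.client("bedrock")

bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "cloudWatchConfig": {
            "logGroupName": "/aws/bedrock/model-invocations",
            "roleArn": "arn:aws:iam::123456789012:role/BedrockLoggingRole",
        },
        "textDataDeliveryEnabled": True,
        "imageDataDeliveryEnabled": True,
        "embeddingDataDeliveryEnabled": True,
    }
)
print("Model-invocation logging enabled.")
```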
