Security researchers have discovered a flaw in Amazon Web Services (AWS) that could allow unauthorized access to an AWS account simply by publishing an Amazon Machine Image (AMI) with a cleverly chosen name.
The vulnerability, dubbed “whoAMI,” was first identified by researchers at Datadog in August 2024. They demonstrated how attackers could exploit the way software retrieves AMI IDs, potentially gaining code execution within AWS environments. Although Amazon shipped a fix in September 2024, the risk remains for organizations that have not updated their configurations.
How the “whoAMI” Attack Works
AMIs are preconfigured virtual machine images used to launch AWS EC2 instances. Users typically search for AMIs by name or ID, but the attack becomes possible when software retrieves an AMI without specifying its owner. This lets attackers slip malicious AMIs into the search results, tricking users into launching compromised instances.
Several factors increase the risk of a “whoAMI” attack:
- Using the ec2:DescribeImages API without specifying an AMI owner.
- Scripts relying on wildcard searches instead of exact AMI IDs.
- Infrastructure-as-code tools like Terraform using the most_recent=true filter, which automatically selects the latest AMI matching a given name (a sketch of this lookup pattern follows the list).
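As a rough illustration, the following Python (boto3) sketch shows the kind of lookup these factors describe. The region and AMI name pattern are assumptions for the example; the key point is that the call omits the Owners parameter, so any public AMI whose name matches the wildcard, including one published by an attacker, can come back as the “most recent” image.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Vulnerable pattern: search public AMIs by a wildcard name with no Owners filter.
# An attacker can publish a lookalike AMI whose name matches and whose creation
# date is newer, so it floats to the top of this selection.
response = ec2.describe_images(
    Filters=[{"Name": "name", "Values": ["ubuntu/images/hvm-ssd/ubuntu-*"]}]
)

# Mimics Terraform's most_recent=true: take the newest matching image.
latest = max(response["Images"], key=lambda image: image["CreationDate"])
print(latest["ImageId"], latest["OwnerId"], latest["Name"])
```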
By exploiting these conditions, attackers can publish malicious AMIs with names that closely resemble those of trusted providers, increasing the likelihood of their selection. Notably, this attack does not require direct access to the victim’s AWS account—just an AWS account to publish an AMI publicly.
Amazon’s Response and Recommended Defenses
Upon being notified of the vulnerability, Amazon confirmed that even its internal non-production systems had been affected. The company rolled out a fix in September 2024 and later introduced a security feature called “Allowed AMIs” in December. This feature allows AWS users to create an allowlist of trusted AMI providers, preventing the selection of potentially harmful images.
Amazon maintains that there is no evidence of real-world exploitation beyond security research tests. However, AWS customers are strongly advised to:
- Always specify AMI owners when using ec2:DescribeImages.
- Enable “Allowed AMIs” via AWS Console → EC2 → Account Attributes → Allowed AMIs.
- Update Terraform configurations to avoid using most_recent=true without an owner filter.
- Audit AWS CLI, Python Boto3, and Go AWS SDK configurations to ensure safe AMI selection (a sketch of a safer lookup follows the list).
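For contrast, here is a minimal boto3 sketch of a safer lookup that pins the AMI owner. The “amazon” owner alias and the name pattern are placeholders for the example; in practice you would specify the exact account ID of the publisher you trust.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Safer pattern: restrict results to a trusted owner, so lookalike AMIs
# published from other accounts never appear in the results.
response = ec2.describe_images(
    Owners=["amazon"],  # placeholder: use your trusted publisher's account ID
    Filters=[{"Name": "name", "Values": ["al2023-ami-2023*"]}],
)

latest = max(response["Images"], key=lambda image: image["CreationDate"])
print(latest["ImageId"], latest["Name"])
```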
For additional protection, Datadog has released a scanner tool on GitHub that helps AWS users detect instances created from untrusted AMIs.
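The snippet below is not Datadog’s tool, only a hedged sketch of the kind of check such a scanner performs: list the AMIs behind running instances and flag any whose owner is not on an allowlist. The allowlisted account ID is a placeholder, and pagination is omitted for brevity.

```python
import boto3

# Placeholder allowlist of AWS account IDs trusted to publish AMIs.
TRUSTED_OWNERS = {"123456789012"}

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Collect the AMI IDs behind every instance in the region (pagination omitted).
reservations = ec2.describe_instances()["Reservations"]
image_ids = {
    instance["ImageId"]
    for reservation in reservations
    for instance in reservation["Instances"]
}

if image_ids:
    # Use an image-id filter rather than ImageIds so deregistered AMIs
    # drop out of the results instead of raising an error.
    images = ec2.describe_images(
        Filters=[{"Name": "image-id", "Values": list(image_ids)}]
    )
    owners = {img["ImageId"]: img["OwnerId"] for img in images["Images"]}
    for image_id in sorted(image_ids):
        owner = owners.get(image_id)
        if owner is None:
            print(f"{image_id}: AMI no longer visible, investigate manually")
        elif owner not in TRUSTED_OWNERS:
            print(f"{image_id}: published by untrusted account {owner}")
```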
With potentially thousands of AWS accounts still at risk, organizations must take immediate steps to secure their environments against the “whoAMI” attack.
Bijay Pokharel