Microsoft AI Researchers Accidentally Leaked 38TB of Internal Data
Microsoft AI researchers accidentally leaked 38 terabytes of data, including backups of employees' personal computers, passwords to Microsoft services, and secret keys. The data remained exposed for three years, a major security blunder that could also have let a malicious attacker inject harmful code into the exposed AI models.
The exposure was discovered by cloud security firm Wiz, whose researchers found that a Microsoft employee had inadvertently shared the URL of a misconfigured Azure Blob storage bucket containing the leaked information.
Microsoft attributed the data exposure to an excessively permissive Shared Access Signature (SAS) token, which granted full control over the shared files. SAS tokens are an Azure feature for sharing data, but one that Wiz researchers describe as difficult to monitor and revoke.
When used correctly, Shared Access Signature (SAS) tokens offer a secure means of granting delegated access to resources within your storage account.
“Due to a lack of monitoring and governance, SAS tokens pose a security risk, and their usage should be as limited as possible. These tokens are very hard to track, as Microsoft does not provide a centralized way to manage them within the Azure portal,” Wiz warned today.
“In addition, these tokens can be configured to last effectively forever, with no upper limit on their expiry time. Therefore, using Account SAS tokens for external sharing is unsafe and should be avoided.”
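The risks Wiz describes are visible in a SAS URL itself: the token's permissions and expiry are encoded as documented query parameters (`sp` for permissions, `se` for expiry). The minimal sketch below parses a hypothetical SAS URL with Python's standard library to flag the combination at issue here, broad permissions paired with a far-future expiry; the URL and its values are illustrative, not taken from the incident.

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical SAS URL. The query parameters follow Azure's documented
# SAS format: sp = permissions, se = expiry time (UTC), sv = service version.
sas_url = (
    "https://example.blob.core.windows.net/container/file.txt"
    "?sv=2021-08-06&sp=racwdl&se=9999-12-31T23:59:59Z&sig=REDACTED"
)

params = parse_qs(urlparse(sas_url).query)
permissions = params["sp"][0]  # 'racwdl' = read/add/create/write/delete/list
expiry = params["se"][0]

# Broad write/delete permissions plus an effectively unlimited expiry are
# the red flags Wiz warns about: the token grants near-full control and
# never meaningfully expires.
if "w" in permissions and "d" in permissions:
    print(f"WARNING: token grants write+delete access, expires {expiry}")
```

A scanner along these lines can only inspect tokens it already knows about, which is precisely Wiz's point: Azure offers no centralized inventory of issued Account SAS tokens, so an over-permissive one can circulate unnoticed.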
In an advisory published Monday, Microsoft's Security Response Center (MSRC) team said that no customer data was exposed and that no other internal services were put at risk by the incident.