New Warning — Microsoft Copilot AI Can Access Restricted Passwords
Red team hackers have used Microsoft's Copilot AI to extract restricted passwords from SharePoint, exposing a concerning gap in the company's AI-powered security protections.
Pen Test Partners, a security consulting and penetration testing firm, investigated how Microsoft's Copilot AI for SharePoint can be abused. The results were alarming: the red team hackers successfully accessed restricted passwords through the AI agent.
"The agent then successfully printed the contents," said Jack Barradell-Johns, a red team security consultant with Pen Test Partners. "including the passwords allowing us to access the encrypted spreadsheet." This exploit has significant implications for organizations that rely on SharePoint for sensitive data storage and management.
The Power of AI in Security
AI can be a powerful tool in enhancing security protections, but it also poses risks when used maliciously. The recent example of Copilot AI being exploited to access restricted passwords is a stark reminder of the importance of proper configuration and user permissions.
The Exploitation Process
During the engagement, the red team encountered a file named passwords.txt sitting alongside an encrypted spreadsheet containing sensitive information. When they tried to open the file directly, SharePoint's restricted view protections blocked the download. The red teamers then simply asked the Copilot AI agent to retrieve the file for them.
"The agent then successfully printed the contents," Barradell-Johns reported. "including the passwords allowing us to access the encrypted spreadsheet." This exploit highlights a configuration hole in SharePoint's security features, which can be exploited by sophisticated hackers using AI-powered attacks.
A Message from Microsoft
Microsoft has issued a statement on the matter, saying that SharePoint information protection principles ensure content is secured at the storage level through user-specific permissions. However, Pen Test Partners' Ken Munro countered that this is not what the hackers exploited. He noted that organizations often fail to log the activities of AI-powered agents, and that more granular user permissions would mitigate the vulnerability.
"Microsoft are technically correct about user permissions," Munro said. "but that's not what we are exploiting here. They are also correct about logging, but again it comes down to configuration. In many cases, organisations don’t typically logging the activities that we’re taking advantage of here." The Pen Test Partners founder emphasized that organizations need to be aware of the implications of adding licenses to their users and configuring AI-powered security features properly.
A Call to Action
The recent exploit highlights the importance of staying vigilant and proactive in securing sensitive data. Organizations must ensure that they have robust security measures in place, including regular audits and monitoring of user permissions and AI-powered agent activities. The risks associated with AI-powered attacks are very real, and it's crucial to take steps to mitigate them.
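One concrete audit step suggested by this incident is scanning file inventories for plaintext credential files, like the passwords.txt the red team found, stored alongside sensitive documents. A minimal sketch, where the filename patterns are assumptions for illustration rather than an exhaustive policy:

```python
import re

# Filename patterns that often indicate plaintext credentials;
# this list is an assumption for illustration, not a complete policy.
RISKY_PATTERNS = [
    r"passwords?\.(txt|docx?|xlsx?)$",
    r"credentials?\.",
    r"secrets?\.",
]

def flag_risky_files(paths):
    """Return paths whose filenames match a risky credential-file pattern."""
    flagged = []
    for path in paths:
        name = path.rsplit("/", 1)[-1].lower()
        if any(re.search(pattern, name) for pattern in RISKY_PATTERNS):
            flagged.append(path)
    return flagged

inventory = [
    "/sites/finance/docs/passwords.txt",
    "/sites/finance/docs/q3-report.xlsx",
    "/sites/hr/secrets.xlsx",
]
print(flag_risky_files(inventory))
```

Flagged files can then be moved into a password manager or encrypted store, removing the easy target the AI agent was asked to fetch.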