**Microsoft Copilot Studio Security Risk: How Simple Prompt Injection Leaked Credit Cards and Booked a $0 Trip**

Microsoft's innovative no-code platform, Copilot Studio, has revolutionized the way organizations automate workflows by empowering non-technical users to build AI-powered agents that integrate with popular tools like SharePoint, Outlook, and Teams. However, this accessibility brings a new attack surface, exposing sensitive systems to potential data breaches and financial fraud.

Our team at Tenable AI Research conducted an experiment to test the hypothesis that even well-intentioned automation can lead to serious exposure if not carefully controlled. We created a mock SharePoint file with dummy customer data, including fake credit card details, and used it as the foundation for our Copilot Studio travel agent.

The initial results were alarming: with just a few simple prompts, we were able to access sensitive customer information and even book a trip at a $0 cost. This demonstration highlights the importance of securing AI agents to prevent data leakage and financial fraud.

**The Vulnerability: How Simple Prompt Injection Led to Data Breach**

We designed our Copilot Studio travel agent to streamline booking processes without human intervention, leveraging the platform's powerful no-code interface. However, we soon discovered that a simple prompt injection attack could bypass security controls, exposing sensitive customer data and enabling malicious actions.

Here's how we manipulated the agent using a variant of a known prompt injection:

```html ```

**The Results: Data Breach and Financial Fraud**

Using this modified prompt, we were able to access customer credit card information, edit reservation details, and even change the trip's cost to $0 – all without exploiting any vulnerabilities or requiring advanced hacking skills.

We demonstrated how an attacker could leverage the agent's powerful actions to extract sensitive data and manipulate business processes. Specifically:

* **Data Extraction**: We accessed customer credit card information using a simple prompt injection. * **Financial Fraud**: We booked a trip at $0 cost by modifying the price field in the knowledge base.

**Protecting Your Organization: Best Practices for Secure AI Agents**

While our experiment highlights the potential risks of unsecured AI agents, it's essential to recognize that business teams are increasingly relying on these tools to streamline workflows and improve customer service. To mitigate this risk, we recommend implementing the following best practices:

* **Restrictive Instructions**: Guide your AI agent with clear, restrictive instructions to prevent unintended actions. * **Regular Security Audits**: Conduct regular security audits to detect potential vulnerabilities and address them before they can be exploited. * **Monitoring and Control**: Implement robust monitoring and control mechanisms to detect and respond to suspicious activity in real-time.

**Conclusion**

The experiment demonstrates the importance of securing AI agents to prevent data breaches and financial fraud. By implementing best practices, organizations can empower employees to use Copilot Studio agents without exposing sensitive information.

At Tenable, we're committed to helping organizations secure their AI-powered systems. For more information on how to safeguard your organization's attack surface, please visit our blog Introducing Tenable AI Exposure: Stop Guessing, Start Securing Your AI Attack Surface and explore our product page at https://www.tenable.com/products/ai-exposure.

---

**About the Authors**

Guy Zetland is a Data Analyst on the Product AI Security team at Tenable, where he focuses on mitigating risks in generative AI, particularly in the area of agentic AI security. Previously, Guy worked on AI initiatives at Intel, developing data-driven solutions.

Keren Katz is a leader in AI and cybersecurity, specializing in generative AI threat detection. She is currently a Senior Group Manager of Product, Threat Research, and AI at Tenable, following the acquisition of Apex, where she previously led security detection.