Cybersecurity has become a top priority for businesses as they embrace artificial intelligence (AI) to power their operations. Yet securing AI systems remains a daily challenge for cybersecurity teams. In this special edition of the Cybersecurity Snapshot, we highlight some of the best practices and insights that experts offered in 2025 for AI security.
Organizations looking to protect the sensitive data powering their AI systems should check out new best practices released in May by cyber agencies from Australia, New Zealand, the U.K., and the U.S. The document, titled "AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems," provides a foundation for securing the data on which AI systems depend.
According to the document, the authoring agencies seek to accomplish three goals:

- Provide a comprehensive guide for organizations using AI systems in their operations
- Help protect sensitive, proprietary, or mission-critical data
- Ensure the reliability and accuracy of AI-driven outcomes
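Tracking the provenance and integrity of training data is a recurring theme in guidance like this. As a minimal sketch of the idea (the directory layout and manifest file name below are hypothetical, not from the document), a training pipeline can record a cryptographic hash of every data file and re-verify those hashes before each training run, so tampering is caught before it can poison a model:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(data_dir: Path, manifest_path: Path) -> None:
    """Record a hash for every data file so later tampering is detectable."""
    manifest = {str(p): sha256_of(p)
                for p in sorted(data_dir.rglob("*")) if p.is_file()}
    manifest_path.write_text(json.dumps(manifest, indent=2))

def verify_manifest(manifest_path: Path) -> list[str]:
    """Return the paths whose current hash no longer matches the manifest."""
    manifest = json.loads(manifest_path.read_text())
    return [p for p, h in manifest.items() if sha256_of(Path(p)) != h]

if __name__ == "__main__":
    # Hypothetical paths; adjust to your own data layout.
    build_manifest(Path("training_data"), Path("manifest.json"))
    tampered = verify_manifest(Path("manifest.json"))
    if tampered:
        raise SystemExit(f"Integrity check failed for: {tampered}")
```

A real pipeline would typically go further, for example by signing the manifest itself so an attacker who can modify data files can't simply regenerate the hashes.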
The U.S. National Institute of Standards and Technology (NIST) is stepping up to help organizations get a handle on the cyber risks threatening AI systems. In March, NIST updated its "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations" report.
"Despite the significant progress of AI and machine learning in different application domains, these technologies remain vulnerable to attacks," reads a NIST statement. "The consequences of attacks become more dire when systems depend on high-stakes domains and are subjected to adversarial attacks."
The European Telecommunications Standards Institute (ETSI) published a global standard for AI security in April, aimed at developers, vendors, operators, integrators, buyers, and other AI stakeholders.
ETSI's "Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements for AI Models and Systems" technical specification outlines a set of foundational security principles for an AI system's entire lifecycle.
A study published in April by McKinsey & Co., the Patrick J. McGovern Foundation, and Mozilla found that organizations increasingly adopting open-source AI technologies worry about facing higher risks than those posed by proprietary AI products.
According to the report, respondents cite benefits like lower costs and ease of use, but consider open-source AI tools riskier in areas like cybersecurity, compliance, and intellectual property.
Using AI tools in cloud environments? "Make sure your organization is aware of and prepared for the complex cybersecurity risks that emerge when you mix AI and the cloud," said Liat Hayun, Tenable's VP of Research and Product Management for Cloud Security.
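One common failure mode when AI workloads move to the cloud is training data sitting in publicly readable storage. As a hedged illustration (the bucket name is hypothetical, and this assumes the standard boto3 SDK with valid AWS credentials), here is a quick check of whether an S3 bucket holding AI data has all of its public-access-block settings enabled:

```python
import boto3
from botocore.exceptions import ClientError

def bucket_blocks_public_access(bucket: str) -> bool:
    """Return True only if all four S3 public-access-block settings are on."""
    s3 = boto3.client("s3")
    try:
        config = s3.get_public_access_block(
            Bucket=bucket)["PublicAccessBlockConfiguration"]
    except ClientError as err:
        # No configuration at all means public access is not being blocked.
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            return False
        raise
    return all(config.get(k, False) for k in (
        "BlockPublicAcls",
        "IgnorePublicAcls",
        "BlockPublicPolicy",
        "RestrictPublicBuckets",
    ))

if __name__ == "__main__":
    # Hypothetical bucket holding model training data.
    bucket = "example-ai-training-data"
    if not bucket_blocks_public_access(bucket):
        print(f"WARNING: {bucket} may allow public access to AI training data")
```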
SANS Institute published draft guidelines for AI system security in March. The "SANS Draft Critical AI Security Guidelines v1.1" document outlines six key security control categories for mitigating AI systems' cyber risks.
"By prioritizing security and compliance, organizations can ensure their AI-driven innovations remain effective and safe in this complex, ever-evolving landscape," the document reads.
In conclusion, securing AI systems is now a core priority for businesses. By following the best practices above, understanding the risks that come with open-source AI, and applying cloud security measures that protect AI data against complex attacks, organizations can keep their AI-driven innovations effective and safe.