AI for Cybersecurity Is Advancing, but Users Need to Trust It
Nvidia recently became the first company in the world to reach a $4 trillion valuation, a milestone that underscores how central AI has become to the tech industry. Yet as AI adoption grows, one thing is increasingly clear: users need to trust AI systems before those systems can be effective.
AI is being adopted across many business functions, but in cybersecurity it still lags. According to Cisco President and Chief Product Officer Jeetu Patel, the efficacy of these tools tends to be low, and the costs tend to be prohibitively high. On top of that, a significant cybersecurity skills shortage makes it difficult for companies to adopt AI effectively.
Patel argues that the biggest risk is not AI taking jobs away, but the dramatic expansion of the attack surface that AI creates. To mitigate this risk, security must be built into the foundation of AI development, including full visibility into data flows and validation of models.
"If you don't trust an AI system, you're not going to use it," Patel emphasizes. "But if you trust it, you're going to use it." This highlights the importance of building a common substrate of security across all AI systems, agents, and applications.
There is a significant disconnect between executives' excitement about AI's possibilities and their trust in, and ability to use, the systems. According to CEO surveys and readiness indexes, only 1.7% of CEOs feel prepared for AI adoption. Patel attributes this gap to three main factors: a lack of infrastructure know-how, concerns about safety and security, and a shortage of cybersecurity skills.
"Companies like Cisco are trying to work at simplifying the use of AI in the fabric of what we do," Patel says. "But to succeed, you have to lean in with AI and make sure that people understand that the only way adoption and AI scale is if you solve the trust problem."
As AI continues to advance, companies must prioritize security and trust. By doing so, they can unlock human potential while keeping their organizations secure.
The Role of Security in AI Adoption
Security is no longer just about managing risk; it's also about unlocking human potential. As the security industry gets better at managing risk, AI adoption will accelerate in step with how well security and safety are established.
To decide when to test, iterate, pilot, and launch new AI initiatives, consider the following:
- Measure variables that can be meaningfully reported and shared, and that reflect real value.
- Avoid vanity metrics chosen to impress other executives and shareholders.
- Base decisions about when to test, iterate, pilot, and launch on demonstrated progress and results.
The Future of AI Adoption
Going forward, the progress of AI will depend largely on the progress of security and on people feeling a genuine sense of trust in AI. That trust must be real and grounded in the right underlying technology to keep humans safe and secure.
The stakes are high, but with a focus on security and trust, companies can unlock human potential and reap the benefits of AI adoption.