Veza Extends Reach to Secure and Govern AI Agents
Veza has added a platform to its portfolio that is specifically designed to secure and govern artificial intelligence (AI) agents that might soon be strewn across the enterprise.
Veza, which is currently in the process of being acquired by ServiceNow, built the platform on an Access Graph the company previously developed to provide cybersecurity teams with a visual framework that enables them to use natural language prompts to more easily discover who in an organization has access to which specific resources.

That core capability now also provides the foundation needed to manage and govern AI agents, which are rapidly emerging as a third class of identity that organizations need to manage alongside humans and other classes of non-human identities such as application programming interfaces (APIs), says Rich Dandliker, chief strategy officer for Veza. A minimal sketch of that graph-query idea appears below.

Having that capability is critical because it makes it possible to determine who within an organization actually owns an AI agent that is autonomously performing a task, he added.

Just as importantly, it also enables organizations to apply policies that protect AI agents from prompt injection attacks that might be used to compromise them in a way that, for example, results in sensitive data being sent to an external website operated by a malicious actor, noted Dandliker. In effect, Veza is establishing a framework for managing the security posture of AI agents in a way that ensures least-privilege access controls are maintained to limit the scope of any potential breach, he added. A sketch of that policy-check idea also appears below.

The goal then becomes integrating the AI agent security framework developed by Veza with the control plane for managing AI agents that ServiceNow has made available. That integration should make it simpler for IT teams to centrally control the management of AI agents within the context of an IT service management (ITSM) workflow.

Hopefully, cybersecurity and IT teams are proactively looking to apply and extend governance policies to AI agents before there is some type of major breach. However, if history is any guide, it's probable there will be a number of significant incidents. On the plus side, organizations such as the OWASP GenAI Security Project have created a top 10 list of the potential security threats that organizations are likely to encounter as they build and deploy AI agents, while the U.S. National Institute of Standards and Technology (NIST) is building a taxonomy of attacks and mitigations for securing AI agents.

Regardless of how AI agents are managed and secured, the one thing that is certain is that the overall size of the attack surface cybersecurity teams are now expected to defend is about to increase dramatically. Fortunately, a recent Futurum Group survey suggests that awareness of the cybersecurity implications of AI agents is already fairly high, with more than three-quarters (78%) of respondents noting that trust and governance are key barriers to adoption.

The challenge, of course, is that given the ephemeral nature of the tasks many AI agents will be performing, and the simple fact that many of them will be operating in the shadows, there will also be more blind spots than ever.
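To make the identity-graph concept concrete, here is a minimal, purely illustrative sketch of the underlying idea: humans, non-human service identities such as APIs, and AI agents all modeled as identities in a single graph that can be queried for who can reach a given resource. The names and structure here are assumptions for illustration only, not Veza's actual Access Graph schema or API.

```python
# Illustrative sketch only: an access graph where AI agents are a third class
# of identity alongside humans and service identities. All identifiers and
# resource names are hypothetical.
from collections import defaultdict

edges = defaultdict(set)  # identity -> resources it can reach
kind = {}                 # identity -> identity class

def grant(identity, identity_kind, resource):
    kind[identity] = identity_kind
    edges[identity].add(resource)

grant("alice@corp.example", "human", "payroll-db")
grant("billing-api", "service", "payroll-db")
grant("expense-report-agent", "ai_agent", "payroll-db")  # third identity class

def who_has_access(resource):
    """The core access-graph question: which identities can reach a resource?"""
    return [(i, kind[i]) for i, resources in edges.items() if resource in resources]

print(who_has_access("payroll-db"))
# [('alice@corp.example', 'human'), ('billing-api', 'service'),
#  ('expense-report-agent', 'ai_agent')]
```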
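Similarly, the least-privilege idea Dandliker describes can be sketched as a simple policy check: an agent may only touch the resources and egress destinations it has been explicitly granted, so even a successful prompt injection cannot redirect data to an attacker-controlled site. Again, the agent identifier, owner field and allow-lists below are hypothetical, not a real product configuration.

```python
# Illustrative sketch only: a default-deny, least-privilege policy gate for an
# AI agent's actions. Every name here is a hypothetical placeholder.
AGENT_POLICY = {
    "expense-report-agent": {
        "owner": "alice@corp.example",                  # accountable human owner
        "allowed_resources": {"expense-db"},            # least-privilege scope
        "allowed_destinations": {"erp.corp.example"},   # approved egress only
    }
}

def authorize(agent, action, target):
    policy = AGENT_POLICY.get(agent)
    if policy is None:
        return False  # unknown agents get no access by default
    if action == "read":
        return target in policy["allowed_resources"]
    if action == "send":
        return target in policy["allowed_destinations"]
    return False  # unrecognized actions are denied

# A legitimate task succeeds; an injected "send data to attacker.example" fails.
assert authorize("expense-report-agent", "read", "expense-db")
assert not authorize("expense-report-agent", "send", "attacker.example")
```

The design choice worth noting is the default-deny posture: anything not explicitly granted to a specific agent, including its outbound destinations, is refused, which is what limits the blast radius of a compromised or manipulated agent.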
