Zero Trust in the Age of AI: Why the Classic Model Isn’t Enough Anymore


Here’s a statement that should make any security architect pause:
In most enterprise environments today, machine identities – service accounts, API keys, deployment pipelines, and increasingly AI agents – outnumber human identities by a signif

[…Keep reading]

Anthropic seeks to renegotiate its AI deal with US DoD, says report

Anthropic seeks to renegotiate its AI deal with US DoD, says report

Here’s a statement that should make any security architect pause:
In most enterprise environments today, machine identities – service accounts, API keys, deployment pipelines, and increasingly AI agents – outnumber human identities by a significant margin. In AI-native companies, that ratio is already 10 to 1 or higher.
Zero Trust was designed with human users as the primary subject. The model assumes identity belongs to a person who authenticated with credentials, uses a device you can evaluate, and accesses applications in recognizable patterns. When something deviates from that pattern, behavioral analytics flags it.
AI agents don’t fit that model. And the mismatch is creating security gaps that traditional Zero Trust frameworks weren’t designed to close.

What Changed When AI Entered the Picture
The first wave of AI in enterprise environments was relatively contained. Machine learning models trained on internal data, analytics pipelines, recommendation engines. These were workloads – they had identities, made API calls, and could be secured reasonably well with existing approaches.
The second wave – large language models, AI agents, and autonomous systems – is different in kind, not just in degree.
AI agents act autonomously. A human user making access requests follows recognizable patterns. An AI agent running a workflow might make hundreds or thousands of API calls in minutes, query multiple data sources in sequence, generate and execute code, and trigger downstream actions – all autonomously. The behavioral baseline for an AI agent looks nothing like a human user.
AI agents operate across long contexts. An AI workflow might start with a user request, retrieve context from multiple systems, call external APIs, process results, and write output to another system – all in a single execution. This multi-hop, multi-system access pattern is exactly what lateral movement looks like. Distinguishing legitimate agent behavior from adversarial behavior is genuinely hard.
AI agents inherit and amplify permissions. If an AI agent runs in the security context of a human user or a service account with broad permissions, it can do whatever that identity can do. And because it acts autonomously and at machine speed, any permission misuse happens before a human can intervene.
AI agents can be manipulated. Prompt injection attacks – where malicious content embedded in data the agent processes causes it to perform unintended actions – are a real and growing class of attack. An agent that trusts content from external sources without verification can be weaponized by that content.

The Machine Identity Problem
Before even getting to AI agents, there’s a foundational problem: most organizations have poor visibility and governance over non-human identities.
When I was scaling a CIAM platform to handle billions of user authentications, we were acutely aware of human identity management. But the service accounts, API keys, and machine-to-machine connections multiplied faster than anyone tracked them. This pattern holds across the industry.
A 2024 assessment across enterprise environments found that organizations with good visibility into their human identity inventory often had three to five times more non-human identities than human ones – and a fraction of the governance.
Here’s what that looks like in practice:

API keys embedded in code repositories with no expiration and no ownership
Service accounts with broad permissions granted for a project that ended two years ago
OAuth application grants that were authorized by employees who have since left
AI pipeline credentials with read access to data systems far beyond what the pipeline actually needs

Every one of these is a potential pivot point for an attacker. And because machine identities are less visible and less monitored than human ones, they’re increasingly the preferred target.
The SolarWinds breach in 2020 exploited precisely this gap. The malicious code inserted into the build pipeline operated using legitimate service account credentials. No human user behaved anomalously. The compromise lived entirely in the machine-to-machine communication layer.

How Zero Trust Needs to Evolve for AI
Classical Zero Trust principles still apply. Never trust, always verify. Least privilege. Assume breach. But the implementation needs to extend in several specific directions to handle AI agents and machine identities effectively.
1. Every AI Agent Needs Its Own Identity
An AI agent should not run under a shared service account or a human user’s identity. It should have its own workload identity, with permissions explicitly scoped to what that specific agent needs to do its specific job.
This means:

Workload identity credentials issued per-agent, not per-application
Short-lived credentials where possible (tokens with brief expiry rotated frequently)
No standing permissions – access granted when needed, revoked when the workflow completes
Machine identity lifecycle management as rigorous as human identity management

The practical challenge: AI frameworks and orchestration platforms vary widely in how they handle identity. Some make this easy; many require deliberate work to implement correctly. Treat agent identity as a first-class design requirement, not an afterthought.
2. Least Privilege for Agentic Workflows Is Harder – and More Important
A human user with least privilege access typically needs read access to their own work, write access to their own projects, and limited access elsewhere. Scoping that is well understood.
An AI agent that orchestrates a complex workflow might legitimately need to read from a database, call an external API, write to a document store, and trigger a notification – in sequence, not simultaneously. Traditional least privilege models often grant all the permissions the workflow might ever need upfront.
A more sophisticated approach uses just-in-time permission grants: the agent requests and receives the permission it needs for each step, uses it, and releases it. This requires the orchestration layer to mediate permission requests rather than granting all permissions at initialization.
This is architecturally more complex but significantly reduces the blast radius if the agent is compromised or manipulated mid-workflow.
3. Protect AI Agent Inputs and Outputs
Prompt injection is the Zero Trust problem for the AI data plane. If an AI agent processes data from external sources – web content, emails, documents, user inputs – any of that content could contain instructions intended to manipulate the agent’s behavior.
Zero Trust for AI inputs means:

Treating all external content as untrusted data, not as instructions
Implementing input sanitization and validation before agent processing
Separating the trust level of agent instructions (from your system prompt, your code) from the trust level of agent inputs (user data, external content)
Monitoring agent outputs for anomalies that might indicate manipulation

This is a relatively new problem domain, and the tooling is still maturing. But the principle maps directly to existing Zero Trust thinking: explicit verification, never implicit trust.
4. Behavioral Baselines for AI Agents
UEBA (User and Entity Behavior Analytics) was built for human users and, to some extent, traditional service accounts. AI agents behave differently and require different baselines.
An AI agent running normally might make 500 API calls in ten minutes. That would be catastrophically anomalous for a human user. Building baselines that correctly distinguish normal high-volume agent behavior from anomalous agent behavior – excessive data access, calls to unexpected endpoints, anomalous output volumes – requires agent-aware analytics.
This means:

Agent activity should be logged separately from human user activity, with agent-specific context
Behavioral baselines should be established per-agent-type, not applied from human user templates
Anomaly detection rules should account for the burst-and-pause pattern typical of AI workflows
Threshold violations should trigger agent suspension and review, not just alerting

5. Human-in-the-Loop for High-Stakes Decisions
The most dangerous class of AI agent action is the irreversible high-stakes operation: sending an external communication, modifying production data, executing a financial transaction, deprovisioning an account.
Zero Trust for AI agents should include explicit checkpoints for these operations, where a human must review and approve before the agent proceeds. This isn’t a performance optimization – it’s a security control that limits the damage an adversarial manipulation can cause.
Designing these checkpoints into the workflow architecture from the beginning is far easier than retrofitting them later.

The Threat Landscape AI Is Creating
Understanding what you’re defending against sharpens how you build defenses.
AI-assisted reconnaissance: Attackers are using AI to accelerate target profiling, identify exposed credentials and API keys in public repositories, and analyze large datasets for attack paths. The speed of attack preparation has increased dramatically.
AI-generated phishing: The barrier to convincing social engineering has dropped. AI-generated phishing content can be personalized, grammatically correct, and contextually appropriate at scale. Traditional spam filters that look for poor writing or generic templates are less effective.
Adversarial AI agents: As AI agents become more capable and more prevalent in enterprise environments, using them as attack vectors becomes more attractive. A compromised or manipulated AI agent with legitimate credentials can do significant damage quietly.
LLM-specific attacks against your own AI systems: If you’re running internal LLMs or using AI services that ingest internal data, adversarial inputs designed to exfiltrate data or manipulate outputs become a real concern.

Practical Steps for CISOs and Security Teams Today
The AI security problem can feel overwhelming – too many new threat vectors, too few proven defenses. Here’s a practical prioritization.
This quarter:

Inventory all AI tools and services currently in use across the organization (the number is higher than IT knows)
Audit all service accounts and API keys associated with AI workloads; revoke anything not actively used
Implement conditional access policies that apply specifically to AI service accounts
Define acceptable-use policy for AI tools and communicate it explicitly

This year:

Implement workload identity management for AI agents you develop or deploy
Establish agent-specific logging and build initial behavioral baselines
Design human-in-the-loop controls for irreversible agent actions
Conduct a prompt injection threat assessment for any internal LLM deployments
Extend your access certification process to cover AI tool authorizations

Ongoing:

Treat AI agent identity with the same rigor as privileged human identity
Incorporate AI attack scenarios into red team exercises
Stay current with emerging standards (the OAuth working group is actively developing workload identity extensions; NIST is updating SP 800-207 guidance to incorporate AI considerations)

The Core Insight
Zero Trust’s founding insight – that implicit trust based on network location is the fundamental flaw in enterprise security – applies with equal force to AI agents and machine identities.
The extension for the AI era is this: trust shouldn’t be implicit based on any identity signal, human or machine. Every access request should be verified against explicit policy. Every identity should carry only the minimum permissions needed. Every system should be designed assuming that any component can be compromised.
AI doesn’t break Zero Trust. But it does expose the parts of Zero Trust that most organizations implemented incompletely. Machine identity governance, east-west traffic control, behavioral analytics for non-human entities – these were always part of a complete Zero Trust architecture.
The AI era just makes it urgent to get them right.

Deepak Gupta is the Co-founder & CEO of GrackerAI and an AI & Cybersecurity expert with 15+ years in digital identity and enterprise security. He writes about cybersecurity, AI, and B2B SaaS at guptadeepak.com.

*** This is a Security Bloggers Network syndicated blog from Deepak Gupta | AI & Cybersecurity Innovation Leader | Founder's Journey from Code to Scale authored by Deepak Gupta – Tech Entrepreneur, Cybersecurity Author. Read the original post at: https://guptadeepak.com/zero-trust-in-the-age-of-ai-why-the-classic-model-isnt-enough-anymore/

About Author

Subscribe To InfoSec Today News

You have successfully subscribed to the newsletter

There was an error while trying to send your request. Please try again.

World Wide Crypto will use the information you provide on this form to be in touch with you and to provide updates and marketing.