AI, agents, and the trust gap


As we speak, I’m remodeling my kitchen and have relied heavily on ChatGPT to research sinks, compare reviews, and determine whether I can do all the pipe-work myself.

Between my personal life and my role as a technical product marketer, I’m all for AI. 
But something Jono, our Head of Product, said recently felt counterintuitive at first: fraud rings won’t be using ChatGPT’s agentic mode any time soon to do the type of fraud we usually see. That is, they won’t be opening up ChatGPT accounts en masse and using the agentic mode to replace what they can do with solvers, residential proxies, and scraping API companies like FireCrawl.
The more I thought about it, the more it made sense.
At least for now, AI tools are better described as general‑purpose hacker tooling than a full substitute for existing fraud infrastructure. We do see them used to generate fake accounts or produce code—and in some cases, you can literally see ChatGPT‑generated code patterns show up in request logs. But they’re not yet a wholesale replacement.
That nuance matters.
Like many people, I’m bullish on AI but conservative when it comes to risk—especially financial and security risk. I want to capture upside while limiting downside. Most Fortune 500 companies we work with feel the same way.
And this is where the real tension shows up.
Marketing teams want to open the floodgates. Security teams want to lock them shut. Somewhere in between is a workable middle ground—but it’s not obvious where that line should be.
So what does “sensible and forward‑looking” actually look like?
Here’s how we’ve been thinking about it at Kasada.
The same AI company is multiple things
OpenAI’s ChatGPT isn’t one thing. Depending on context, it’s a browser acting on behalf of a user, an automated scraper, or an agentic commerce client.
All three may have different cryptographic signatures, but can we trust their intended use? All three are “really ChatGPT,” but they carry completely different risk profiles:

Browser mode: A user is in the loop. They’re shopping, researching, maybe adding to cart. This looks like a customer journey with an AI assist.
Scraper mode: No user interaction. Automated requests pulling product data, pricing, and inventory. This might be competitive intelligence. It might be training data for a competitor.
Agentic mode: An agent attempting to complete transactions on a user’s behalf. Sign up, checkout, booking, redemptions. High-value actions with real consequences.

One ChatGPT. Three discrete governance problems.
Treating all of that as “just ChatGPT” creates blind spots. Each mode represents a separate governance problem, and collapsing them into one policy guarantees either over‑blocking or under‑protection.
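
To make that concrete, here’s a minimal sketch of what treating one provider as three separate policy decisions might look like. The mode names, policies, and the decide helper are illustrative assumptions, not a description of any vendor’s product.

```python
# Minimal sketch: one AI provider, three traffic modes, three separate policies.
# Mode names and policy actions here are illustrative assumptions, not a
# description of any vendor's actual implementation.
from dataclasses import dataclass
from enum import Enum


class Mode(Enum):
    BROWSER = "browser"   # a user is in the loop, shopping or researching
    SCRAPER = "scraper"   # automated data collection, no user present
    AGENT = "agent"       # an agent executing actions such as signup or checkout


@dataclass
class Policy:
    allow: bool
    notes: str


# One provider, three separate governance decisions.
CHATGPT_POLICIES = {
    Mode.BROWSER: Policy(True, "Treat as an assisted customer journey"),
    Mode.SCRAPER: Policy(False, "Rate-limit or block bulk data pulls"),
    Mode.AGENT: Policy(False, "Require explicit authorization for high-value actions"),
}


def decide(provider: str, mode: Mode) -> Policy:
    """Look up the policy for a verified provider operating in a specific mode."""
    if provider == "openai-chatgpt":
        return CHATGPT_POLICIES[mode]
    return Policy(False, "Unknown provider: default deny")


print(decide("openai-chatgpt", Mode.AGENT))
```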
Each industry wants different things
There’s no universal default for how AI traffic should be handled.
We count some of the biggest companies across e-commerce, hospitality, media, and financial services as customers. And there isn’t a clear pattern in how AI helps them; they just know they have to adapt. Every industry wants something different.
If you’re selling sneakers, you probably want AI search visibility. You want to show up when someone’s shopping agent looks for “best running shoes under $150.” But you don’t want that same agent creating accounts or burning through promo codes.
If you’re a media platform like Reddit, you may want almost none of it. You don’t want your content scraped for training data. However, you likely still want search referral traffic.
The same endpoint—say, product search—might be wide open for one business and locked down for another. There is no one‑size‑fits‑all policy—and that’s the point.
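
As a rough illustration of how far the defaults can diverge, here’s a hedged sketch in which the same endpoint carries opposite policies for two hypothetical businesses. The site profiles, paths, and actions are made up for the example.

```python
# Sketch: the same endpoint can be wide open for one business and locked down
# for another. Site profiles, endpoints, and defaults below are hypothetical.
SITE_POLICIES = {
    "sneaker-retailer": {
        "/search":       {"ai_browser": "allow", "ai_scraper": "allow", "ai_agent": "allow"},
        "/checkout":     {"ai_browser": "allow", "ai_scraper": "block", "ai_agent": "challenge"},
        "/promo/redeem": {"ai_browser": "allow", "ai_scraper": "block", "ai_agent": "block"},
    },
    "media-platform": {
        "/search":   {"ai_browser": "allow", "ai_scraper": "block", "ai_agent": "block"},
        "/articles": {"ai_browser": "allow", "ai_scraper": "block", "ai_agent": "block"},
    },
}


def policy_for(site: str, endpoint: str, traffic_class: str) -> str:
    """Return the configured action, falling back to 'block' when unspecified."""
    return SITE_POLICIES.get(site, {}).get(endpoint, {}).get(traffic_class, "block")


print(policy_for("sneaker-retailer", "/search", "ai_agent"))    # allow
print(policy_for("media-platform", "/articles", "ai_scraper"))  # block
```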
Prompts are not secure by default
Some teams assume they can rely on LLM guardrails or system prompts to constrain agent behavior. The agent’s prompt might say, “never attempt checkout without explicit user confirmation.”
That is wishful thinking. 
Prompts can be overridden. Agents can be jailbroken. The model itself might hallucinate past its constraints.
You can’t treat the agent’s instructions as a reliable security boundary. 
Governance has to happen at your edge—where you can verify identity, enforce permissions, detect anomalies, and observe behavior over time.
(If you’re interested in this topic, Lenny’s podcast has a great interview with Sander Schulhoff, an AI security researcher.)
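
For illustration, here’s a minimal sketch of what edge-side enforcement could look like: verify identity, check permissions for the specific action, and log the decision. The identities, actions, and handle_agent_request helper are hypothetical; the point is that the decision never depends on what the agent’s prompt claims.

```python
# Sketch: enforce agent behavior at the edge instead of trusting the agent's
# own instructions. Identities, actions, and permissions are hypothetical.
import logging
from typing import Optional

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("edge")

# Permissions granted per verified identity; checkout is deliberately absent.
ALLOWED_ACTIONS = {
    "verified-agent": {"browse", "add_to_cart"},
}


def handle_agent_request(identity: Optional[str], action: str) -> str:
    """Decide at the edge; whatever the agent's prompt says is irrelevant here."""
    if identity is None:
        log.info("unverified request attempting %s: deny", action)
        return "deny"
    permitted = ALLOWED_ACTIONS.get(identity, set())
    if action not in permitted:
        log.info("%s not permitted to %s: deny", identity, action)
        return "deny"
    log.info("%s performing %s: allow", identity, action)
    return "allow"


print(handle_agent_request("verified-agent", "checkout"))  # deny
print(handle_agent_request("verified-agent", "browse"))    # allow
```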

Verification is table stakes
The industry is converging on cryptographic request signing. Standards like Web Bot Auth allow agents to prove who they are, not just claim it.
This is necessary infrastructure.
But verification alone doesn’t answer the harder question: should this request be allowed?
Knowing a request came from OpenAI doesn’t tell you whether it’s a browsing request, a scraper, or an agent attempting a high‑risk action. Nor does it tell you what that agent should be permitted to do on your site.
Meaningful control requires:

Permissions per endpoint
Permissions per action
The ability to distinguish between different modes from the same provider

Identity without authorization is just better labeling.
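
Put differently, verification answers “who signed this request?” while authorization answers “what is this identity allowed to do here?”. A minimal sketch, with verify_signature as a hypothetical placeholder rather than a real Web Bot Auth implementation:

```python
# Sketch: identity (who signed the request) vs. authorization (what that
# identity may do). verify_signature is a hypothetical stand-in, not a real
# Web Bot Auth implementation.
from typing import Optional


def verify_signature(headers: dict) -> Optional[str]:
    """Placeholder verification: return the signer's identity, or None if invalid."""
    return headers.get("x-signed-by")


# Authorization is a separate, per-endpoint decision.
PERMISSIONS = {
    "openai-browser": {"GET /products", "GET /search"},
    "openai-agent": {"GET /products"},  # no checkout permission granted
}


def authorize(headers: dict, method: str, path: str) -> bool:
    identity = verify_signature(headers)
    if identity is None:
        return False  # unverified: reject outright
    return f"{method} {path}" in PERMISSIONS.get(identity, set())


print(authorize({"x-signed-by": "openai-agent"}, "POST", "/checkout"))  # False
print(authorize({"x-signed-by": "openai-browser"}, "GET", "/search"))   # True
```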
It’s still early days
Everyone’s talking about agentic commerce like it’s already here. Agents booking flights. Agents completing purchases. End-to-end automation.
That’s not what we’re seeing in the traffic.
The reality is messier. Agents browse. They research. They add items to carts. But the “book my flight from zero to 100” future? It’s not here yet—even if technical teams are running POCs with the latest standards.
Will it arrive? Probably. Soon? Maybe.
Which is exactly why rigid, static rules written today are likely to break tomorrow.
The opportunity in the gap
This is the part that matters most now.
Everyone’s preparing for a future that hasn’t fully arrived. Since every industry is different, I’d start with a simple diagram that maps the benefit against the risk. Does the benefit of answer engine optimization (AEO) outweigh the probability of your content being used for LLM training? It depends.
You have time to build the framework now, while the traffic is still small enough to understand. That gives teams time to establish visibility, set sane defaults, and create permissions that flex as capabilities mature.
The teams that wait until agentic commerce is “big enough to matter” will discover it mattered earlier than they thought—just quietly and without controls in place.
The hype says agents will transform everything overnight.
The traffic says you have a window to get this right.
Use it.
If you’re thinking about how to distinguish AI browsing from scraping from agentic action—and how to apply controls that evolve as capabilities mature—register for our upcoming webinar on January 29th.
Jono and I look forward to seeing you there.