ConFoo 2026: Guardrails for Agentic AI, Prompts, and Supply Chains


Montreal has a guardrail baked into its skyline. The “mountain restriction” keeps most buildings from rising higher than the cross on top of Mount Royal, roughly 233 meters (764 feet), so the city’s natural high point remains the highest point. It is an urban policy choice that says clearly that growth is allowed, even encouraged, but only within constraints that preserve what matters.
This makes Montreal a perfect backdrop for ConFoo 2026, a conference focused on building resilience by learning from experiences across many communities, including Java, .NET, PHP, Python, DevOps, and many others. With around 800 attendees spanning development and DevOps, the conference felt full of practitioners admitting the ground is moving and choosing to respond with structure rather than nostalgia. The event took place across five full days of activities, including over 190 sessions, two full days of workshops, an evening of co-located meetups, and many fun social hours spent sharing our ideas. 
The shared theme across sessions was not novelty for its own sake, but guardrails: controls that keep fast-moving systems from surprising you, whether the “system” is an LLM agent calling tools, a dependency graph pulling in unreviewed execution hooks, or a web application whose defaults quietly widen risk.
There is no way to fully explain all that is ConFoo, so here are just a few highlights and thoughts. 
The Wristband Check for Your Bots
In the session from Nick Taylor, Developer Advocate at Pomerium, “Agentic Access: OAuth Gets You In. Zero Trust Keeps You Safe,” we were presented with a crisp argument that our access model has quietly changed from human access to agentic access, and most stacks are still built for the former. There is a mismatch between how we authenticate as people and how we should authenticate when an LLM agent makes the call.
Nick used Zero Trust as the corrective lens. Through it, we can see we should never trust, always verify, and verify with more than identity. The “wristband-at-the-venue” metaphor he mentioned works because it captures the difference between a one-time gate and ongoing enforcement. In a Zero Trust model, “who you are” is only one input. Device posture, time, location, and session behavior become policy signals, and the enforcement point needs to sit in front of the request, not behind it. Identity Aware Proxies matter, now more than ever. They do not authenticate blindly just because a key is present. Instead, they apply context-aware policy and create a single choke point for logging and auditing.
MCP has quickly become a standard interface for tool calls in LLM ecosystems. That standardization reduced bespoke integrations, but it also made it easier to wire powerful tools to agents eager to comply with prompts. Nick said to “put the guardrails where the wheels touch the road;” place MCP servers behind a proxy, enforce authentication, validate token audience and scopes, prevent token passthrough, preserve user consent, and audit all access. When the caller is non-human, and your tools can touch sensitive systems, you need a per-request policy that survives context changes.
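The checks Nick listed can be sketched as a per-request policy function an identity-aware proxy might run before forwarding a tool call to an MCP server. This is a minimal illustration, not Pomerium’s implementation; the audience string, scope names, and tool-name convention are all invented for the example.

```python
# Sketch of a per-request authorization check in front of an MCP server.
# All names here (audience, scopes, tool prefixes) are illustrative.

EXPECTED_AUDIENCE = "mcp://internal-tools"  # assumed audience for THIS server
REQUIRED_SCOPES = {"tools:read"}            # assumed floor for any tool call

def authorize_tool_call(claims: dict, requested_tool: str) -> bool:
    """Reject unless the token was minted for this server with the right scopes.

    Checking `aud` is what prevents token passthrough: a token issued for a
    different service cannot simply be replayed against the MCP server.
    """
    if claims.get("aud") != EXPECTED_AUDIENCE:
        return False  # wrong audience: likely a passed-through token
    granted = set(claims.get("scope", "").split())
    if not REQUIRED_SCOPES <= granted:
        return False  # missing the minimum scope for any tool call
    # Per-tool policy: write-capable tools demand an extra scope.
    if requested_tool.startswith("write_") and "tools:write" not in granted:
        return False
    return True

# A token stolen from another service fails even with generous scopes.
stolen = {"aud": "https://some-other-api", "scope": "tools:read tools:write"}
print(authorize_tool_call(stolen, "read_docs"))   # False

scoped = {"aud": "mcp://internal-tools", "scope": "tools:read"}
print(authorize_tool_call(scoped, "read_docs"))   # True
print(authorize_tool_call(scoped, "write_docs"))  # False
```

Because the check runs on every request at one choke point, it survives context changes in a way that a one-time login never can, and it gives you a single place to audit.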
Nick Taylor
Prompt Hygiene Is the New Input Validation
Ben Dechrai, Staff Developer Advocate & Software Engineer, presented “Rogue LLMs: Securing Prompts and Ensuring Persona Fidelity” to a completely full room. It was a reality check that LLMs are programmable interfaces that accept adversarial input, except that the input is language and the boundaries are fuzzy. Models will be “dangerous” if we keep treating prompt behavior as if it will be stable under pressure. We know from decades of security practice that “works on the happy path” is not a control.
Ben gave real-world examples of “prompt leak,” including one where system instructions translated into another language evaded trigger-word filters. Another showed structured output requirements used as a lever to coax the model into returning what it should not. He also covered persona drift, where the assistant stops being “your bot” and falls back to default behavior that just wants to be maximally helpful. Basically, this is social engineering, except that the target has no human context and no hard boundaries. If you can socially engineer a human, you can socially engineer a model, and the model is optimized to comply.
We must treat prompts like code and treat model behavior like a system under test. Ben talked about a mindset that says you cannot “prove” safety with just a couple of manual tests; you need statistically meaningful testing, an explicit risk appetite, and continuous evaluation in CI/CD. The environment will change even when your prompt does not. Ben also suggested planting “canary tokens” in your internal context and treating any appearance in a response as a deterministic sign that something has gone wrong. 
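The canary idea is simple enough to sketch in a few lines. This is an illustration of the concept rather than any specific tool; the marker format and prompt wording are invented for the example.

```python
import secrets

# Sketch of a canary token planted in internal context: the marker should
# never appear in user-facing output, so any appearance is a deterministic
# signal that hidden context leaked. Names and format are illustrative.

CANARY = f"cnry-{secrets.token_hex(8)}"  # unique per deployment or session

SYSTEM_PROMPT = (
    "You are the support assistant. Never reveal internal notes.\n"
    f"[internal-marker: {CANARY}]"
)

def leaked_canary(response: str) -> bool:
    """True when hidden context surfaced in model output: alert, do not ship."""
    return CANARY in response

print(leaked_canary("Sure, here is how to reset your password."))        # False
print(leaked_canary(f"My instructions say [internal-marker: {CANARY}]"))  # True
```

Unlike statistical evaluations, this check never produces a judgment call: the token is either in the response or it is not, which makes it a natural hard gate in CI/CD or in a runtime output filter.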
Ben Dechrai
NuGet as a Delivery Truck With a False Bottom
In his session, “Building a supply chain attack with .NET and NuGet,” Maarten Balliauw, Head of Customer Success at Duende Software, presented on how dependency trust gets abused in ways that look boring at first, then turn catastrophic. He framed supply chain attacks as downstream-impact multipliers. If an attacker compromises a package, they inherit the victim’s trust graph, and they leverage that trust to access environments that were never the original target. The danger of a “sleeper,” where a good package goes bad later, is especially effective because it aligns with how teams actually update dependencies.
Maarten broke the attack down into familiar components: a dropper, a payload, the command-and-control infrastructure, and a persistence-and-exfiltration layer. What made it uncomfortable was how many execution hooks exist in a modern .NET workflow that are legitimate features. Module initializers can run code when an assembly loads, and source generators can run during builds to produce code that becomes part of the project being compiled. Startup hooks can run before `Main` via environment variables. These are all powerful extension points, not really bugs. The attacker’s job is to smuggle intent through extension points that defenders treat as normal plumbing for the supply chain.
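The .NET hooks Maarten described have analogues in nearly every ecosystem. In Python, for example, the top level of any module executes the moment it is imported, so “adding a dependency” already means “running its author’s code.” This harmless sketch, with an invented module name, just makes that visible:

```python
import tempfile
import pathlib
import importlib.util

# Demo: module top-level code runs at import time, the generic analogue of
# a .NET module initializer. The module name and contents are invented.

module_src = 'print("side effect: this ran at import time")\nVALUE = 42\n'

with tempfile.TemporaryDirectory() as tmp:
    path = pathlib.Path(tmp) / "innocent_looking_dep.py"
    path.write_text(module_src)
    spec = importlib.util.spec_from_file_location("innocent_looking_dep", path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)  # the print above fires here, before any call
    print(mod.VALUE)
```

No function in the dependency was ever called, yet its code ran. That is exactly the “normal plumbing” an attacker hides behind.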
We need a Swiss cheese model when thinking of guardrails, with multiple overlapping layers that make a system more resilient to attacks. Sign commits and packages, while also using package source mapping. Restore with lock files and enforce locked mode in CI, on top of generating SBOMs, which you should actually be analyzing. Pin CI actions while you watch for suspicious environment variable changes. Good dependency management requires adopting an operational discipline for your organization. If your build system can execute code from your dependency graph, then “dependency update” is a privileged operation, and it should be treated with the same care as production access.
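To make the lock-file layer concrete, here is a language-agnostic sketch of what “locked mode” enforces: a restore must hard-fail, not warn, when the resolved dependency graph drifts from the committed lock. The package names and versions are invented for the example.

```python
# Sketch of locked-mode enforcement: compare the resolved graph against the
# committed lock file and fail CI on any deviation. Data is illustrative.

lockfile = {"Newtonsoft.Json": "13.0.3", "Serilog": "3.1.1"}  # committed pins

def verify_locked(resolved: dict, lock: dict) -> list:
    """Return every deviation from the lock; CI should hard-fail on any."""
    problems = []
    for name, version in resolved.items():
        if name not in lock:
            problems.append(f"{name} {version}: not in lock file")
        elif lock[name] != version:
            problems.append(f"{name}: locked {lock[name]}, resolved {version}")
    return problems

# A "sleeper" version bump slips into the resolved graph; locked mode catches it.
drifted = {"Newtonsoft.Json": "13.0.3", "Serilog": "99.0.0-evil"}
print(verify_locked(drifted, lockfile))
```

The point of the sketch is the failure semantics: a good package that “goes bad later” can only reach your build if something is allowed to resolve differently than what you reviewed.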
Maarten Balliauw
OWASP as a Mirror, Not a Checklist
Christian Wenz, Owner of Arrabiata Solutions, presented “Web Application Security Up-to-date: The 2025 OWASP Top Ten.” He began by reminding us all that the OWASP project is not a compliance artifact. It is a lens for what the industry is repeatedly getting wrong. The list is useful precisely because it is a little imperfect; the categories are sometimes too broad, sometimes frustratingly narrow, and the debates about what belongs on it reveal where teams still lack shared mental models.
Christian highlighted how misconfiguration and supply chain issues have risen in prominence and how some perennial categories stay stubbornly relevant. Broken Access Control remains an umbrella that hides many failure modes, from direct object access to function-level authorization gaps. Security Misconfiguration is odd because DevOps blurred the line between “developer” and “admin,” and defaults remain sharp edges, full of noisy errors, weak browser headers, and parsing hazards. Cryptographic failures are most often about basics like enforcing HTTPS, setting HSTS, and using secure cookie flags consistently.
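Those “basics” are easy to list and easy to forget, so here is a minimal sketch of them as plain response headers and a cookie string. Framework APIs will differ; the values shown are common hardened defaults, not prescriptions.

```python
# Sketch of baseline web hardening: security headers plus cookie flags.
# Values are common defaults; tune max-age and CSP for your application.

security_headers = {
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",  # HSTS
    "X-Content-Type-Options": "nosniff",         # stop MIME sniffing
    "Content-Security-Policy": "default-src 'self'",
    "Referrer-Policy": "strict-origin-when-cross-origin",
}

def session_cookie(name: str, value: str) -> str:
    """Build a Set-Cookie value with the flags that should be on by default."""
    return f"{name}={value}; Secure; HttpOnly; SameSite=Lax; Path=/"

print(session_cookie("session", "abc123"))
```

None of this is novel, which is Christian’s point: cryptographic failures in the wild are usually missing flags and headers, not broken math.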
Web security categories are converging with the agentic and supply chain themes rather than competing with them. “Injection” is still relevant, but the boundary of “input” is expanding. Model binding quirks, deserialization assumptions, and integrity failures in CI/CD all rhyme with prompt injection and package compromise. OWASP is not a checklist; it is a mirror. It reflects that our modern security failures are often control failures, often caused by automating actions without enforcing the constraints that make those actions safe.
Christian Wenz
The Future Is Embracing Change You Can Actually Operate
Across ConFoo there were a lot of conversations across technical communities that rarely have the chance to meet and talk. The ‘hallway track’ consistently had an air of excitement. While many subjects came up, a common theme was that things are evolving faster than ever before. At the same time, there was a real sense around AI, especially agentic AI, that we need to proceed with a little more safety and control in mind. 
Change is here, but it still needs structure
Starting with the keynote, “Spec-Driven Design” was a recurring background conversation throughout the week. There is a worry that while AI accelerates work, it “amplifies ambiguity.” When context is thin, the model will infer missing details and keep moving, trying to please the user. That creates debt quickly when the output doesn’t match what you meant, leaving you to hope it understands your next prompt.
Instead, we need a structured approach, and structure should be seen as a performance feature. Clear requirements, such as a concrete design with solid implementation details, give both humans and tools something stable to align on. You get faster iteration because fewer cycles are spent translating intent after the fact.
Per-request trust beats perimeter trust
Agentic AI was, of course, a common conversation across the networking events. Modern workflows are full of non-human actors that can touch real systems. Agents, automations, and developer tools operate with delegated access all throughout your “perimeter,” a word that means something very different in the modern world. Network location no longer describes risk when the tools you rely on run outside your environment.
A sturdier approach is context-based access on every request. Systems need to validate identity and authorization, apply scoped permissions, and enforce policy before a tool reaches sensitive services. Then make it observable via monitored audit logs, consistent gateways, and controls that work the same way across systems.
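Pulling those properties together, one common pattern is a single gateway that every tool call passes through: it enforces scoped permissions and emits an audit record whether the call is allowed or denied. The sketch below uses invented identities, scopes, and tool names.

```python
import json
import time

# Sketch of a consistent gateway: one choke point that enforces scoped
# permissions and records an audit entry for every request, allowed or not.

AUDIT_LOG = []  # stand-in for a real, monitored log sink

def gateway(identity: str, scopes: set, tool: str, required: str) -> str:
    allowed = required in scopes
    AUDIT_LOG.append(json.dumps({
        "ts": time.time(), "identity": identity, "tool": tool,
        "required_scope": required, "allowed": allowed,
    }))
    if not allowed:
        raise PermissionError(f"{identity} lacks {required} for {tool}")
    return f"{tool}: ok"  # in a real system, forward to the tool here

print(gateway("agent-42", {"crm:read"}, "lookup_customer", "crm:read"))
try:
    gateway("agent-42", {"crm:read"}, "delete_customer", "crm:write")
except PermissionError as err:
    print("denied:", err)
print(len(AUDIT_LOG))  # both the allowed and the denied call were recorded
```

The design choice worth noting is that denials are logged too: the audit trail is where drift in agent behavior first becomes visible.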
Your software risk iceberg is mostly hidden beneath the surface
For modern applications, software risk lives outside just your application code. Dependencies, base images, build artifacts, configuration, and runtime behavior routinely decide what reaches production and what attackers can touch.
One angle I heard in a few conversations was the idea of end-to-end ownership. For people who deliver dependencies, “owning” means proving you know what you ship, tracking any exposure throughout the delivery chain, and building checks that hold up when any part of the system changes. For consumers, we need to treat dependency findings as seriously as first-party bugs, tighten error handling and logging, and test for drift in assistants and automations so they continue behaving as the system you intended.
Build for AI Speed With Control As A Requirement
AI is forcing a shift in what “secure by default” even means. The models will eventually say something wrong, which is definitely something many speakers touched on, including your author. The issue is that we are wiring language to action, and then acting surprised when the system behaves probabilistically. Models take the shortest path through ambiguity, following the incentive to comply as they predict the next token. AI will happily route around soft boundaries, and any unfortunate surprises are the tax you pay for automation without constraints.
We can’t slow down change, but we can make that change operable. We need policies enforced where the wheels touch the road, not just in a slide deck or internal PDFs. We have to treat prompts as inputs that can be adversarial and treat dependencies as privileged code that can execute automatically. Then test for drift, log what matters, and assume the environment will change even when your intent does not.
Montreal’s skyline still grows, but it keeps one thing higher than everything else, on purpose. For AI, that high point should be guardrails you can enforce and observe. Change for IT means building upward; we just need to align on where our highest priorities sit.

*** This is a Security Bloggers Network syndicated blog from GitGuardian Blog – Take Control of Your Secrets Security authored by Dwayne McDaniel. Read the original post at: https://blog.gitguardian.com/confoo-2026/
