AI Is Making Security More Agile: Highlights from ChiBrrCon 2026
In 1900, Chicago completed one of the most ambitious engineering projects ever attempted. Engineers reversed the flow of the Chicago River so that sewage would no longer contaminate Lake Michigan, the city’s drinking water source. They did not filter harder or build temporary containment. They redesigned the entire system’s direction. This effort mirrors the reality everyone in information security now faces, as AI-driven code and applications create new challenges that require bigger answers than just more alerts and tools. This spirit of solving a big challenge, together, carried through every session at ChiBrrCon 2026.
This year marked the 6th installment of ChiBrrCon, an enterprise-focused security conference hosted on the Illinois Institute of Technology campus. This was the largest ChiBrrCon as well, with over 800 tickets sold. Throughout this single-day event, 27 speakers shared their knowledge, war stories, and best practices for securing the enterprise. There were also hands-on villages and competitions, where folks could learn some new skills and make some new connections.
Across all the sessions, there was a shared urgency as every speaker addressed the question of what we do in a world where AI is driving code and applications. The answers were never just technical; they were honest discussions of the structural decisions and changes we need to make.
Here are just a few of the highlights from this year’s ChiBrrCon.
Land the Plane Before You Write the Postmortem
In his session “Resiliency through Adversity: Comparing ‘Flight 1549’ with a Cyber Breach,” Joshua Peltz, VP of Zero Networks, explained crisis response as disciplined execution under pressure. Drawing on his experience aboard Flight 1549, the flight Capt. Sully famously landed on the Hudson, he described how survival depended on preparation deposits made long before impact, immediate anomaly recognition, and unambiguous authority during execution.
The aviation parallel worked because it was procedural, just as incident response needs to be. Detection happened in seconds. Roles were predefined. Communication was controlled. The objective was stabilization, not explanation.
Joshua emphasized sequencing: containment first, recovery second, and attribution later. Security teams often invert this order. They chase root cause while lateral movement continues. That instinct feels analytical, but it increases blast radius.
He also surfaced an uncomfortable reality: runbooks that are not stress-tested are documentation theater. Communication channels that are not practiced introduce chaos when adrenaline rises. Authority models that are not explicit collapse when escalation happens. Resilience, in his framing, takes rehearsal.
Joshua Peltz
Cutting Through The Noise Of 90 Billion Daily Events
In “Modernizing Security Operations in a World of AI Threats,” Paul Hill, Cortex Regional Sales Manager at Palo Alto Networks, presented a structural critique of how modern SOCs accumulate complexity. He shared firsthand testimony of watching his team use ML-powered automation to collapse tens of billions of daily log events into a handful of meaningful incidents. The real challenge was the cohesion of the data, not the detection volume.
Paul described how years of well-intentioned tooling decisions created fragmentation, where separate detection engines, SIEM pipelines, and response workflows introduced operational friction and context switching. Each part was optimized in isolation, and the end result was blindness to the whole story.
He described what they did at Palo Alto to consolidate telemetry into a unified data model, allowing context to travel with the signals. AI was positioned as a stitching mechanism, grouping alerts into coherent incident stories that reflect attacker movement rather than system boundaries. Repetitive triage work was automated so analysts could focus on investigation and engineering.
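The stitching idea described above can be sketched in a few lines. This is a hypothetical illustration, not Palo Alto’s actual correlation engine: it groups raw alerts that share entities (here, host and user) into incident “stories” so context travels with the signal. All field names and the grouping rule are assumptions for the example.

```python
from collections import defaultdict

def group_alerts(alerts):
    """Bucket alerts that share a (host, user) pair into one incident story."""
    incidents = defaultdict(list)
    for alert in alerts:
        # A real engine correlates on many signals; (host, user) stands in
        # here for shared attacker context.
        key = (alert["host"], alert["user"])
        incidents[key].append(alert)
    return [
        {"entities": key, "alerts": group, "count": len(group)}
        for key, group in incidents.items()
    ]

raw = [
    {"host": "web-01", "user": "svc_app", "rule": "suspicious_login"},
    {"host": "web-01", "user": "svc_app", "rule": "new_process"},
    {"host": "db-02", "user": "admin", "rule": "port_scan"},
]
# Three raw alerts collapse into two incident stories.
stories = group_alerts(raw)
```

The point of the sketch is the shape of the transformation: an analyst reviews two coherent stories instead of three disconnected alerts, and the ratio only improves as volume grows.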
Ultimately, this led to the elimination of human-driven Level 1 triage. He was clear that there was no headcount reduction. Analysts moved into threat hunting and detection engineering. Burnout declined because the system removed the repetition.
Paul Hill
Fluency Is Not Judgment
In his session “Gen AI Ain’t Your Buddy: Neither Is Your Lawnmower,” Bill Bernard, Field CTO at Between Two Firewalls, delivered a behavioral warning about generative AI adoption. His fear is not that AI is intelligent but that it feels intelligent.
Bill grounded generative AI in a practical definition. It predicts statistically likely output based on training data and rule constraints. It does not understand context in the human sense. It does not evaluate ethical nuance. It does not possess lived experience.
He walked through examples of confidently wrong outputs, fabricated quotes, and biased responses that appear credible because they are well structured. He summed this up as “fluency creates an illusion of authority.” The operational risk lies in how easily teams accept polished language as verified information.
He reminded us that AI is just another tool, like your lawnmower. Tools excel within defined tasks. They fail when treated as cognitive partners. Generative AI is powerful for summarization, drafting, and preparation. It is fragile when asked to replace validation or expert judgment. He urged us to treat AI as infrastructure rather than a companion. He thinks that is the maturity step most teams have not yet taken.
Bill Bernard
Risk Reduction Requires Building An Inventory
Sean Juroviesky, Security Architect at SoundCloud, presented “The Risky Business of AI Illiteracy,” framing AI risk as an extension of classical security fundamentals: overprivileged identities, injection vulnerabilities, misconfigurations, and weak segmentation. These issues have remained the dominant exposure vectors no matter how the technology has evolved. Sean argued that AI simply accelerates these patterns.
Sean said we need to anchor the conversation with our teams, especially management, in “risk math.” Risk equals threat plus vulnerability. Without full visibility into your environments and data flow, neither side of that equation is measurable.
They emphasized mapping of trust boundaries and understanding how data moves across services. AI systems sit inside identity providers, API gateways, orchestration platforms, and third-party integrations. Focusing exclusively on model behavior while ignoring those interactions produces blind spots.
Threat modeling, inventory discipline, and iterative review remain the backbone of defensible security. Automation does not replace them. It magnifies their absence.
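The “risk math” framing above can be made concrete with a toy inventory. This is a hypothetical sketch using the additive formula as reported from the talk (risk equals threat plus vulnerability); the asset names and scores are invented for illustration, and the real lesson is that without an inventory, neither term is measurable.

```python
# Illustrative asset inventory with per-asset threat and vulnerability scores.
# Scores here are arbitrary 1-5 values; a real program would derive them
# from threat modeling and discovered misconfigurations.
inventory = [
    {"asset": "api-gateway", "threat": 4, "vulnerability": 3},
    {"asset": "llm-orchestrator", "threat": 5, "vulnerability": 4},
    {"asset": "internal-wiki", "threat": 2, "vulnerability": 2},
]

def risk_score(entry):
    """Risk = threat + vulnerability, per the talk's 'risk math' framing."""
    return entry["threat"] + entry["vulnerability"]

# Rank assets so review effort follows measured exposure, not intuition.
ranked = sorted(inventory, key=risk_score, reverse=True)
```

However crude the scoring, the ranking exercise forces the inventory work: you cannot sort what you have not enumerated.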
Sean Juroviesky
Operational Agility Beats Resilience
The word “resilience” implies passive strength, a rigid ability to withstand shocks and return to baseline. ChiBrrCon 2026 revealed the deeper truth that security teams don’t need to bounce back; they need to adapt forward. This is especially true in the exceptionally fast-moving world of AI-driven everything, which almost every speaker touched on.
To deal with rapidly shifting demands and the new dangers that AI presents or amplifies, we must embrace a new goal beyond recovery, which can be summed up in one term: Operational Agility.
Operational agility is about more than just responding to and surviving breaches; it is the ability to evolve through these events in real time. And it is reshaping how we structure our teams, tools, and thinking.
Acting Before Certainty Arrives
Operational environments now move faster than traditional security decision models allow. Data volumes overwhelm human analysis. Incidents unfold across multiple systems simultaneously. In this reality, waiting for full understanding is often indistinguishable from inaction.
Operational agility requires comfort with acting with the best available, partial picture. It means knowing which decisions must be made immediately, which can be refined later, and which are reversible. Teams need to be able to prioritize under pressure.
When incidents demand parallel action across detection, containment, communication, and recovery, hesitation becomes the threat multiplier. Agility emerges when authority is clear, decision rights are understood, and responders are trusted to move before every answer is known.
Reducing Cognitive Load To Preserve Judgment
Security teams are not failing because they lack data; if anything, there is more raw data than anyone knows what to do with. They are failing because context is fragmented. Alerts arrive without narrative. Analysts spend time stitching together meaning instead of making decisions. Every manual correlation step drains attention that should be reserved for judgment.
Teams need a shared understanding of what is happening, why it matters, and what action is required next. Tooling only helps if it reduces mental overhead rather than adding to it. Automation is essential here, not as a headcount strategy, but as a way to protect human cognition.
Any task that requires repeated action without new judgment should already be automated. Humans should be reserved for ambiguity, tradeoffs, and accountability. When teams are buried in repetitive work, agility collapses long before burnout becomes visible.
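The rule of thumb above, automate repetition, reserve humans for ambiguity, can be sketched as a simple routing policy. This is a hypothetical example; the rule names, confidence threshold, and alert fields are all invented for illustration.

```python
# Alerts matching these rules have been repeatedly confirmed benign;
# closing them requires no new judgment.
KNOWN_BENIGN = {"scheduled_scan", "approved_admin_tool"}

def triage(alert):
    """Route an alert to 'auto-close', 'auto-contain', or 'human'."""
    if alert["rule"] in KNOWN_BENIGN:
        return "auto-close"    # repetition without new judgment
    if alert.get("confidence", 0) >= 0.95 and alert.get("playbook"):
        return "auto-contain"  # high confidence plus a stress-tested runbook
    return "human"             # ambiguity, tradeoffs, accountability
```

The value is not the three-line policy itself but the discipline of writing it down: anything that always routes the same way is a candidate for automation, and everything else is where analyst attention belongs.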
Security Happens Where People Interpret Risk
Risk is not perceived uniformly, as individual humans are the ones perceiving and defining it. Different people respond differently under stress. Some act quickly and pull others in. Some slow down to stabilize and verify. Some prioritize accuracy. Others prioritize momentum.
Trust sits at the center of this. Without trust, uncertainty gets hidden. Escalation gets delayed. With trust, imperfect information surfaces early and improves over time. This is also where the limits of AI become clear. Tools can assist thinking. They cannot replace accountability or judgment.
The enduring takeaway from ChiBrrCon was that operational agility is not something you discover during an incident. It is something you build long before one ever starts.
Adaptability, Especially To AI, Is The New Availability
The most useful takeaway from ChiBrrCon 2026 was not a new tool or a new tactic. After all, the dangers that AI brings to enterprise software are not new concepts. Your author was one of many presenters who showed that issues like adversarial input injection, XSS, and broken access controls are patterns we have been facing for decades. AI has simply accelerated the speed at which these issues can be introduced.
Operational agility is needed to adapt to this new rate of software development and delivery. It is something you build long before the next incident ever starts. You build it in runbooks that get stress-tested, and by making sure all alerts bring context instead of noise. You build it in inventories that reflect how your systems actually work, not how you wish they worked. Teams need to trust each other enough to act on the available information, even when it is incomplete.
Chicago did not solve its water problem by asking people to be more careful. It solved it by changing the system. That is the work in front of us now. And the good news, visible in every packed room and every hallway conversation at ChiBrrCon, is that we are not doing it alone.
If you are working on solving the issues around secrets sprawl and NHI governance, you are not alone, either, and we would love to work with you.
*** This is a Security Bloggers Network syndicated blog from GitGuardian Blog – Take Control of Your Secrets Security authored by Dwayne McDaniel. Read the original post at: https://blog.gitguardian.com/chibrrcon-2026/
