TrendAI™ at [un]prompted 2026: From KYC Exploits to Agentic Defense
The sessions from Sean Park and Peter Girnus with Demeng Chen captured the attention of attendees because they painted a picture of the ecosystem: as AI systems rapidly expand, so does the attack surface around them. The work of TrendAI™ is aimed at helping organizations keep up with those changes through real research and practical defensive measures.
Principal Threat Researcher Sean Park took the stage for a session that sounded like something out of a spy novel: “When Passports Execute: Exploiting AI Driven KYC Pipelines.”
Most of us assume that when we upload a photo of our ID for “Know Your Customer” (KYC) verification, the AI is just pulling text from the image into a database. Sean showed that these pipelines are actually execution environments. He demonstrated how a document embedded with hidden “injects”, can trick an AI agent into reading and writing data across different customer records. It’s a worrying look at how data theft can happen without ever having to “bypass” a traditional security control.
In a real-world stack built with FastAPI, Claude Code, and a SQLite MCP backend, his team embedded malicious instructions inside a passport so that the AI agent followed them and leaked other customer records directly into the verification page. They scaled this into 2,600 automated tests across 13 different models to explore high success rate injects. The takeaway here is that if your AI can read documents and call tools, your documents can potentially become executable attack surfaces even when guarded with strict schemas.
Later in the conference, Threat Hunting Senior Manager Peter Girnus and Threat Researcher Demeng Chen presented the latest results from the vulnerability research pipeline of TrendAI™. Their talk, titled FENRIR: AI Hunting for AI Zero Days at Scale, demonstrated what the team has built to uncover weaknesses in the AI and model context protocol (MCP) ecosystem.
The presentation covered the full architecture behind FENRIR, a multi‑stage system that scales from static analysis to human validation. The research is built around a simple but critical idea: AI systems cannot be secured unless we can find their weakest points faster than attackers can.
Using a layered pipeline, FENRIR processes large codebases with a combination of CodeQL, Semgrep, YARA‑X, SpotBugs, and two tiers of LLM reasoning. The system is designed to eliminate more than 90 percent of false positives before a human researcher even sees a result. Once a true positive reaches an analyst, it already comes with an exploit proof, an auto‑generated report, and threat intel artifacts.
This approach has already produced:
- More than 60 published CVEs across AI and MCP components
- Over 100 additional vulnerabilities in pre‑disclosure with ZDI
- More than 3000 findings queued for further review
Sean Park’s demonstration of “executable documents” proves that traditional data can now act as code, turning routine verification processes into potential entry points for attackers. Meanwhile, the FENRIR system highlights that to defend this rapidly expanding territory is by fighting fire with fire, using automated, agentic AI to hunt for flaws at a scale human researchers cannot achieve alone.
TrendAI™ was proud to have participated in [un]prompted 2026 alongside industry leaders like OpenAI, NVIDIA, and Anthropic. By bringing together a diverse community of security professionals, researchers, and policy contributors, the event fostered the kind of honest discussion and practical insights required to secure the AI landscape. This focus on substance over hype ensures that the industry moves beyond basic model monitoring toward a comprehensive security posture that treats every AI pipeline as a high-stakes execution environment.
