GUEST ESSAY: Executives trust AI security even as security teams confront blind spots, new risks


By Daniel Bardenstein
In our recent report, Beyond the Black Box, we found a striking gap: 80% of executives believe their organizations have strong security coverage for AI systems. Only about 40% of AppSec practitioners agree.

Related: AI moves mainstream
That’s not just a perception problem. It’s a visibility problem.
The numbers back that up. Sixty-three percent of organizations report discovering “shadow AI” inside their environments — tools, models, or integrations adopted without formal oversight.
Executives tend to measure security by the presence of programs, policies, and governance structures. Practitioners measure it by what they can actually see, inspect, and test. When it comes to AI systems, those two measures rarely land on the same number.
The reason is straightforward: much of the AI supply chain is still invisible to the tools security teams rely on.
Breaking assumptions
Over the past decade, software security built real mechanisms for understanding dependencies. Package managers, dependency scanners, and software bills of materials (SBOMs) emerged because organizations learned they couldn’t secure what they couldn’t inventory. Modern AppSec programs now assume teams can identify the components their software depends on and track vulnerabilities within them.
AI systems break that assumption.
A typical AI deployment doesn’t just include application code and open source libraries. It may depend on pretrained models, model weights, training datasets, machine learning frameworks, GPU acceleration libraries, and specialized tooling embedded inside development pipelines. Many of those components are inherited through environments, frameworks, or model repositories — not explicitly chosen through dependency management systems.
As a result, they often don’t appear where AppSec tools normally look. That blind spot is widening alongside rapid adoption: nearly 80% of organizations report broad use of commercial AI tools, while 56.7% are training open-weight models on internal datasets.
Leadership may assume existing security coverage extends to AI systems. Security teams know large parts of the stack remain opaque. The confidence gap in our report reflects that difference directly.
AI development also runs on a large amount of implicit trust. Teams routinely rely on widely used machine learning frameworks, model repositories, GPU toolchains, and preconfigured development environments. These components are typically treated as foundational infrastructure — not as software dependencies that need scrutiny. That reliance is growing: about 29% of organizations already report tuning their own models, layering in additional dependencies across training data, frameworks, and compute infrastructure.
Risk buried in the stack
Security history is pretty consistent on this point: infrastructure layers are often where high-impact vulnerabilities surface. And organizations aren’t confident they’ve got a handle on even the compliance basics. Ninety-three percent say they have room for improvement in understanding licensing, IP, and usage obligations tied to AI models and datasets.
In many environments, security teams may not even know these components are present, let alone have visibility into vulnerabilities within them. When issues emerge at that layer, they can affect large portions of the AI pipeline without triggering a single traditional security control.
This is exactly what AppSec practitioners are reacting to when they report lower confidence in AI security coverage.
Executives, meanwhile, are often seeing different signals. Organizations may have launched AI governance initiatives, introduced policies covering AI systems, or incorporated AI risks into broader compliance frameworks. Those efforts reflect real awareness of the challenge.
But governance doesn’t automatically translate into artifact-level visibility. The security of an AI system ultimately depends on the components it relies on, and many organizations are still working out how to inventory and track those components.
Software supply chain security followed a similar path. For years, organizations assumed their software stacks were secure — until Log4j exposed how little visibility existed into underlying dependencies. Only then did practices like SBOM generation and dependency monitoring become standard.
AI ecosystems appear to be at an earlier point in that same arc.
The hunt for solutions
Organizations looking to close the gap should start with a few basic questions. Do we maintain an inventory of the models running in production? Can we identify the frameworks, runtimes, and infrastructure components those models depend on? Do we have a way to track vulnerabilities within those dependencies over time?
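Answering those questions starts with a basic inventory. As a minimal sketch of what that could look like, the hypothetical Python script below walks a directory tree for files that resemble serialized model weights and lists installed ML-related packages with their versions, using only the standard library. The file extensions and package-name hints are illustrative assumptions, not a complete or authoritative list, and a real program would extend this to training data, GPU toolchains, and hosted model endpoints.

```python
# Hypothetical starter sketch: build a minimal inventory of model
# artifacts on disk and ML-related Python dependencies. The extension
# and package-name lists are illustrative assumptions only.
from importlib import metadata
from pathlib import Path

# Common serialization formats for model weights (assumed, not exhaustive).
MODEL_EXTENSIONS = {".pt", ".pth", ".onnx", ".safetensors", ".gguf", ".pb"}

# Substrings that suggest an installed package is ML infrastructure.
ML_PACKAGE_HINTS = ("torch", "tensorflow", "transformers", "onnx", "sklearn")

def inventory_model_files(root: str) -> list[str]:
    """Recursively list files that look like serialized model weights."""
    return sorted(
        str(p) for p in Path(root).rglob("*")
        if p.suffix.lower() in MODEL_EXTENSIONS
    )

def inventory_ml_packages() -> dict[str, str]:
    """Map installed ML-related package names to their versions."""
    found = {}
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        if any(hint in name for hint in ML_PACKAGE_HINTS):
            found[name] = dist.version
    return found

if __name__ == "__main__":
    print("Model artifacts:", inventory_model_files("."))
    print("ML packages:", inventory_ml_packages())
```

Even a rough inventory like this surfaces the components that sit outside traditional dependency management — the weights files and framework installs that, as the report suggests, rarely show up where AppSec tools normally look.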
If the answers aren’t clear, the problem isn’t just AI security coverage. It’s that significant portions of the AI supply chain may still be invisible to the teams responsible for securing them.
The 80/40 split in our report reflects that reality. Executives see coverage. Practitioners see the parts of the AI stack that remain hidden.
Confidence in security programs matters. But confidence without visibility is fragile.
Before organizations can secure AI systems, they first need to understand the software supply chains those systems depend on.
About the essayist: Daniel Bardenstein is CEO and co-founder of Manifest, where he focuses on making software and AI supply chains more transparent and secure. Before Manifest, he served as Chief of Tech Strategy at CISA and led cybersecurity efforts at the Defense Digital Service, including Hack the Pentagon.

March 20th, 2026 | Guest Blog Post | Top Stories

This is a Security Bloggers Network syndicated blog from The Last Watchdog authored by bacohido. Read the original post at: https://www.lastwatchdog.com/guest-essay-executives-trust-ai-security-even-as-security-teams-confront-blind-spots-new-risks/
