What We Do in the Shadows: How CISOs Can Crack Down on Shadow AI
Artificial intelligence has quickly become both a force multiplier and a source of friction for modern enterprises. On one hand, AI tools are helping employees move faster: automating workflows, accelerating development, and unlocking insights from data. On the other hand, they are introducing new risks that many organizations are still struggling to fully understand, let alone control.

For CISOs, this tension is familiar. Any new technology introduced into the enterprise must be vetted, governed and monitored. Sensitive data must be protected, and regulatory obligations must be met. But AI adoption is happening faster than most governance models can keep up with. And as adoption accelerates, so do concerns around compliance. In fact, recent research shows that 72% of organizations are concerned about AI’s impact on compliance, up from 58% just a year prior.

The result is a growing disconnect: while organizations debate policies and frameworks, employees are already using AI tools in their day-to-day work, often without oversight. That gap is where shadow AI takes root, and it is a CISO’s worst nightmare.

The Call Is Coming from Inside the House

Shadow AI isn’t a hypothetical risk; it’s already embedded in enterprise workflows.

Consider a developer troubleshooting an issue in proprietary code. Under pressure to deliver quickly, they paste that code into a public AI assistant to get help. The tool provides a useful response, the task gets completed, and the workflow feels more efficient.

But what happens next is far less visible. That code may now be retained, processed, or learned from by an external system. Depending on the tool and its terms, sensitive intellectual property could be exposed beyond organizational boundaries. What feels like a harmless shortcut becomes a potential data leak.

This is the core challenge: shadow AI often emerges not from negligence, but from productivity. Employees aren’t trying to bypass security; they’re trying to get their jobs done.
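One way to blunt the code-pasting scenario above is a pre-submission filter that redacts obvious secrets before a prompt ever leaves the organization. The sketch below is illustrative only: the patterns and function names are assumptions, not a vetted DLP rule set, and a real control would typically live in a gateway or proxy rather than in client code.

```python
import re

# Illustrative patterns only; a production deployment would use a
# maintained secret-scanning rule set, not four hand-written regexes.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_token": re.compile(r"\b(?:token|api[_-]?key)\s*[:=]\s*\S+",
                            re.IGNORECASE),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders; return the cleaned
    prompt plus the names of the rules that fired (for audit logging)."""
    findings = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(name)
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt, findings

cleaned, hits = redact("debug this: api_key = sk-123, contact dev@example.com")
```

Even a crude filter like this turns an invisible leak into a logged, reviewable event, which is exactly the purpose of an interim control.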
When governance is unclear or absent, they default to the path of least resistance. And today, that path increasingly leads to widely accessible AI tools like ChatGPT, Copilot or Gemini.

The issue here is shadow AI at scale. According to recent data, 36% of organizations still don’t have an AI compliance policy in place. Without proper guardrails or training, it’s common for employees to use AI tools through their own personal accounts. And with compromised credentials responsible for more than half of data breaches in 2025, those using shadow AI are unknowingly leaving the door wide open to risk.

Without clear policies, employees make their own decisions about what’s acceptable. Without visibility, security teams are left guessing where AI is being used and how. This creates a fragmented environment where:

- Sensitive data may be shared with unvetted third-party tools
- Personal accounts are used for work-related AI interactions
- API connections between external tools and internal systems go unmonitored
- Regulatory obligations become harder to track and enforce

In other words, a lack of compliance strategy at the top cascades into inconsistent and risky behavior across the organization.

When Compliance Gaps Become Behavior

Too often, AI governance is treated as something to address later: after use cases are proven, after tools are adopted, after productivity gains are realized. But by that point, shadow AI is already entrenched.

CISOs need to reframe compliance not as a constraint, but as an enabler of safe adoption. A well-defined compliance strategy gives employees clarity. It sets boundaries without blocking innovation. And most importantly, it reduces the likelihood that employees will seek unsanctioned alternatives.

Established frameworks can provide a useful starting point. Standards like ISO 42001 offer guidance for building structured, auditable approaches to AI governance.
But frameworks alone aren’t enough; they need to be operationalized quickly and pragmatically.

In the first 90 days of formalizing an AI governance approach, organizations should focus on a few critical priorities:

- Establish accountability: Define who owns AI governance across security, compliance and business units. Without clear ownership, efforts stall.
- Create visibility: Inventory all AI usage across the organization, not just approved tools but shadow usage as well. Understanding what employees are using (and why) is essential to managing risk.
- Assess and prioritize risk: Not all AI use cases carry the same level of exposure. Identify high-risk scenarios, such as those involving sensitive data, and address them first.
- Conduct an assessment with an audit partner: Analyze the regulatory, reputational and compliance risks associated with each deployment.
- Implement interim controls: Even before policies are finalized, introduce guardrails for high-risk activities to reduce immediate exposure.

That said, there is no one-size-fits-all solution. While 77% of companies plan to pursue an AI certification in the next 12 months, that’s not the only path to compliance. Many are choosing a blended approach, planning to address AI risk with ISO 42001 (60%), self-assessments (50%) and/or AI controls added to other assessments (56%). What matters is not the specific approach, but the presence of a proactive, intentional strategy.

The Power of Policy: Promoting Acceptable Use

Technology alone won’t solve shadow AI. Blocking tools or restricting access may reduce some risk, but it doesn’t address the underlying driver: employees need efficient ways to do their work. If sanctioned options are too limited, too slow or too unclear, employees will find alternatives. That’s why education and enablement are just as important as policy.

CISOs should focus on building a culture where employees understand both the value and the risks of AI.
This starts with clear, practical guidance, not abstract policies buried in documentation. Effective approaches include:

- Defining acceptable use clearly: Employees should know what types of data can and cannot be used with AI tools, and in which contexts.
- Providing real-world training: Use scenarios employees actually encounter, like debugging code or summarizing documents, to illustrate safe vs. unsafe practices.
- Offering approved alternatives: If employees have access to secure, vetted AI tools, they’re far less likely to seek out shadow options.
- Reinforcing accountability: Make it clear that AI usage is part of the organization’s broader security posture, not an exception to it.

When employees understand the “why” behind the rules and have viable ways to work within them, compliance becomes far more sustainable.

Bringing AI Out of the Shadows

Shadow AI is ultimately a symptom of misalignment. It reflects a gap between how organizations think AI should be used and how employees are actually using it. Closing that gap requires more than reactive controls, and CISOs are uniquely positioned to lead this effort as both protector and enabler.

AI isn’t going away. Neither is the pressure to move faster. The CISOs who succeed will demand visibility, establish clear governance and show a willingness to meet employees where they are.
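As a concrete first step toward that visibility, shadow AI usage can often be surfaced from logs the organization already collects. The sketch below assumes a web-proxy export in CSV form with user and domain columns; the domain watchlist and column names are assumptions for illustration, not a definitive inventory method.

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical watchlist; a real inventory would track far more endpoints
# and be refreshed as new AI services appear.
AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "gemini.google.com",
    "copilot.microsoft.com", "claude.ai",
}

def shadow_ai_usage(log_lines):
    """Count requests per (user, domain) for domains on the AI watchlist."""
    usage = Counter()
    for row in csv.DictReader(log_lines):
        domain = row["domain"].lower()
        if domain in AI_DOMAINS:
            usage[(row["user"], domain)] += 1
    return usage

# Tiny in-memory sample standing in for an exported proxy log.
sample = StringIO(
    "user,domain\n"
    "alice,chatgpt.com\n"
    "alice,intranet.example.com\n"
    "bob,gemini.google.com\n"
    "alice,chatgpt.com\n"
)
report = shadow_ai_usage(sample)
```

Run against real proxy or DNS exports, a report like this feeds the usage inventory called for in the 90-day priorities, and shows where approved alternatives and training are most urgently needed.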
