Updated on June 16, 2025
Research: Shadow AI is a Blind Spot in Enterprise Security, Including Among Security Teams
At RSA Conference 2025 and InfoSecurity Europe 2025 we surveyed over 500 cybersecurity professionals to assess emerging threats in enterprise environments. The findings reveal a growing and often overlooked risk: security professionals using generative AI tools without approval, a trend known as Shadow AI.
Key Takeaways
  • Shadow AI is widespread within organizations, with usage going largely unmonitored.
  • AI adoption inside security teams is widespread.
  • AI use is accelerating faster than security can control it.
  • Sensitive data is being exposed, and there is no clear ownership of AI risk.

Even security teams are going rogue with AI.

In a survey of over 500 cybersecurity professionals at RSA Conference 2025 and InfoSecurity Europe 2025, Mindgard uncovered a striking trend: security staff are using generative AI tools without approval. This rise in Shadow AI is creating a serious blind spot inside the very teams meant to protect the enterprise.

Key Findings: The Security Team Is Breaking Its Own Rules

  • Shadow AI is widespread within organizations, with usage going largely unmonitored. Shadow AI refers to the use of generative AI tools and applications such as ChatGPT or GitHub Copilot within organizations without formal approval or supervision. Shadow AI mirrors the earlier phenomenon of Shadow IT, but with higher stakes. AI tools often process sensitive code, proprietary business data, and regulated customer information. According to the survey, 56% of security professionals acknowledged that employees in their organization use AI without formal approval, and another 22% suspect it is happening.
  • AI adoption inside security teams is widespread. 87% of cybersecurity practitioners are incorporating AI into their own daily workflows. Nearly one in four security pros admit to using personal ChatGPT accounts or browser extensions outside formal approval, logging, or compliance. A substantial 76% estimate that their cybersecurity teammates are leveraging AI tools, such as ChatGPT or GitHub Copilot. These are not unaware end users; they're the people who write and enforce corporate security policy. Shadow AI isn’t just happening in marketing or R&D – it’s happening inside the SOC.
  • AI use is accelerating faster than security can control it. Nearly 90% of security practitioners have used AI tools, but only 32% of organizations have formal controls in place. That means most AI use in security is happening without oversight. The very teams tasked with defending enterprise systems are experimenting with powerful external tools, without governance, enforcement, or accountability.
  • Sensitive data is being exposed, and there is no clear ownership of AI risk. Security professionals report entering internal documentation, customer records, and other sensitive data into AI tools. 12% admit they don’t know what’s being input. When asked who owns AI risk in their organization, 39% said no one. Another 38% pointed to security teams, with fewer respondents pointing to data science (17%), the C-suite (16%), or compliance (15%), highlighting a failure of cross-functional governance. AI is being used, informally and at scale, with no one truly in charge.

Why does this matter? 

Peter Garraghan, CEO and Co-founder at Mindgard:

"AI is already embedded in enterprise workflows, including within cybersecurity, and it's accelerating faster than most organizations can govern it. Shadow AI isn’t a future risk. It’s happening now, often without leadership awareness, policy controls, or accountability. Gaining visibility is a critical first step, but it’s not enough. Organizations need clear ownership, enforced policies, and coordinated governance across security, legal, compliance, and executive teams. Establishing a dedicated AI governance function is not a nice-to-have. It is a requirement for safely scaling AI and realizing its full potential."

Detailed Survey Results

Security Teams Are the First to Adopt AI, and the First to Bypass Policy

AI is already deeply embedded within security teams; 87% of cybersecurity practitioners report using AI tools in their daily work. A significant 76% of respondents believe that their cybersecurity peers are using AI tools (such as ChatGPT and GitHub Copilot). Meanwhile, at the organizational level, only 57% of companies have integrated AI into operations in a broad way or identified themselves as AI-native.

Among respondents who suspected AI usage within their security teams, nearly half (49%) can be classified as extensive AI users, applying AI tools across multiple distinct tasks, ranging from writing detection rules to generating phishing simulations. This suggests a growing operational reliance on AI within mature security workflows. An additional 21% fall into the category of early adopters, experimenting with AI for at least one work-related task, indicating a broader trend of cautious but active exploration.

By contrast, only 5% identified as non-users, explicitly stating they are not currently using AI for any security-related functions. Notably, 24% of respondents reported informal AI usage, such as the use of browser extensions or personal ChatGPT accounts for work purposes, even when their teams have not formally adopted AI. This highlights a potential governance gap, where individual experimentation may outpace organizational policy, raising important considerations for security oversight and compliance.

The most common individual use cases among cybersecurity professionals are content-focused: 57% use AI for research or summarizing topics, and 45% for writing policies or documentation. However, its role in technical tasks is also significant: 40% use AI to write or debug code, and 33% for writing or testing detection rules. These results highlight AI’s dual function: enhancing efficiency in everyday content work while increasingly supporting core security operations.

The trend toward informal use is reinforced by further insights. Beyond those using personal accounts, 17% reported using AI out of curiosity or for experimentation, while 16% apply it to routine tasks like handling tickets or writing status updates. Together, these groups represent roughly one third of users—clear evidence that Shadow AI is becoming embedded in day-to-day security operations well before formal governance measures are in place.

AI Adoption Is Advancing, But Maturity and Visibility Still Vary

Organizations display significant variation in their AI adoption. About 34% report broad AI usage across departments, and another 23% describe their organizations as AI-native, with generative or predictive models embedded in core products or services. An additional 22% restrict AI to specific functions, such as research, marketing, or security, where structured pilots are in progress. Around 6% say AI experimentation remains informal, led by individuals using public tools without formal approval. While these grassroots efforts may drive innovation, they often operate outside of established oversight. Notably, 10% of respondents are unsure how AI is used in their organization, and 5% report no AI activity at all.

These findings suggest an ecosystem in flux. While many organizations are shifting from pilot projects toward operational or AI-native maturity, others remain in the early stages with fragmented or informal approaches. The next challenge is to convert successful practices into structured governance that encourages innovation without compromising security, compliance, or strategic alignment.

AI Use Is Accelerating Faster Than Security Can Control

56% of security practitioners acknowledged that AI is used by employees in their organization without approval or oversight, while an additional 22% believe this is likely happening.

12% of practitioners admitted they had no visibility into what is being entered into AI systems within their organization. Over half of respondents (57%) believe the content is likely related to general research. Nearly as many (49%) reported usage for coding or scripting tasks. However, 30% acknowledged entering internal documents or emails, and 29% said customer or other sensitive data had been input into AI tools. These behaviors raise clear concerns about data governance. Around 20% of respondents confirmed using AI with regulated or sensitive data, underscoring the urgent need for defined usage policies and proactive oversight.

Despite these risks, only 32% of organizations actively monitor AI usage. An additional 24% rely on informal or manual methods, including spot checks, internal surveys, or manager-led reviews. While these provide a degree of visibility, they often fail to detect unsanctioned or after-hours usage.

In contrast, 14% of respondents admitted there is no monitoring in place at all, leaving their organizations vulnerable to data leakage and compliance violations. Another 11% indicated plans to introduce monitoring, suggesting awareness of the risk but delayed execution. A further 11% reported having no plans to monitor AI use, possibly reflecting misplaced confidence in existing controls or a low prioritization of AI risk. Finally, 7% were unsure whether monitoring efforts existed, highlighting gaps in communication and governance.

Overall, while nearly 60% of organizations report having some level of AI oversight, a substantial portion either lack monitoring or remain undecided. As AI becomes further integrated into business operations, comprehensive audit mechanisms and cross-functional awareness will be critical to prevent security breaches and regulatory failures.

Lack of AI Risk Ownership Within Organizations

The survey reveals a notable absence of agreement around who is responsible for AI risk. The largest group, 39%, stated that no specific owner has been assigned. This suggests that in many organizations, AI governance remains undefined, with no individual or team accountable for monitoring model behavior, setting usage standards, or anticipating emerging threats.

Another 38% said responsibility falls to the security team. While these teams play a central role in protecting systems and data, assigning them full ownership of AI risk can stretch their remit to include areas like compliance, legal exposure, data governance, and vendor assessment—domains that often require cross-functional oversight.

Smaller segments point to data science or machine learning teams (17%) or senior executives (16%) as the appropriate leads. Data science teams bring technical expertise on model behavior and limitations, while executive leadership can ensure strategic alignment, secure funding, and enforce policy across departments. Only 15% believe the legal or compliance function should lead AI risk efforts, which may reflect the perception that AI remains a technical or innovation concern, despite its clear implications for data protection, intellectual property, and regulatory compliance.

These findings underline the pressing need to establish defined roles and responsibilities in AI governance. Without clear ownership or a coordinated framework, organizations will face challenges in enforcing policy, maintaining accountability, and responding effectively to risk. A dedicated AI governance function, incorporating input from security, data, legal, and executive stakeholders, will be essential to managing evolving risks while enabling responsible AI adoption.

Methodology and Demographics

The survey was conducted in person at the RSA Conference in San Francisco between May 6 and 9, 2025, and at InfoSecurity Europe in London between June 3 and 5, 2025. The Mindgard team collected over 500 responses from a broad cross-section of the cybersecurity community, spanning a wide range of roles, experience levels, and organizational types.

Respondents represent a diverse spectrum of experience. Over one third (36%) have worked in cybersecurity for fewer than three years, reflecting a digitally native group with a strong interest in AI tools. At the other end of the spectrum, 24% bring 16 or more years of experience, contributing seasoned insights. The remainder includes 18% with 3 to 7 years of experience and 22% with 8 to 15 years, often representing those responsible for leading technology adoption within their teams.

A majority of respondents (61%) hold management positions, ensuring the findings reflect input from both strategic decision makers and operational practitioners. Company size also varied: more than one third (37%) came from large enterprises with more than 1,000 employees, 31% from mid-sized firms with 100 to 999 employees, and 32% from small businesses with fewer than 100 staff.

About Mindgard

Mindgard is the leader in Artificial Intelligence Security Testing. Founded at Lancaster University and backed by cutting-edge research, Mindgard enables organizations to secure their AI systems from new threats that traditional application security tools cannot address. Its industry-first, award-winning Offensive Security and AI Red Teaming solution delivers automated and continuous security testing across the AI lifecycle, making AI security actionable and auditable. For more information, visit mindgard.ai.