Fergal Glynn
Even security teams are going rogue with AI.
In a survey of over 500 cybersecurity professionals at RSA Conference 2025 and InfoSecurity Europe 2025, Mindgard uncovered a striking trend: security staff are using generative AI tools without approval. This rise in Shadow AI is creating a serious blind spot inside the very teams meant to protect the enterprise.
Peter Garraghan, CEO and Co-founder at Mindgard:
"AI is already embedded in enterprise workflows, including within cybersecurity, and it's accelerating faster than most organizations can govern it. Shadow AI isn’t a future risk. It’s happening now, often without leadership awareness, policy controls, or accountability. Gaining visibility is a critical first step, but it’s not enough. Organizations need clear ownership, enforced policies, and coordinated governance across security, legal, compliance, and executive teams. Establishing a dedicated AI governance function is not a nice-to-have. It is a requirement for safely scaling AI and realizing its full potential."
AI is already deeply embedded within security teams; 87% of cybersecurity practitioners report using AI tools in their daily work. A significant 76% of respondents believe that their cybersecurity peers are using AI tools (such as ChatGPT and GitHub Copilot). Meanwhile, at the organizational level, only 57% of companies have broadly integrated AI into operations or identify themselves as AI-native.
Among respondents who suspected AI usage within their security teams, nearly half (49%) can be classified as extensive AI users, applying AI tools across multiple distinct tasks, ranging from writing detection rules to generating phishing simulations. This suggests a growing operational reliance on AI within mature security workflows. An additional 21% fall into the category of early adopters, experimenting with AI for at least one work-related task, indicating a broader trend of cautious but active exploration.
By contrast, only 5% identified as non-users, explicitly stating they are not currently using AI for any security-related functions. Notably, 24% of respondents reported informal AI usage, such as the use of browser extensions or personal ChatGPT accounts for work purposes, even when their teams have not formally adopted AI. This highlights a potential governance gap, where individual experimentation may outpace organizational policy, raising important considerations for security oversight and compliance.
The most common individual use cases among cybersecurity professionals are content-focused: 57% use AI for research or summarizing topics, and 45% for writing policies or documentation. However, its role in technical tasks is also significant: 40% use AI to write or debug code, and 33% for writing or testing detection rules. These results highlight AI’s dual function: enhancing efficiency in everyday content work while increasingly supporting core security operations.
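To make the detection-rule use case concrete, the sketch below shows the kind of simple rule a practitioner might draft or test with an AI assistant and then review by hand: it flags a source IP that generates many failed logins within a short window. It is a hypothetical illustration only; the log field names, threshold, and window are assumptions rather than anything reported in the survey.

```python
# Hypothetical detection rule of the kind an analyst might draft or refine
# with an AI assistant: flag a source IP producing many failed logins in a
# short window. Field names, threshold, and window are illustrative
# assumptions; events are assumed to arrive in chronological order.
from collections import defaultdict
from datetime import datetime, timedelta

FAILED_LOGIN_THRESHOLD = 10     # alert after 10 failures...
WINDOW = timedelta(minutes=5)   # ...within any 5-minute window


def detect_bruteforce(events):
    """Yield (source_ip, failure_count) for IPs exceeding the threshold.

    `events` is an iterable of dicts such as:
    {"timestamp": "2025-05-06T10:15:00", "source_ip": "10.0.0.5", "outcome": "failure"}
    """
    failures = defaultdict(list)
    for event in events:
        if event.get("outcome") != "failure":
            continue
        ts = datetime.fromisoformat(event["timestamp"])
        window = failures[event["source_ip"]]
        window.append(ts)
        # Keep only failures that still fall inside the sliding window.
        window[:] = [t for t in window if ts - t <= WINDOW]
        if len(window) >= FAILED_LOGIN_THRESHOLD:
            yield event["source_ip"], len(window)
```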
The trend toward informal use is reinforced by further insights. Beyond those using personal accounts, 17% reported using AI out of curiosity or for experimentation, while 16% apply it to routine tasks like handling tickets or writing status updates. Together, these groups represent roughly one third of users—clear evidence that Shadow AI is becoming embedded in day-to-day security operations well before formal governance measures are in place.
Organizations display significant variation in their AI adoption. About 34% report broad AI usage across departments, while another 23% describe their organizations as AI-native, with generative or predictive models embedded in core products or services. An additional 22% restrict AI to specific functions, such as research, marketing, or security, where structured pilots are in progress. Around 6% say AI experimentation remains informal, led by individuals using public tools without formal approval. While these grassroots efforts may drive innovation, they often operate outside of established oversight. Notably, 10% of respondents are unsure how AI is used in their organization, and 5% report no AI activity at all.
These findings suggest an ecosystem in flux. While many organizations are shifting from pilot projects toward operational or AI-native maturity, others remain in the early stages with fragmented or informal approaches. The next challenge is to convert successful practices into structured governance that encourages innovation without compromising security, compliance, or strategic alignment.
56% of security practitioners acknowledged that AI is used by employees in their organization without approval or oversight, while an additional 22% believe this is likely happening.
12% of practitioners admitted they had no visibility into what is being entered into AI systems within their organization. Over half of respondents (57%) believe the content is likely related to general research. Nearly as many (49%) reported usage for coding or scripting tasks. However, 30% acknowledged entering internal documents or emails, and 29% said customer or other sensitive data had been input into AI tools. These behaviors raise clear concerns about data governance. Around 20% of respondents confirmed using AI with regulated or sensitive data, underscoring the urgent need for defined usage policies and proactive oversight.
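These behaviors also hint at what proactive oversight could look like in practice. The sketch below is a minimal, hypothetical pre-submission screen that checks a prompt for obviously sensitive content before it is sent to an external AI tool; the regular expressions are placeholder assumptions, not a vetted rule set, and a real deployment would rely on a dedicated data loss prevention engine and organization-specific policies.

```python
# Minimal sketch of a pre-submission screen for AI prompts: block text that
# appears to contain regulated or customer data before it leaves the
# organization. The patterns below are illustrative assumptions only.
import re

SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"\b(confidential|internal only)\b", re.IGNORECASE),
}


def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]


if __name__ == "__main__":
    findings = screen_prompt("Summarize this CONFIDENTIAL customer email: jane@example.com ...")
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}")
```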
Despite these risks, only 32% of organizations actively monitor AI usage. An additional 24% rely on informal or manual methods, including spot checks, internal surveys, or manager-led reviews. While these provide a degree of visibility, they often fail to detect unsanctioned or after-hours usage.
In contrast, 14% of respondents admitted there is no monitoring in place at all, leaving their organizations vulnerable to data leakage and compliance violations. Another 11% indicated plans to introduce monitoring, suggesting awareness of the risk but delayed execution. A further 11% reported having no plans to monitor AI use, possibly reflecting misplaced confidence in existing controls or a low prioritization of AI risk. Finally, 7% were unsure whether monitoring efforts existed, highlighting gaps in communication and governance.
Overall, while nearly 60% of organizations report having some level of AI oversight, a substantial portion either lack monitoring or remain undecided. As AI becomes further integrated into business operations, comprehensive audit mechanisms and cross-functional awareness will be critical to prevent security breaches and regulatory failures.
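For organizations starting from zero, even lightweight telemetry can provide a first view of unsanctioned AI use. The sketch below assumes web proxy logs are available as JSON lines with "user" and "host" fields and tallies requests to a short, illustrative list of generative AI domains; both the log format and the domain list are assumptions made for the example, not findings from the survey.

```python
# Minimal sketch of shadow-AI visibility from existing telemetry: count
# requests to well-known generative AI domains in web proxy logs.
# Assumes JSON-lines input with "user" and "host" fields; the log format
# and domain list are illustrative assumptions.
import json
import sys
from collections import Counter

GENAI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "copilot.microsoft.com",
    "claude.ai",
    "gemini.google.com",
}


def count_genai_usage(log_lines):
    """Return a Counter of (user, domain) pairs seen in the proxy log."""
    usage = Counter()
    for line in log_lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed lines rather than failing the report
        host = record.get("host", "").lower()
        if host in GENAI_DOMAINS:
            usage[(record.get("user", "unknown"), host)] += 1
    return usage


if __name__ == "__main__":
    # Example: python genai_usage.py < proxy.log
    for (user, domain), hits in count_genai_usage(sys.stdin).most_common(20):
        print(f"{user:20s} {domain:25s} {hits:6d}")
```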
The survey reveals a notable absence of agreement around who is responsible for AI risk. The largest group, 39%, stated that no specific owner has been assigned. This suggests that in many organizations, AI governance remains undefined, with no individual or team accountable for monitoring model behavior, setting usage standards, or anticipating emerging threats.
Another 38% said responsibility falls to the security team. While these teams play a central role in protecting systems and data, assigning them full ownership of AI risk can stretch their remit to include areas like compliance, legal exposure, data governance, and vendor assessment—domains that often require cross-functional oversight.
Smaller segments point to data science or machine learning teams (17%) or senior executives (16%) as the appropriate leads. Data science teams bring technical expertise on model behavior and limitations, while executive leadership can ensure strategic alignment, secure funding, and enforce policy across departments. Only 15% believe the legal or compliance function should lead AI risk efforts, which may reflect the perception that AI remains a technical or innovation concern, despite its clear implications for data protection, intellectual property, and regulatory compliance.
These findings underline the pressing need to establish defined roles and responsibilities in AI governance. Without clear ownership or a coordinated framework, organizations will face challenges in enforcing policy, maintaining accountability, and responding effectively to risk. A dedicated AI governance function, incorporating input from security, data, legal, and executive stakeholders, will be essential to managing evolving risks while enabling responsible AI adoption.
The survey was conducted in person at the RSA Conference in San Francisco between May 6 and 9, 2025, and at InfoSecurity Europe in London between June 3 and 5, 2025. The Mindgard team collected over 500 responses from a broad cross-section of the cybersecurity community, spanning a wide range of roles, experience levels, and organizational types.
Respondents represent a diverse spectrum of experience. Over one third (36%) have worked in cybersecurity for fewer than three years, reflecting a digitally native group with a strong interest in AI tools. At the other end of the spectrum, 24% bring 16 or more years of experience, contributing seasoned insights. The remainder includes 18% with 3 to 7 years of experience and 22% with 8 to 15 years, often representing those responsible for leading technology adoption within their teams.
A majority of respondents (61%) hold management positions, ensuring the findings reflect input from both strategic decision makers and operational practitioners. Company size also varied. Over one third (37%) came from large enterprises with more than 1,000 employees, 31% from mid-sized firms with 100 to 999 employees, and 32% from small businesses with fewer than 100 staff.
Mindgard is the leader in Artificial Intelligence Security Testing. Founded at Lancaster University and backed by cutting-edge research, Mindgard enables organizations to secure their AI systems from new threats that traditional application security tools cannot address. Its industry-first, award-winning Offensive Security and AI Red Teaming solution delivers automated and continuous security testing across the AI lifecycle, making AI security actionable and auditable. For more information, visit mindgard.ai.