London and Boston (Jun 17, 2025) – In a survey of over 500 cybersecurity professionals at RSA Conference and InfoSecurity Europe 2025, Mindgard uncovered a striking trend: security staff are using AI without approval. This rise in Shadow AI is creating a serious blind spot inside the very teams meant to protect the enterprise.
Shadow AI refers to the use of generative AI tools such as ChatGPT or GitHub Copilot without formal oversight. Much like Shadow IT, this informal adoption bypasses security controls. But with AI, the risks are more acute; these tools can ingest sensitive code, internal documentation, and regulated customer data, significantly increasing the risk of data leakage, privacy violations, and compliance breaches.
Security teams are part of the problem
86% of practitioners report using AI, and 24% admit to doing so via personal accounts or unapproved browser extensions. Meanwhile, 76% of respondents suspect that their cybersecurity teammates are using AI tools in their workflows to write detection rules, generate training materials, or review code.
The risk is compounded by the type of data being entered into AI systems. Around 30% of security professionals said internal documentation and emails were being fed into AI tools within their organizations, and a similar number acknowledged the use of customer or confidential business data. One in five admitted to entering sensitive information, while 12% said they didn’t know what data was being submitted at all.
Oversight is inconsistent—or missing entirely
Monitoring and oversight lag far behind adoption. Only 32% of organizations have systems in place to track AI use. Another 24% rely on manual processes like surveys or manager reviews, which often miss unauthorized use. Alarmingly, 14% of respondents say there is no monitoring at all, leaving their organizations exposed to silent and unmitigated risk.
Peter Garraghan, CEO and Co-founder of Mindgard, said: "AI is already embedded in enterprise workflows, including within cybersecurity, and it's accelerating faster than most organizations can govern it. Shadow AI isn’t a future risk. It’s happening now, often without leadership awareness, policy controls, or accountability. Gaining visibility is a critical first step, but it’s not enough. Organizations need clear ownership, enforced policies, and coordinated governance across security, legal, compliance, and executive teams. Establishing a dedicated AI governance function is not a nice-to-have. It is a requirement for safely scaling AI and realizing its full potential."
Accountability is fragmented
The survey also reveals widespread confusion over who is responsible for managing AI risk. 39% of respondents said their organization has no designated owner. Another 38% pointed to the security team, while smaller shares identified data science (17%), executive leadership (16%), and legal or compliance (15%). This fragmentation reinforces the urgent need for cross-functional AI governance and clearly assigned responsibility.
About the survey
The research was conducted by Mindgard at RSA Conference 2025, held in San Francisco from May 6–9, 2025, and at InfoSecurity Europe 2025, held in London from June 3–5, 2025. It includes over 500 responses from a broad cross-section of cybersecurity professionals, ranging from early-career practitioners to senior leaders, with over 60% in management roles. Participants came from organizations of all sizes, including enterprises, small businesses, MSSPs, and government bodies.
Read the full survey here.