A range of resources including research papers, webinars, and company news focused on AI Security.
MITRE ATLAS™ Adviser in Mindgard that helps standardise AI red teaming reporting.
A technical exploration of modern AI red teaming, examining how probabilistic behavior, classic vulnerabilities, and psychometric steering combine to create real-world AI security risk.
Mindgard’s GitHub Action example repository shows how to integrate automated AI security testing into CI/CD pipelines so every model or code change is validated against the latest Mindgard capabilities.
S&P Global Coverage Initiation: Mindgard’s continuous AI red teaming looks to secure models and applications
PINCH is an automated framework that runs large-scale extraction attacks across deep learning architectures to reveal how and when model stealing actually succeeds.
Model Leeching shows how attackers can distill ChatGPT-class task knowledge into smaller models for about fifty dollars, then use them to tune follow on attacks.
This study shows how simple character transformations and algorithmic evasion attacks can silently bypass six popular LLM guardrails, sometimes reaching one hundred percent evasion.
This work shows how applying compiler driven tensor optimizations can cut side-channel model reconstruction success by up to forty-three percent without redesigning architectures.
AI guardrails are often used as the first line of defense within AI systems, however how effective are they in practice against actual attackers?
In this article we’ll walk through hunting for AI application vulnerabilities. We’ll use Mindgard to find application vulnerabilities in a deliberately-vulnerable LLM lab application made available by PortSwigger.
The LLM and Generative AI Security Solutions Landscape is an industry report developed by OWASP that maps out key vendors and solutions in the AI security space. This landscape provides a comprehensive view of tools and technologies that help organizations safeguard their AI-powered applications.
At RSA Conference 2025 and InfoSecurity Europe 2025 we surveyed over 500 cybersecurity professionals to assess emerging threats in enterprise environments. The findings reveal a growing and often overlooked risk: security professionals using generative AI tools without approval, a trend known as Shadow AI.
In this talk, Peter Garraghan demonstrates how adversaries are already exploiting AI systems and why current security practices are often ill-equipped to stop them.
In this webinar, Dr. Peter Garraghan takes the audience on a deep dive into the underbelly of AI vulnerabilities, exposing the gaps within traditional AI security approaches and demonstrating why application-level AI security must be a priority.
Gartner's AI Trust, Risk, and Security Management (AI TRiSM) framework provides a structured approach to managing AI risks while maintaining transparency and accountability.
With this update, Mindgard’s platform and CLI have been updated to support image models.
PINCH is an efficient and automated extraction attack framework.
Mindgard Recognized as UK's Most Innovative Cyber SME 2024 at Infosecurity Europe
Report: Cyber Security for AI Recommendations
Model Leeching: An Extraction Attack Targeting LLMs
Enhancing DL Model Attack Robustness via Tensor Optimization