Fergal Glynn

I’m incredibly excited to announce that Jim Nightingale has joined the Mindgard team, working on Red Teaming and Adversarial Testing.
Jim brings a unique blend of academic experience, deep curiosity, and a passion for understanding how advanced AI systems can be tested, poked, and prodded to uncover the weaknesses that matter most.
Some of you will already be familiar with Jim’s work from his recent research posts:
These pieces are great illustrations of Jim’s mindset: deeply analytical, relentless in uncovering real security risk, and always laser-focused on practical impact. Jim doesn’t just ask “can we do this?”; he asks “what does this mean for the safety, reliability, and real-world deployment of AI?” That perspective is exactly what we need as we push the frontier of AI risk discovery and security testing.
Jim isn’t only active here at Mindgard; you can also follow his insights and research explorations at Jim the AI Whisperer on Medium, where he regularly writes about AI behavior, prompt engineering, attack vectors, and the evolving landscape of model security.
Please join me in welcoming Jim to the team.