Updated on March 10, 2025
Mindgard Wins Best Cybersecurity Startup and Best AI Security Solution at the 2025 Cybersecurity Excellence Awards
Mindgard, a pioneer in AI security testing, has been recognized as the winner of the Best Cybersecurity Startup and Best AI Security Solution at the 2025 Cybersecurity Excellence Awards.
Key Takeaways

Mindgard, a pioneer in AI security testing, has been recognized as the winner in two categories at the 2025 Cybersecurity Excellence Awards:

Best Cybersecurity Startup
Best AI Security Solution

This recognition underscores Mindgard’s leadership in AI security, reinforcing its mission to help organizations secure their full AI applications against threats that traditional security tools cannot address.

“We congratulate Mindgard on these outstanding achievements in the Best Cybersecurity Startup and Best AI Security Solution categories of the 2025 Cybersecurity Excellence Awards,” said Holger Schulze, founder of Cybersecurity Insiders and organizer of the awards. “As we celebrate 10 years of recognizing excellence in cybersecurity, your innovation, commitment, and leadership set a powerful example for the entire industry.”

These wins come on the heels of another major recognition: Mindgard CEO and co-founder, Dr. Peter Garraghan, was also named ‘Cybersecurity Innovator of the Year’, further validating the company’s impact on the future of AI security.

Solving AI Security’s Biggest Challenges

As enterprises race to integrate AI into their operations, security has become the missing piece. According to the Gartner AI TRiSM report, 41% of organizations deploying AI have experienced security breaches, yet only 10% of internal auditors have visibility into AI risks.

Mindgard is tackling this problem head-on with the first and only Application Security for AI solution that addresses the full AI application: the model, the RAG pipeline, and every component that goes into the application. The Mindgard solution identifies and remediates AI-specific vulnerabilities at runtime, something traditional application security cannot do.

What makes Mindgard’s AI security solution unique?

🔹 Continuous AI security testing: Finds and remediates vulnerabilities that traditional static code analysis cannot detect.
🔹 Real-world AI attack simulation: Threat intelligence library with thousands of AI-specific attack scenarios.
🔹 Seamless integration: Works within existing CI/CD pipelines, requiring only an inference or API endpoint.
🔹 Validates AI security controls: Provides real-time security validation for guardrails, WAFs, and enterprise AI governance frameworks.

This breakthrough technology was recently recognized in the OWASP LLM and Generative AI Security Solutions Landscape Guide 2025, further solidifying its role as a must-have AI security solution.

From Research to Real-World Security Impact

Founded as a spin-off from Lancaster University, Mindgard is built on over a decade of rigorous AI security research. Led by Dr. Peter Garraghan, one of the world’s foremost AI security experts, the company’s deep-tech foundation and PhD-led R&D team give it an edge in developing innovative, science-backed security solutions.

In December 2024, Mindgard raised an $8M funding round, with support from .406 Ventures, Atlantic Bridge, Willowtree Investments, IQ Capital, and Lakestar. This funding enables Mindgard to expand its technology, scale operations, and continue pushing the boundaries of AI security innovation.

Shaping the Future of AI Security

Winning Best Cybersecurity Startup and Best AI Security Solution at the Cybersecurity Excellence Awards reflects Mindgard’s commitment to securing AI’s future. As AI adoption accelerates, Mindgard is setting new industry standards for proactive AI red teaming and AI security testing—helping enterprises, security teams, and developers stay ahead of evolving threats.

About Mindgard

Mindgard is the leader in Artificial Intelligence Security Testing. Founded at Lancaster University and backed by cutting-edge research, Mindgard enables organizations to secure their full AI applications against new threats that traditional application security tools cannot address.