Our mission is to secure the world's AI

Organizations are rapidly adopting AI technologies, embedding them into production environments without full visibility into how their probabilistic and opaque behaviors introduce exploitable risk. Mindgard addresses this challenge by providing AI security solutions that help enterprises secure AI models, agents, and applications across the AI lifecycle. Spun out of more than a decade of AI security research at Lancaster University and headquartered in Boston and London, Mindgard enables organizations to identify, assess, and mitigate real-world AI threats.

The Mindgard Philosophy

Mindgard’s philosophy is grounded in offensive security. Effective defenses and security controls can only be built by emulating how real attackers scope, plan, and exploit targets. Mindgard builds software that empowers organizations to understand what attackers can learn about their AI, assess how those systems can be exploited, and ultimately prevent attackers from breaching them.

Achieving these goals at scale is non-trivial. Mindgard has assembled an elite team of AI and offensive security experts whose research is embedded directly into the platform, allowing security teams to apply specialized AI security technology and expertise without needing to build it in-house.

Join others Red Teaming their AI

Our Key Milestones

May 2022

Mindgard was founded on pioneering research by Dr. Peter Garraghan at Lancaster University, which showed traditional AppSec could not address AI-specific risks.

Dec 2024

Seed round led by top security investors, validating demand for an offensive-security approach to AI and the thesis that effective defenses must emulate real attacker behavior.

Sept 2025

Expanded leadership with key hires: CEO James Brear, Head of Research Aaron Portnoy, and Offensive Security Lead Rich Smith, accelerating the research-led foundation.

Jan 2026

Secured Fortune 500 design partners, validating enterprise demand for attacker-aligned AI security.

Our Team

We’ve assembled the strongest AI security team in the world, with deep roots in cybersecurity AI research and behavioral analysis.

James Brear

Chief Executive Officer

Dr. Peter Garraghan

Chief Science Officer, Founder

Aaron Portnoy

Head of Research & Innovation

Rich Smith

Offensive Security Lead

Fergal Glynn

Chief Marketing Officer

Jonathan Canizales

Operations Manager

Nicole Pellicena

Product Lead

Imran Bohoran

Engineering Lead

William Hackett

Founding ML Engineer

Stefan Trawicki

Founding ML Engineer

Lewis Birch

Founding ML Engineer

Ayomide Apantaku

Software Engineer

Alex Dutton

Software Engineer

Andrew Cook

Software Engineer

Janamejay Poddar

Designer

Piotr Ryciak

AI Red Teamer

Amanda Worker

AI Data Scientist

Robert Cook

Software Engineer

Rob Heath

Software Engineer

Join Us

Mindgard's values guide our actions and decisions. These principles form the foundation of our company's culture, shaping how we interact within our teams and with our clients. They inspire us to improve continuously and help us navigate the dynamic landscape of the AI security industry.

Innovative Environment
Join a team that fosters innovation and creativity, providing an environment where your ideas are valued.
Professional Growth
Experience continuous professional growth with access to learning resources, skill-building workshops.
Collaborative Culture
Be part of a collaborative culture that values teamwork and open communication.
Flexible Work Environment
Enjoy a flexible work environment that respects work-life balance. We understand the importance of flexibility.
Cutting-edge Technology
Work with the latest and most advanced technologies in the coding and development space as a member of our team.
Impactful Projects
Contribute to projects that make a real impact. Our team takes on exciting challenges that push the boundaries.

Mindgard in the News

scawardseurope.com / June 2025
Mindgard wins Best AI Solution and Best New Company at the SC Awards Europe 2025!
Read more
Safetydetectives.com / Feb 2025
"The best AI security solutions will balance automation with oversight, assessment through red teaming, and strengthening defenses without introducing new vulnerabilities."
Read the full article on safetydetectives.com
Techradar.com / Jan 2025
"Securing AI demands trust, socio-technical integration and transparency"
Read the full article on Techradar.com
TechCrunch.com / Dec 2024
“Mindgard raises $8M to safeguard AI with industry-first AI security solution”
Read the full article on TechCrunch.com
TNW.com Podcast / May 2024
"We discussed the questions of security of generative AI, potential attacks on it, and what businesses can do today to be safe."
Listen to the full episode on tnw.com
Businessage.com / May 2024
"Even the most advanced AI foundation models are not immune to vulnerabilities. In 2023, ChatGPT itself experienced a significant data breach caused by a bug in an open-source library."
Read the full article on businessage.com
Finance.Yahoo.com / April 2024
"AI is not magic. It's still software, data and hardware. Therefore, all the cybersecurity threats that you can envision also apply to AI."
Read the full article on finance.yahoo.com
Verdict.co.uk / April 2024
"There are cybersecurity attacks with AI whereby it can leak data, the model can actually give it to me if I just ask it very politely to do so."
Read the full article on verdict.co.uk
Sifted.eu / March 2024
"Mindgard is one of 11 AI startups to watch, according to investors."
Read the full article on sifted.eu
Maddyness.com / March 2024
"You don’t need to throw out your existing cyber security processes, playbooks, and tooling, you just need to update it or re-armor it for AI/GenAI/LLMs."
Read the full article on maddyness.com

FAQs

Learn how Mindgard secures AI systems by applying attacker-aligned testing, continuous risk assessment, and runtime defense across models, agents, and applications.

How is Mindgard different from AI safety or content moderation tools?
AI safety tools focus on output quality and policy compliance. Mindgard focuses on security. It identifies how attackers exploit AI behavior, system interactions, and agent workflows to achieve real compromise, not just policy violations.
Can Mindgard detect shadow AI usage?
Yes. Mindgard helps identify undocumented or unmanaged AI systems by enumerating behaviors, integrations, and access paths that expose hidden AI risk across the organization.
Does Mindgard replace my security team or existing tools?
No. Mindgard extends existing security teams by providing attacker-aligned visibility and automation that would otherwise require specialized expertise. It complements AppSec, cloud security, and governance tooling rather than replacing them.
What makes Mindgard stand out from other AI security companies?
Mindgard is built on over a decade of AI security research originating at Lancaster University and grounded in offensive security methodology. Rather than evaluating models in isolation, Mindgard tests AI systems the way real attackers do, uncovering high-impact risks that emerge from behavior, system interactions, and deployment context.
How often should AI systems be tested?
AI security testing should be continuous. Changes to models, prompts, tools, data sources, or user behavior can introduce new risks at any time. Mindgard is designed to test AI systems continuously as they evolve.
Can Mindgard handle different kinds of AI models?
Yes. Mindgard is neural-network agnostic and supports generative AI, LLMs, NLP systems, vision, audio, and multi-modal models. More importantly, it secures AI systems end-to-end, including agents, tools, APIs, data sources, and workflows that models interact with in production.
How does Mindgard ensure data security and privacy?
Mindgard follows industry best practices for secure software development and operation, including use of our own platform for testing AI components. We are SOC 2 Type II and GDPR compliant and expect ISO 27001 certification in early 2026.
Can Mindgard work with the LLMs I use today?
Yes. Mindgard works with leading commercial and open-source LLMs and applies continuous testing across deployed models, agents, and applications. This enables teams to identify emerging risks as systems evolve, rather than relying on one-time assessments.
What types of organisations use Mindgard?
Mindgard serves a diverse range of organisations, including those in financial services, healthcare, manufacturing, and cybersecurity. Any enterprise deploying AI technologies can benefit from Mindgard's platform to secure their AI assets and mitigate potential risks.
Why don't traditional AppSec tools work for AI models?
Traditional AppSec assumes deterministic behavior and known vulnerability classes. AI systems are probabilistic, adaptive, and often autonomous, with risks that emerge only at runtime. Attacks such as prompt injection, agent misuse, and behavioral manipulation exploit how AI behaves and interacts with surrounding systems, requiring an attacker-aligned, system-level security approach. This lack of visibility is reflected in industry research from Gartner showing limited enterprise insight into AI risk.
What is automated red teaming?
Automated AI red teaming uses attacker-aligned techniques to continuously test AI systems for real-world exploitation paths. Mindgard automates reconnaissance, adversarial testing, and chained attack scenarios to surface high-impact vulnerabilities with speed, scale, and repeatability.
Why is it important to test instantiated AI models?
AI systems behave differently once deployed. Interactions with users, tools, data, and workflows can introduce vulnerabilities that do not appear during development. Continuous testing of deployed systems is essential to identify emergent risk, validate controls, and maintain security over time.
What are the types of risks Mindgard uncovers?
Mindgard focuses on risks that materially impact confidentiality, integrity, and availability, including behavioral exploitation, unauthorized data access, agent misuse, guardrail bypass, prompt injection, model extraction, and attack paths that pivot into surrounding enterprise systems. https://aitools.inc/tools/mindgard?toolVerificationChallenge=90ca8af2-0f60-4248-a7dd-24610acc5d28

Mindgard identifies various AI security risks, including:

- Jailbreaking: Manipulating inputs to make AI systems perform unintended actions.
- Extraction: Reconstructing AI models to expose sensitive information.
- Evasion: Altering inputs to deceive AI models into incorrect outputs.
- Inversion: Reverse-engineering models to uncover training data.
- Poisoning: Tampering with training data to manipulate model behaviour.
- Prompt Injection: Inserting malicious inputs to trick AI systems into unintended responses.
Learn how Mindgard can help you navigate AI Security

Take the first step towards securing your AI. Book a demo now and we'll reach out to you.