Our mission is to secure the world's AI

Organizations are rapidly adopting AI technologies, embedding them into production environments without full visibility into how their probabilistic and opaque behaviors introduce exploitable risk. Mindgard addresses this challenge by providing AI security solutions that help enterprises secure AI models, agents, and applications across the AI lifecycle. Spun out of more than a decade of AI security research at Lancaster University and headquartered in Boston and London, Mindgard enables organizations to identify, assess, and mitigate real-world AI threats as these systems move into production.

The Mindgard Philosophy

Mindgard’s philosophy is grounded in offensive security. Effective defenses and security controls can only be built by emulating how real attackers scope, plan, and exploit targets. Mindgard creates technologies that empower organizations to:

● Understand what attackers can learn about their AI
● Assess how attackers can exploit their AI
● Stop attackers from breaching their AI

Achieving these goals at scale is non-trivial. Mindgard has assembled an elite team of AI and offensive security experts whose research is embedded directly into the platform, allowing security teams to apply specialized AI security technology and expertise without needing to build it in-house.

Join others Red Teaming their AI

Our Key Milestones

May 2022

Mindgard was founded on pioneering research by Dr. Peter Garraghan at Lancaster University, which showed traditional AppSec could not address AI-specific risks.

Dec 2024

Seed round led by top security investors, validating demand for an offensive-security approach to AI and the thesis that effective defenses must emulate real attacker behavior.

Sept 2025

Expanded leadership with key hires: CEO James Brear, Head of Research Aaron Portnoy, and Offensive Security Lead Rich Smith, accelerating the research-led foundation.

Jan 2026

Secured Fortune 500 design partners, validating enterprise demand for attacker-aligned AI security.

Our Team

We’ve assembled the strongest AI security team in the world, with deep roots in cybersecurity AI research and behavioral analysis.

James Brear

Chief Executive Officer

Dr. Peter Garraghan

Chief Science Officer, Founder

Aaron Portnoy

Head of Research & Innovation

Rich Smith

Offensive Security Lead

Fergal Glynn

Chief Marketing Officer

Jonathan Canizales

Operations Manager

Nicole Pellicena

Product Lead

Imran Bohoran

Engineering Lead

William Hackett

Founding ML Engineer

Stefan Trawicki

Founding ML Engineer

Lewis Birch

Founding ML Engineer

Ayomide Apantaku

Software Engineer

Alex Dutton

Software Engineer

Andrew Cook

Software Engineer

Janamejay Poddar

Designer

Piotr Ryciak

AI Red Teamer

Amanda Worker

AI Data Scientist

Robert Cook

Software Engineer

Rob Heath

Software Engineer

Join Us

Mindgard's values guide our actions and decisions. These principles form the foundation of our company's culture, shaping how we interact within our teams and with our clients. They inspire us to improve continuously and help us navigate the dynamic landscape of the AI security industry.

Innovative Environment
Join a team that fosters innovation and creativity, providing an environment where your ideas are valued.
Professional Growth
Experience continuous professional growth with access to learning resources, skill-building workshops.
Collaborative Culture
Be part of a collaborative culture that values teamwork and open communication.
Flexible Work Environment
Enjoy a flexible work environment that respects work-life balance. We understand the importance of flexibility.
Cutting-edge Technology
Work with the latest and most advanced technologies in the coding and development space as a member of our team.
Impactful Projects
Contribute to projects that make a real impact. Our team takes on exciting challenges that push the boundaries.

Mindgard in the News

scawardseurope.com / June 2025
Mindgard wins Best AI Solution and Best New Company at the SC Awards Europe 2025!
Read more
Safetydetectives.com / Feb 2025
"The best AI security solutions will balance automation with oversight, assessment through red teaming, and strengthening defenses without introducing new vulnerabilities."
Read the full article on safetydetectives.com
Techradar.com / Jan 2025
"Securing AI demands trust, socio-technical integration and transparency"
Read the full article on Techradar.com
TechCrunch.com / Dec 2024
“Mindgard raises $8M to safeguard AI with industry-first AI security solution”
Read the full article on TechCrunch.com
TNW.com Podcast / May 2024
"We discussed the questions of security of generative AI, potential attacks on it, and what businesses can do today to be safe."
Listen to the full episode on tnw.com
Businessage.com / May 2024
"Even the most advanced AI foundation models are not immune to vulnerabilities. In 2023, ChatGPT itself experienced a significant data breach caused by a bug in an open-source library."
Read the full article on businessage.com
Finance.Yahoo.com / April 2024
"AI is not magic. It's still software, data and hardware. Therefore, all the cybersecurity threats that you can envision also apply to AI."
Read the full article on finance.yahoo.com
Verdict.co.uk / April 2024
"There are cybersecurity attacks with AI whereby it can leak data, the model can actually give it to me if I just ask it very politely to do so."
Read the full article on verdict.co.uk
Sifted.eu / March 2024
"Mindgard is one of 11 AI startups to watch, according to investors."
Read the full article on sifted.eu
Maddyness.com / March 2024
"You don’t need to throw out your existing cyber security processes, playbooks, and tooling, you just need to update it or re-armor it for AI/GenAI/LLMs."
Read the full article on maddyness.com

FAQs

View and learn more about Mindgard's features, data handling capabilities, or integration options.

What makes Mindgard stand out from other AI security companies?
Founded in a leading UK university lab, Mindgard boasts over 10 years of rigorous research in AI security, with public and private partnerships that ensure access to the latest advancements and the most qualified talent in the field.
Can Mindgard handle different kinds of AI models?
Yes, Mindgard is neural network agnostic and supports a wide range of AI models, including Generative AI, LLMs, Natural Language Processing (NLP), audio, image, and multi-modal systems. This versatility allows it to address security concerns across various AI applications.
How does Mindgard ensure data security and privacy?
Mindgard follows industry best practices for secure software development and operation, including use of our own platform for testing AI components. We are GDPR compliant and expect ISO 27001 certification in early 2026.
Can Mindgard work with the LLMs I use today?
Absolutely. Mindgard is designed to secure AI, Generative AI, and LLMs, including popular models like ChatGPT. It enables continuous testing and minimisation of security threats to your AI models and applications, ensuring they operate securely.
What types of organisations use Mindgard?
Mindgard serves a diverse range of organisations, including those in financial services, healthcare, manufacturing, and cybersecurity. Any enterprise deploying AI technologies can benefit from Mindgard's platform to secure their AI assets and mitigate potential risks.
Why don't traditional AppSec tools work for AI models?
The deployment and use of AI introduces new risks, creating a complex security landscape that traditional tools cannot address. As a result, many AI products are being launched without adequate security assurances, leaving organisations vulnerable—an issue underscored by a Gartner finding that 29% of enterprises deploying AI systems have reported security breaches, and only 10% of internal auditors have visibility into AI risk. Many of these new risks such as LLM prompt injection and jailbreaks exploit the probabilistic and opaque nature of AI systems, which only manifest at runtime. Securing these risks, unique to AI models and their toolchains, requires a fundamentally new approach.
What is automated red teaming?
Automated red teaming involves using automated tools and techniques to simulate attacks on AI systems, identifying vulnerabilities without manual intervention. This approach allows for continuous, efficient, and comprehensive security assessments, ensuring AI models are robust against potential threats.
What are the types of risks Mindgard uncovers?
Mindgard identifies various AI security risks, including:

- Jailbreaking: Manipulating inputs to make AI systems perform unintended actions.
- Extraction: Reconstructing AI models to expose sensitive information.
- Evasion: Altering inputs to deceive AI models into incorrect outputs.
- Inversion: Reverse-engineering models to uncover training data.
- Poisoning: Tampering with training data to manipulate model behaviour.
- Prompt Injection: Inserting malicious inputs to trick AI systems into unintended responses.
Why is it important to test instantiated AI models?
Testing instantiated models is crucial because it ensures that AI systems function securely in real-world scenarios. Even if an AI system performs well in development, deployment can introduce new vulnerabilities. Continuous testing helps identify and mitigate these risks, maintaining the integrity and reliability of AI applications.
Learn how Mindgard can help you navigate AI Security

Take the first step towards securing your AI. Book a demo now and we'll reach out to you.