Our mission is to secure the world's AI

Mindgard is the leader in Artificial Intelligence Security Testing. Founded at Lancaster University and backed by cutting edge research, Mindgard enables organizations to secure their AI systems from new threats that traditional application security tools cannot address.

Join others Red Teaming their AI

Our Key Milestones

July 2016

AI Security Lab founded by Prof. Peter Garraghan at Lancaster University dedicated to the exploration and creation of next generation AI systems.

May 2022

Mindgard founded in 2022 at Lancaster University and now based in London, created to empower enterprise security teams to deploy AI and GenAI securely.

Jun 2024

Winners of the Cyber Innovation Prize,
at Infosecurity Europe 2024, reflecting
the team's dedication to excellence
in cybersecurity.

Our Team

Dr. Peter Garraghan

CEO/CTO, Co-Founder

Steve Street

COO/CRO, Co-Founder

Benji Weber

VP of Engineering

Fergal Glynn

VP of Marketing

Dave Ganly

Head of Product

Jonathan Canizales

Operations Manager

William Hackett

Founding ML Engineer

Stefan Trawicki

Founding ML Engineer

Lewis Birch

Founding ML Engineer

Ayomide Apantaku

Software Engineer

Alex Dutton

Software Engineer

Andrew Cook

Software Engineer

Nicole Pellicena

Software Engineer

Janamejay Poddar

Designer

Imran Bohoran

Software Engineer

Join Us

Mindgard's values guide our actions and decisions. These principles form the foundation of our company's culture, shaping how we interact within our teams and with our clients. They inspire us to improve continuously and help us navigate the dynamic landscape of the AI security industry.

Innovative Environment
Join a team that fosters innovation and creativity, providing an environment where your ideas are valued.
Professional Growth
Experience continuous professional growth with access to learning resources, skill-building workshops.
Collaborative Culture
Be part of a collaborative culture that values teamwork and open communication.
Flexible Work Environment
Enjoy a flexible work environment that respects work-life balance. We understand the importance of flexibility.
Cutting-edge Technology
Work with the latest and most advanced technologies in the coding and development space as a member of our team.
Impactful Projects
Contribute to projects that make a real impact. Our team takes on exciting challenges that push the boundaries.

Mindgard in the News

TNW.com Podcast/ May 2024
"We discussed the questions of security of generative AI, potential attacks on it, and what businesses can do today to be safe."
Listen to the full episode on tnw.com
Businessage.com / May 2024
"Even the most advanced AI foundation models are not immune to vulnerabilities. In 2023, ChatGPT itself experienced a significant data breach caused by a bug in an open-source library."
Read the full article on businessage.com
Finance.Yahoo.com / April 2024
"AI is not magic. It's still software, data and hardware. Therefore, all the cybersecurity threats that you can envision also apply to AI."
Read the full article on finance.yahoo.com
Verdict.co.uk / April 2024
"There are cybersecurity attacks with AI whereby it can leak data, the model can actually give it to me if I just ask it very politely to do so."
Read the full article on verdict.co.uk
Sifted.eu / March 2024
"Mindgard is one of 11 AI startups to watch, according to investors."
Read the full article on sifted.eu
Maddyness.com / March 2024
"You don’t need to throw out your existing cyber security processes, playbooks, and tooling, you just need to update it or re-armor it for AI/GenAI/LLMs."
Read the full article on maddyness.com
TechTimes.com / October 2023
"While LLM technology is potentially transformative, businesses and scientists alike will have to think very carefully on measuring the cyber risks associated with adopting and deploying LLMs."
Read the full article on Techtimes.com
Tech.eu / September 2023
"We are defining and driving the security for AI space, and believe that Mindgard will quickly become a must-have for any enterprise with AI assets."
Read the full article on tech.eu
Fintech.global / September 2023
"With Mindgard’s platform, the complexity of model assessment is made easy and actionable through integrations into common MLOps and SecOps tools and an ever-growing attack library."
Read the full article on fintech.global

FAQs

View and learn more about Mindgard's features, data handling capabilities, or integration options.

What makes Mindgard stand out from other AI security companies?
Founded in a leading UK university lab, Mindgard boasts over 10 years of rigorous research in AI security, with public and private partnerships that ensure access to the latest advancements and the most qualified talent in the field.
Can Mindgard handle different kinds of AI models?
Yes, Mindgard is neural network agnostic and supports a wide range of AI models, including Generative AI, LLMs, Natural Language Processing (NLP), audio, image, and multi-modal systems. This versatility allows it to address security concerns across various AI applications.
How does Mindgard ensure data security and privacy?
Mindgard follows industry best practices for secure software development and operation, including use of our own platform for testing AI components. We are GDPR compliant and expect ISO 27001 certification in early 2025.
Can Mindgard work with the LLMs I use today?
Absolutely. Mindgard is designed to secure AI, Generative AI, and LLMs, including popular models like ChatGPT. It enables continuous testing and minimisation of security threats to your AI models and applications, ensuring they operate securely.
What types of organisations use Mindgard?
Mindgard serves a diverse range of organisations, including those in financial services, healthcare, manufacturing, and cybersecurity. Any enterprise deploying AI technologies can benefit from Mindgard's platform to secure their AI assets and mitigate potential risks.
Why don't traditional AppSec tools work for AI models?
The deployment and use of AI introduces new risks, creating a complex security landscape that traditional tools cannot address. As a result, many AI products are being launched without adequate security assurances, leaving organisations vulnerable—an issue underscored by a Gartner finding that 29% of enterprises deploying AI systems have reported security breaches, and only 10% of internal auditors have visibility into AI risk. Many of these new risks such as LLM prompt injection and jailbreaks exploit the probabilistic and opaque nature of AI systems, which only manifest at runtime. Securing these risks, unique to AI models and their toolchains, requires a fundamentally new approach.
What is automated red teaming?
Automated red teaming involves using automated tools and techniques to simulate attacks on AI systems, identifying vulnerabilities without manual intervention. This approach allows for continuous, efficient, and comprehensive security assessments, ensuring AI models are robust against potential threats.
What are the types of risks Mindgard uncovers?
Mindgard identifies various AI security risks, including:

- Jailbreaking: Manipulating inputs to make AI systems perform unintended actions.
- Extraction: Reconstructing AI models to expose sensitive information.
- Evasion: Altering inputs to deceive AI models into incorrect outputs.
- Inversion: Reverse-engineering models to uncover training data.
- Poisoning: Tampering with training data to manipulate model behaviour.
- Prompt Injection: Inserting malicious inputs to trick AI systems into unintended responses.
Why is it important to test instantiated AI models?
Testing instantiated models is crucial because it ensures that AI systems function securely in real-world scenarios. Even if an AI system performs well in development, deployment can introduce new vulnerabilities. Continuous testing helps identify and mitigate these risks, maintaining the integrity and reliability of AI applications.
Learn how Mindgard can help you navigate AI Security

Take the first step towards securing your AI. Book a demo now and we'll reach out to you.