Meet Mindgard at:

LLMSEC 2025

Vienna, Austria

August 1, 2025

LLMSEC is an academic event that publishes and presents work on adversarially-induced failure modes of large language models, the conditions that lead to them, and their mitigations.

Mindgard researchers will present our paper 'Bypassing LLM Guardrails: An Empirical Analysis of Evasion Attacks against Prompt Injection and Jailbreak Detection Systems'.

Learn how Mindgard can help you navigate AI Security

Take the first step towards securing your AI. Book a demo now and we'll reach out to you.