Updated on
December 12, 2025
Bringing AI Security into Your CI/CD with Mindgard
Mindgard’s GitHub Action example repository shows how to integrate automated AI security testing into CI/CD pipelines so every model or code change is validated against the latest Mindgard capabilities.
Key Takeaways
  • Provides a ready-to-use template for running Mindgard CLI tests automatically within GitHub Actions workflows.
  • Ensures every update to models, prompts, or configurations is evaluated for security regressions as part of standard CI/CD.
  • Keeps customers continuously aligned with Mindgard’s latest testing features because the action pulls updates directly from the repository.

Modern MLOps teams practice Continuous Integration and Continuous Delivery (CI/CD) not just for code, but also for model artifacts and model configurations. As those models evolve through prompt changes, parameter tweaks, or retraining, so does their attack surface. Mindgard’s continuous security testing framework helps teams detect when changes affect their risk posture. Integrating Mindgard into your automated pipelines means security doesn’t wait for the next manual review: tests run every time code or models change, and every time Mindgard’s attack library is updated.

Why CI/CD Integration Matters for AI Security

In a CI/CD context, you want continuous feedback on how changes affect security risk. Traditional testing might catch unit failures or integration issues, but it won’t tell you whether a tweak to your model prompt suddenly makes your service vulnerable to prompt injection, jailbreaks, or other adversarial techniques. Mindgard’s CLI-based security test suite can be invoked as part of your pipeline so that:

  • Security tests run automatically on every commit or pull request.

  • Test results are fresh with every change, reflecting the latest model configuration, dependencies, or dataset shift.

  • Failures can block deployments until risk regressions are addressed (gating).

  • Observational baselines can be established before pipeline gating is enforced.

This approach gives you objective, automated visibility into your model’s adversarial risk posture as part of your standard build checks.

What the mindgard-github-action-example Repository Provides

The Mindgard GitHub Action is designed to pull the latest version of the Mindgard CLI each time a workflow runs. This means the pipeline is not only testing your model or application changes, it is also automatically incorporating every new Mindgard capability, attack technique, and testing enhancement the moment we publish it. Any time Mindgard ships an update, your CI/CD workflow will run against the newest release without requiring configuration changes or action maintenance on your side. This ensures that AI security testing stays continuously up to date with evolving adversarial methods, and that customers benefit from new detection logic and expanded coverage as soon as it becomes available.

Read more on GitHub.

How Mindgard Fits into an Actions Workflow

GitHub Actions workflows are defined in YAML files under .github/workflows/ and consist of jobs, each with a series of steps. These workflows trigger based on events like push, pull request, or schedule.
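As a minimal sketch of that structure, a workflow file (the filename and job name below are illustrative) declares its triggers at the top, followed by one or more jobs made up of steps:

```yaml
# .github/workflows/ai-security.yml (illustrative filename)
name: AI Security Tests

on:
  push:
    branches: [main]     # run on pushes to main
  pull_request:          # and on every pull request
  schedule:
    - cron: "0 6 * * *"  # optional: a daily scheduled run

jobs:
  security-tests:
    runs-on: ubuntu-latest
    steps:
      - name: Placeholder step
        run: echo "Mindgard test steps go here"
```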

The example repo illustrates:

  1. Triggering a workflow on a relevant event (e.g., push to main, or pull_request).

  2. Checking out code so the workflow has access to your model and test configuration.

  3. Installing dependencies (e.g., setting up Python and installing the Mindgard CLI with pip install mindgard).

  4. Running a Mindgard test command using the CLI (mindgard test --config your-config.toml).

  5. Interpreting test results—passing or failing based on your configuration.

By placing these steps inside a workflow, you get automated risk analysis every time relevant changes touch your model stack.
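Under those assumptions, the five steps above can be sketched as a single job. The action versions, Python version, secret name, and config filename here are illustrative placeholders, not taken from the example repository:

```yaml
jobs:
  mindgard-security:
    runs-on: ubuntu-latest
    steps:
      # Step 2: check out the repository so the model and
      # test configuration are available to later steps
      - uses: actions/checkout@v4

      # Step 3: install Python and the latest Mindgard CLI
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install mindgard

      # Steps 4-5: run the security tests; a non-zero exit code
      # from the CLI fails this step, and therefore the workflow.
      # MINDGARD_API_KEY is a hypothetical placeholder for
      # whatever authentication your Mindgard setup requires.
      - run: mindgard test --config your-config.toml
        env:
          MINDGARD_API_KEY: ${{ secrets.MINDGARD_API_KEY }}
```

Because the CLI is installed fresh on each run, every workflow execution picks up the latest published Mindgard release automatically.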

Gating Builds Based on Risk Thresholds

Mindgard’s CLI supports exit codes that reflect the outcome of the test. By default, the CLI returns a zero exit code for success, but risk gating can be configured so that high-risk test results produce a non-zero exit code. That lets you fail the build automatically if a test crosses a given risk threshold.

For example, setting a --risk-threshold flag can instruct Mindgard to exit non-zero if any attack type exceeds your defined threshold of flagged events. This gives pipeline authors fine-grained control over when a test failure should stop deployment versus when it should just log a warning. 
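Assuming the --risk-threshold flag described above, the two modes might look like the following workflow steps (the threshold value and config filename are illustrative):

```yaml
steps:
  # Blocking: a non-zero exit code fails the build when any
  # attack type exceeds the defined threshold
  - name: Gate deployment on risk threshold
    run: mindgard test --config your-config.toml --risk-threshold 50

  # Warn-only: record the result but never fail the pipeline,
  # useful while establishing an observational baseline
  - name: Log security results without gating
    run: mindgard test --config your-config.toml --risk-threshold 50
    continue-on-error: true
```

The continue-on-error variant pairs naturally with the observational-baseline approach mentioned earlier: teams can watch results for a few weeks before switching the step over to hard gating.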

Start Integrating Today

If you’re ready to incorporate Mindgard security testing into your GitHub-based CI/CD workflows, the mindgard-github-action-example repository provides a solid starting point.

Explore the repo and get started here:

https://github.com/Mindgard/mindgard-github-action-example 

https://docs.mindgard.ai/user-guide/workflow-integrations