AI security defends machine learning and generative systems from evolving threats like data poisoning and model theft through full-lifecycle protections and proactive testing.
Fergal Glynn

AI adoption is accelerating beyond experimentation. Enterprises are moving from isolated generative AI pilots to production-grade systems that include autonomous agents, multimodal models, retrieval pipelines, and AI-driven decision making. As this shift takes hold, AI risk becomes operational risk.
In its January 2026 research, Emerging Tech: Top-Funded Startups in AI TRiSM: Agentic AI and Beyond, Gartner examines the companies shaping how organizations manage trust, risk, and security across this new AI landscape. Mindgard is included among the startups advancing AI security testing, a category Gartner identifies as critical for understanding how AI systems fail under real-world conditions.
Gartner’s analysis reflects a broader market truth. As agentic and multimodal AI systems grow more capable, they also expand the attack surface in ways traditional security tooling cannot observe, reproduce, or test. Managing AI TRiSM requires more than governance frameworks or static controls. It requires adversarial testing that mirrors attacker behavior across the full AI system.
That is the problem Mindgard was built to solve.
AI TRiSM, or AI Trust, Risk, and Security Management, is Gartner’s framework for managing the full spectrum of risk introduced by enterprise AI systems, from development and deployment through runtime operation. As outlined in Gartner’s AI TRiSM Market Guide, it spans multiple disciplines including AI security, governance, information protection, and continuous risk management. The Top-Funded Startups in AI TRiSM: Agentic AI and Beyond research builds directly on this foundation by showing how the market is evolving from high-level governance concepts to concrete, technical capabilities that address real-world AI behavior.
In particular, Gartner highlights that emerging risks from agentic AI, multimodal systems, and complex AI applications cannot be managed through policy alone. They require security testing, runtime visibility, and system-level validation. This reinforces the core premise of AI TRiSM. Trust in AI systems must be earned through evidence. Risk must be continuously assessed as systems change. Security must reflect how AI actually behaves in production, not how it is expected to behave on paper. For a deeper overview of the AI TRiSM framework, see Gartner’s AI TRiSM Market Guide on the Mindgard blog.
AI TRiSM is often discussed in terms of governance, compliance, and policy. Those elements matter, but they are incomplete without technical validation. You cannot manage AI risk unless you can first discover it.
Mindgard approaches AI TRiSM from an attacker-aligned perspective. Instead of assuming models and agents behave as designed, Mindgard continuously tests how they behave when probed, manipulated, and coerced. This reflects a core principle of the Mindgard Philosophy. Real AI risk emerges from system behavior, not from specifications or intent.
Gartner highlights AI security testing as a distinct and growing category within AI TRiSM, particularly as organizations deploy agentic AI, multimodal models, and complex AI applications. These systems exhibit nondeterministic behavior that cannot be assessed through traditional application security testing or model benchmarking alone.
Mindgard supports AI TRiSM by providing continuous, automated AI security testing across the AI lifecycle. This includes pre-deployment testing, runtime assessment, and ongoing discovery of new vulnerabilities as systems evolve. By simulating real attack techniques, Mindgard reveals failure modes that governance checklists and static evaluations miss.
This approach allows security teams to move from theoretical risk to demonstrable impact. Instead of asking whether an AI system should be secure, Mindgard shows where and how it can be exploited.
Mindgard’s platform delivers AI TRiSM capabilities that align directly with Gartner’s view of where AI risk is emerging, particularly in AI security testing and agentic systems.
Continuous Automated Red Teaming for AI:
Mindgard conducts continuous automated AI red teaming against AI models, agents, and applications. This includes single-shot and multi-turn attacks that probe for prompt injection, data exfiltration, jailbreaks, agent manipulation, and unsafe emergent behaviors. Testing is ongoing, not point-in-time, reflecting the reality that AI systems change as models, data, and tools evolve.
Agentic AI Security Testing:
As Gartner notes, AI agents represent a new attack vector. They can retrieve data, invoke tools, and take actions across systems at machine speed. Mindgard tests how agents can be abused through indirect prompt injection, tool misuse, excessive agency, and cross-agent interaction failures. This exposes risks that only appear when agents operate within real workflows and ecosystems.
Multimodal AI Risk Assessment:
Many enterprises are deploying AI systems that process text, images, audio, and video. Mindgard tests multimodal models for vulnerabilities that arise when inputs are combined or manipulated across modalities. This is particularly important in regulated industries where vision, document processing, and voice systems are core to operations.
System-Level AI Testing Beyond the Model:
Mindgard does not treat AI models in isolation. Testing extends across retrieval pipelines, plugins, APIs, orchestration layers, and external integrations. This reflects the Mindgard Philosophy that attackers target systems, not models. By testing end-to-end behavior, Mindgard uncovers vulnerabilities that emerge only when components interact.
Actionable AI Risk Intelligence:
Mindgard translates findings into clear risk evidence aligned to enterprise security workflows. Results show what was exploited, how it was exploited, and why it matters. This supports informed decision making across security, risk, and governance teams and enables AI TRiSM programs to prioritize remediation based on real exposure.
Together, these capabilities allow organizations to operationalize AI TRiSM by grounding governance, policy, and compliance efforts in technical reality.
Gartner’s research underscores the scale and urgency of AI TRiSM. According to the report, early-stage AI TRiSM startups raised approximately $1.726 billion in venture funding between October 2022 and September 2025. That level of investment reflects a market recognizing that AI introduces a fundamentally different risk profile.
The research also makes clear that AI security testing plays a unique role within AI TRiSM. While governance frameworks define intent and controls, testing determines whether those controls hold under attack. As agentic AI adoption accelerates, security leaders can no longer rely on assumptions about how AI systems behave.
Mindgard’s inclusion in this research reflects its focus on exposing real vulnerabilities rather than theoretical ones. It validates the need for offensive, attacker-aligned testing as a core pillar of AI TRiSM.
Enterprises are no longer experimenting at the edges. AI systems are embedded in customer workflows, internal operations, and decision making. Agents retrieve sensitive data, take autonomous actions, and interact with external services. In this environment, AI failures propagate quickly and at scale.
Traditional security tools were not designed for probabilistic systems that change behavior based on context, memory, and interaction. Model evaluations and safety datasets provide breadth, but they lack depth. They do not show how attackers chain behaviors together to achieve real impact.
Mindgard was built to operate where AI risk actually manifests. By continuously testing AI systems as attackers would, Mindgard enables organizations to discover, measure, and reduce AI risk before it becomes an incident.
AI TRiSM is ultimately about trust. Trust that AI systems behave as intended. Trust that controls work under pressure. Trust that risk is understood, not assumed.
Gartner’s Emerging Tech: Top-Funded Startups in AI TRiSM: Agentic AI and Beyond highlights a market moving rapidly toward that reality. Mindgard’s role within AI security testing reflects a simple truth. You cannot govern what you cannot test, and you cannot secure what you do not understand.
By aligning AI TRiSM with attacker behavior and system-level testing, Mindgard helps enterprises move from AI optimism to AI confidence.
Gartner, Emerging Tech: Top-Funded Startups in AI TRiSM: Agentic AI and Beyond, 13 January 2026.
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved. Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, express or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Gartner subscribers can access the full Emerging Tech: Top-Funded Startups in AI TRiSM: Agentic AI and Beyond report here.