Updated on
March 31, 2026
The State of MCP Security: Key Stats, Attack Patterns, and Risk Benchmarks
Model Context Protocol (MCP) significantly expands AI capabilities—but also introduces major security risks, as exposed servers, high attack success rates, and rapid adoption create a growing attack surface that traditional defenses can’t handle.
Key Takeaways
  • By broadening what AI agents can do across systems, MCP also broadens the attack surface. Security models must adapt to accommodate intelligent agent capabilities, not simply rely on legacy defenses.
  • Surface-level or prompt-based defenses simply won't cut it. Organizations will need layers of system level defenses with robust access controls, monitoring, and governance to help mitigate risks they face in the real world.

Model Context Protocol (MCP) is what connects AI agents to tools that can actually perform actions on your behalf. Whether your AI agent sends messages or starts workflows, MCP enables the automation that you're used to.

That power comes with tradeoffs, chiefly security. If an agent can act on your behalf, your security model has to scale with its access, and each new integration adds another attack surface for abuse.

MCP usage is growing along with AI use. While it provides a developer-friendly way to connect your favorite tools, it has a big impact on cybersecurity. These statistics explain why MCP platforms are at risk, what’s at stake, and how organizations can respond to this new reality. 

The Baseline Exposure of the MCP Ecosystem


Bitsight discovered an estimated 1,000 unauthenticated MCP servers exposed on the internet. Exposure lets anyone who can reach these servers enumerate their tools, and that's just the start of the potential damage. (Bitsight)

Knostic discovered 1,862 MCP servers exposed to the internet. In manual testing of 119 of them, every single server exposed its internal tool inventory without requiring authentication. (Knostic)
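To see why unauthenticated exposure is so dangerous, consider how little an attacker needs: MCP uses JSON-RPC 2.0, and tool discovery is a single standard `tools/list` call. The sketch below builds that request shape; `build_tools_list_request` is an illustrative helper, not part of any SDK.

```python
import json

# Illustration: the JSON-RPC 2.0 message an MCP client sends to enumerate
# a server's tools. On an unauthenticated server, nothing stops an
# anonymous caller from issuing this exact request and reading the reply.
def build_tools_list_request(request_id: int = 1) -> str:
    """Build the standard MCP 'tools/list' request as a JSON string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/list",
        "params": {},
    })

print(build_tools_list_request())
```

The server's response lists every tool's name, description, and input schema, which is exactly the reconnaissance an attacker wants before crafting a tool-misuse attack.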

If you’re using AI today, MCP security risks already live in your production workflows. 87% of surveyed businesses said they were using at least one AI agent (e.g., Copilot). MCP risk is here now. (Obsidian Security)

The Reality of MCP Cyberattacks

One study found that 85% of tested attacks compromised at least one MCP platform. This means attackers are already launching effective MCP-aware attacks. (OpenReview)

An MCP server isn’t the only component at risk: the client/host environment is part of your security surface too. Unfortunately, host-side attacks succeed at high rates even against established AI platforms: 58.3% (Claude), 75% (OpenAI), and 81.7% (Cursor). (arXiv)

Performance varies widely across MCP server providers. AIMultiple found success rates of 76.8% for Bright Data, 64.8% for Firecrawl, and 54.4% for Oxylabs. Because teams are often tempted to bypass security controls when a platform is unreliable, choosing a dependable option can itself reduce risky workarounds by human users. (AIMultiple)

All components of MCP should be considered, but the MCP server side is where almost 75% of attacks begin. (arXiv)

Research indicates the two most common attack types were stealth attacks (53%) and disruption attacks (46%). (arXiv)

If an attacker can reach the tooling and few or no checks are in place, they’re highly likely to succeed: server-side attacks using bypasses or tool misuse had a 75% success rate. (arXiv)

Increased AI Use Accelerates MCP Security Issues

Microsoft researchers recently found that 80% of Fortune 500 companies are already using AI agents. Agents must therefore be given the same observability, governance, and security that applies to human operators. (Microsoft Security Blog)

Anthropic’s ecosystem currently contains 88 official MCP integrations and 255 unofficial community integrations. (arXiv)

There are currently over 10,000 MCP servers actively running. Having so many runnable servers is fantastic for developers who want to build on the technology, but also opens up many more opportunities for attackers. (ITPro)

Analysts predict that adoption will continue at a breakneck pace. One report notes that the MCP PyPI package averaged 1.8M downloads, while the NPM package had been downloaded 6.9 million times as of May 2025. (Tekkix)

Global AI spend will hit $2.52 trillion by the end of 2026, a 44% year-over-year increase over 2025. While AI cybersecurity spend is also projected to rise by 90% this year, it isn’t keeping pace with the demand for AI tools. Protections aren’t developing quickly enough, leaving a lot of security gaps. (SOCRadar / SOC Prime blog)

Examples of MCP Security Incidents


Supply chain risk from MCP servers is real and has consequences. When the malicious Postmark MCP Server package was uploaded, it was reportedly downloaded 1,500 times per week and used in hundreds of workflows, spreading quickly while stealing credentials by BCC-ing outgoing email to the attacker. (ITPro)

The U.S. House of Representatives barred staff from using Microsoft Copilot in March 2024, worried about data potentially ending up in the wrong cloud services. (Concentric AI)

In October 2024, the European Parliament’s IT department disabled built-in AI tools on official devices, imposing a hard ban because of the security uncertainty around confidential data. Outright bans are tempting, but measures like these also stifle innovation. (CryptoRank)

Software supply chains ship risk frequently: Snyk’s 2024 Open Source Security report found that 45% of organizations had to replace vulnerable build components. (Snyk)

Sensitivity labels alone can’t prevent security incidents. Copilot was known to leak private emails carrying sensitivity labels for up to six weeks before security patches were released. (LetsDataScience)

Why Prompt-Only Security Falls Short

Unfortunately, most MCP security defensive techniques succeed less than 30% of the time. (arXiv)

Fifty-nine percent of business leaders say they need better automation for response and recovery. (Cohesity)

Safety prompts alone barely have an impact on security. Prompt-only defense led to a mere 1.22% reduction in attack success rates. (arXiv)

That said, prompt defenses do help in certain contexts. For example, security prompts decreased code-execution attacks by 21.5 percentage points and credential stealing by 21.4 points. (arXiv)

Threat Patterns MCP Teams Have To Plan For

Eighty-seven percent of organizations run services with known exploitable vulnerabilities. (ITPro)

The biggest risks to MCP security are credential exfiltration, internal reconnaissance, firewall bypass, and data exfiltration. (Model Context Protocol)

Different attack categories have varying success rates: host-based attacks achieve a success rate of ~80%, while prompt injections succeed just over 70% of the time. Network poisoning only succeeds 7.7% of the time. (Emergent Mind)

Mainstream models deemed “safe” have also been successfully attacked through MCP. By experimenting with prompts, researchers achieved success rates of over 60% against GPT-4o-mini and DeepSeek-R1. (arXiv)

Agentic Access Turns Small Failures Into Big Incidents

AI agents move 16 times more data than human users do, making a single compromised agent a much bigger event. (Obsidian Security)

Copilot has access to 90% of all Microsoft 365 environments. In setups like this, permission inheritance can enlarge the attack surface, especially when third-party tools share the same permissions. (SaaSSentinel)

Concentric AI reports that 16% of business-critical data is overshared. The report estimates that every organization has more than 800,000 files at risk. (Concentric AI)

Sensitive intellectual property is often reachable beyond its intended audience. Eighty-three percent of at-risk files are overshared internally, 17% with third parties, and 90% of business-critical documents are shared beyond the C-suite. (Concentric AI)

Microsoft found that 32% of organizations have had at least one AI-related data security incident. (Microsoft Security Blog)

Microsoft’s 2026 Data Security Index says that 82% of organizations plan to embed GenAI into data security, but only 47% have controls in place today. (Microsoft Security Blog)

The Challenge of Identity, Secrets, And Access Controls


Ninety-seven percent of businesses that experienced AI breaches lacked AI-specific access controls. (IBM)

Access to MCP tooling is often gained by stealing tokens and secrets; stolen credentials are the initial attack vector in 22% of breaches. (Verizon)

GitGuardian found 23.8 million secrets were pushed to public GitHub repos in 2024 alone (YoY increase of 25%). (GitGuardian)
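One practical way to keep secrets like these out of public repositories is to scan files for credential-shaped strings before they are committed. The sketch below is an illustrative pre-commit check, not GitGuardian's scanner; the pattern names and regexes are simplified assumptions that cover two common formats.

```python
import re

# Illustrative secret patterns (simplified, not exhaustive): AWS access key
# IDs and generic hard-coded API keys/secrets assigned as quoted strings.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][A-Za-z0-9/+]{20,}['\"]"
    ),
}

def find_secrets(text: str) -> list:
    """Return the names of any secret patterns found in the given text."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(text)]
```

A real pre-commit hook would run this over staged files and block the commit if the list is non-empty; dedicated scanners add entropy checks and hundreds of provider-specific patterns on top of this idea.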

Choices That Matter for MCP Security

Making the wrong decisions builds a riskier threat profile. Architecture decisions play a significantly larger role in overall security risk than many teams realize: poor MCP infrastructure choices increased the likelihood of attack success by up to 41%. (Harvard)

Some estimates suggest that up to 25% of AI-generated code contains vulnerabilities such as authentication bypasses and SQL injection. These flaws become even more dangerous once the vulnerable code is wired up to AI agents. (Digital Applied)

To address MCP security issues, you should implement layered controls such as authentication, access control, supply-chain security, input validation, data privacy, network security, and regular security testing. (Tetrate)
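The layering idea above can be made concrete as a gate in front of tool dispatch. This is a minimal sketch under stated assumptions: the role names, tool names, and argument allowlist are all hypothetical, and a production system would use a real identity provider rather than a shared token.

```python
import hmac

# Hypothetical per-role tool allowlist (access control layer).
ROLE_ALLOWLIST = {
    "analyst": {"search_files"},
    "admin": {"search_files", "send_email"},
}

def dispatch(token: str, expected_token: str, role: str,
             tool: str, args: dict) -> str:
    # Layer 1 - authentication: constant-time token comparison.
    if not hmac.compare_digest(token, expected_token):
        raise PermissionError("unauthenticated")
    # Layer 2 - access control: the role must be allowed this tool.
    if tool not in ROLE_ALLOWLIST.get(role, set()):
        raise PermissionError(f"role {role!r} may not call {tool!r}")
    # Layer 3 - input validation: reject unexpected argument keys.
    allowed_args = {"query", "recipient", "body"}
    if not set(args) <= allowed_args:
        raise ValueError("unexpected arguments")
    return f"ran {tool}"
```

The point is that an attack has to defeat every layer, not just one: a stolen prompt can't call a tool without a token, a valid token can't call tools outside its role, and a permitted tool still can't smuggle in unexpected arguments.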

If you’re in a regulated industry, default protections aren’t enough. Among financial services firms using Copilot, 58% say they’ve added security controls beyond Microsoft defaults. (SaaSSentinel)

Secure Your AI Systems at Scale with Mindgard

AI attack surfaces have outgrown point controls and after-the-fact response. As MCP environments expand and AI agents gain greater access to system internals, security teams require inline visibility, testing, and enforcement throughout the application development lifecycle.

The Mindgard Platform helps security teams continuously discover vulnerabilities lurking in machine learning models, internal tools, integrations, and data pipelines. Only by mapping their entire AI attack surface can security teams know what to protect and how.

With automated AI red teaming, teams can ensure defenses are working by leveraging the same learning and querying capabilities an attacker would use to launch attacks across their environment at scale. Runtime protections are applied to stop threats from succeeding.

By mapping the attack surface, prioritizing vulnerabilities based on potential impact, and surfacing actionable insights directly into existing workflows, organizations can remediate vulnerabilities before they lead to breaches.

Attackers don’t need perfectly crafted exploits; a single small vulnerability is often doorway enough to compromise a system and cause widespread damage.

That’s why it’s more important than ever to secure AI like you would any other critical system on your network. Schedule a demo to learn how Mindgard’s continuous testing, attacker-aligned insights, and runtime defense allow organizations to confidently scale AI.

Frequently Asked Questions

How do MCP servers differ from clients and plugins?

A client is the application (ChatGPT, Claude, etc.) making the request to use a tool. An MCP server is the service that performs actions or fetches data on behalf of that client. A plugin (or tool) is a capability the server exposes, such as sending emails or searching files.

How does MCP compare to traditional API integrations?

APIs are great for well-defined workflows with clearly defined endpoints. MCP shines for dynamic, fluid, agentic workflows that are difficult to support with fixed endpoints. As a result, securing MCP is more involved than securing an endpoint: it requires a layered approach that secures access to tools, validates inputs and outputs, and guards against abuse.

What does applying least privilege look like with MCP tools?

Tools should be provided the minimum access necessary to complete a well-defined job. This looks like read-only access, providing short-lived tokens, or role-based access for high-privilege actions. This way, if a tool is abused or compromised, you limit the blast radius.
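Short-lived, scoped tokens can be sketched in a few lines. This is an illustration of the pattern, not a production token service; the `files:read` scope string and the 5-minute default TTL are assumptions for the example.

```python
import secrets
import time

def issue_token(scope: str, ttl_seconds: int = 300) -> dict:
    """Mint a short-lived token bound to a single scope (e.g. 'files:read')."""
    return {
        "value": secrets.token_urlsafe(16),   # unguessable bearer value
        "scope": scope,                        # least-privilege scope
        "expires_at": time.time() + ttl_seconds,
    }

def authorize(token: dict, required_scope: str) -> bool:
    """Allow a tool call only if the token's scope matches and it hasn't expired."""
    return (token["scope"] == required_scope
            and time.time() < token["expires_at"])
```

Because the token expires in minutes and grants exactly one scope, a leaked token lets an attacker read files briefly at worst, rather than send email or escalate, which is precisely the blast-radius limit least privilege is after.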