Updated on September 3, 2025
AI Vulnerability Database: The Top AI Vulnerabilities
AI systems are vulnerable across models, data, infrastructure, and governance. Resources like the AI Vulnerability Database (AVID) and Mindgard help organizations identify, prioritize, and defend against these risks.
Key Takeaways
  • AI systems face critical vulnerabilities across models, data, infrastructure, and governance that attackers can exploit, putting compliance, security, and intellectual property at risk.
  • Open-source repositories like AVID help organizations identify and mitigate these threats, while platforms such as Mindgard provide proactive defense against AI-specific risks.

Artificial intelligence helps organizations save time and improve the quality of their outputs. However, AI isn’t impenetrable; attackers exploit known weaknesses to exfiltrate data, steal your model, or force it to generate biased outputs that harm the user experience. Compliance, security, and intellectual property are on the line, and it’s never been more important for companies to protect their investment in AI.

It can be challenging to stay on top of the latest threats against AI. Fortunately, the AI Vulnerability Database (AVID), along with other AI vulnerability and risk resources, makes it much easier to understand and mitigate known threats before attackers use that knowledge against you. Below, we break down the top vulnerabilities facing AI models today.

4 Top AI Vulnerabilities, Per AVID

Screenshot of the AVID AI Vulnerability Database webpage showing definitions of vulnerabilities and reports, with a section listing vulnerabilities by year

AVID is an open-source repository that catalogs failures and vulnerabilities for AI models. AVID categorizes known threats based on security, ethics, and performance to help businesses understand and prioritize potential attacks. 

We mapped AVID’s findings to four broad risk categories you can use in your own AI vulnerability assessments:

  • Model vulnerabilities: Weaknesses in the AI model’s architecture that attackers can exploit or use in unsafe ways.
  • Data vulnerabilities: Risks in the datasets used to train, validate, or operate AI models.
  • System or infrastructure vulnerabilities: Flaws in the surrounding infrastructure, APIs, or supply chain.
  • Human or governance vulnerabilities: Issues stemming from poor policy, oversight, or ethical decision-making.
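
To make these categories actionable, it can help to tag findings in code so each one lands in the right triage queue. Below is a minimal Python sketch under that assumption; the dataclass, category names, and example entries are our own illustration (using IDs discussed in this article), not an official AVID schema.

```python
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    """Broad buckets used in this article; not an official AVID taxonomy."""
    MODEL = "model"
    DATA = "data"
    INFRASTRUCTURE = "system_or_infrastructure"
    GOVERNANCE = "human_or_governance"


@dataclass
class Finding:
    avid_id: str      # identifier of the AVID report or vulnerability
    summary: str
    category: RiskCategory


# Illustrative mapping of a few entries discussed in this article.
findings = [
    Finding("AVID-2023-V026", "LLM recommended nonexistent papers", RiskCategory.MODEL),
    Finding("AVID-2023-V027", "Prompt injection led to SQL injection / RCE", RiskCategory.DATA),
    Finding("AVID-2023-V015", "Dependency confusion in PyTorch-nightly", RiskCategory.INFRASTRUCTURE),
]

# Group findings by category so each owner (ML, data, platform, governance)
# gets its own triage queue.
by_category: dict[RiskCategory, list[Finding]] = {}
for finding in findings:
    by_category.setdefault(finding.category, []).append(finding)

for category, items in by_category.items():
    print(category.value, [f.avid_id for f in items])
```

Grouping findings this way makes it easier to route model issues to ML engineers, data issues to data owners, infrastructure issues to platform teams, and governance issues to risk and compliance.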

Model Vulnerabilities

These flaws live within the AI model itself, often in how it processes inputs, generates outputs, or exposes sensitive functionality: 

  • Generative misinformation (AVID-2023-V026): ChatGPT recommended research papers that didn’t exist or paired real titles with the wrong authors (a mitigation sketch follows this list).
  • Inappropriate or unsafe outputs (AVID-2023-V017): YouTube Kids’ recommendation system suggested disturbing videos to children.
  • Autonomous decision failures (AVID-2023-V021, AVID-2023-V019): From Uber’s self-driving cars running red lights to Boeing’s MCAS software pushing aircraft noses down due to faulty sensor data, flawed model logic has caused significant safety issues. 
  • Model theft (AVID-2023-V008): Researchers replicated OpenAI’s GPT-2, highlighting how proprietary models can be cloned and misused.
  • Prompt override and DDoS risk (AVID-2023-V016): Attackers bypassed model safety features and used the models to launch denial-of-service attacks.
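
Generative misinformation like the fabricated paper recommendations above can be partly caught by validating model output against a trusted source before it reaches users. The sketch below is a simplified illustration that checks citations against a hypothetical local catalog; a production system would query a real bibliographic database or search API instead.

```python
import re

# Hypothetical allowlist of known papers (title -> authors). In a real system
# this would be a lookup against a bibliographic database or search API.
KNOWN_PAPERS = {
    "attention is all you need": ["Vaswani", "Shazeer", "Parmar"],
    "deep residual learning for image recognition": ["He", "Zhang", "Ren", "Sun"],
}


def validate_citation(title: str, claimed_authors: list[str]) -> bool:
    """Return True only if the title exists and at least one claimed author matches."""
    key = re.sub(r"\s+", " ", title.strip().lower())
    actual_authors = KNOWN_PAPERS.get(key)
    if actual_authors is None:
        return False  # the model may have invented the paper entirely
    return any(author in actual_authors for author in claimed_authors)


# A response that pairs a real title with the wrong author fails the check.
print(validate_citation("Attention Is All You Need", ["Hinton"]))    # False
print(validate_citation("Attention Is All You Need", ["Vaswani"]))   # True
print(validate_citation("A Paper That Does Not Exist", ["Anyone"]))  # False
```

It’s a blunt guard, but the principle scales: don’t let generated claims about the outside world reach users without an independent check.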

Data Vulnerabilities

A padlock placed on a laptop keyboard with red, green, and blue light trails representing cybersecurity and protection against digital threats
Image by FlyD from Unsplash

When AI is trained or fed on compromised, biased, or malicious data, its outputs and security suffer: 

  • Data poisoning through prompt injection (AVID-2023-V027): Python and Ruby apps allowed SQL injection and remote code execution. All attackers had to do was politely request it through an LLM (see the sketch after this list).
  • Bias in training data (AVID-2023-V024): The Northpointe COMPAS algorithm was twice as likely to label Black defendants as high-risk compared to white defendants.
  • Geopolitical bias in sentiment analysis (AVID-2025-R0002): Even neutral phrases received biased evaluations based on region.
  • Algorithmic harm from biased scheduling data (AVID-2023-V023): Starbucks’ scheduling algorithm disadvantaged wage workers and hurt scheduling stability.
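
The prompt-injection-to-SQL-injection case above reflects a general rule: never interpolate LLM output directly into a query string. Here’s a minimal sketch using Python’s built-in sqlite3 module, with the table and data invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")

# Imagine this string came back from an LLM asked to "find a user".
# An attacker can steer the model into emitting something hostile.
llm_output = "alice' OR '1'='1"

# Unsafe: string formatting lets the injected clause change the query's meaning.
unsafe_query = f"SELECT * FROM users WHERE name = '{llm_output}'"
print(conn.execute(unsafe_query).fetchall())  # returns every row

# Safer: a parameterized query treats the model's output as data, not SQL.
safe_query = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe_query, (llm_output,)).fetchall())  # returns nothing
```

The parameterized version treats whatever the model produced as data rather than executable SQL, which closes off this particular path even when the prompt has been manipulated.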

System or Infrastructure Vulnerabilities

These weaknesses exist in the environment surrounding your AI, including APIs, dependencies, and hardware: 

  • Supply chain compromise (AVID-2023-V015): A malicious binary was uploaded to the PyTorch-nightly dependency chain via “dependency confusion” (a verification sketch follows this list).
  • Cloud-based model evasion (AVID-2023-V014): Attackers exploited the training process of antimalware models on user systems before the models were uploaded to the cloud.
  • Physical evasion of recognition systems (AVID-2023-V012, AVID-2023-V005): Facial recognition systems were fooled or hijacked, allowing hackers to impersonate users and commit large-scale fraud.
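
Dependency-confusion incidents like the PyTorch-nightly compromise above are easier to catch when artifacts are verified against pinned hashes before they’re installed; pip’s --require-hashes mode does this natively for Python packages. The sketch below shows the same idea in plain Python, with the filename and expected digest invented for illustration:

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist of expected SHA-256 digests for downloaded artifacts.
# In practice these would come from a lockfile or a signed manifest.
EXPECTED_HASHES = {
    "example_package.whl": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}


def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large artifacts needn't fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify(path: Path) -> bool:
    """Refuse any artifact whose digest doesn't match the pinned allowlist."""
    expected = EXPECTED_HASHES.get(path.name)
    return expected is not None and sha256_of(path) == expected


# Demo: an empty file hashes to the well-known "empty input" SHA-256 digest
# pinned above, so the check passes; a tampered or substituted artifact fails.
artifact = Path("example_package.whl")
artifact.write_bytes(b"")
print(verify(artifact))  # True
```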

Human Vulnerabilities

Not all vulnerabilities are purely technical. Poor oversight, governance, or ethical judgment can be just as dangerous:

  • Unsafe physical deployments (AVID-2023-V018): An Amazon warehouse robot ruptured a can of bear spray, hospitalizing 24 workers.
  • Medical device failures (AVID-2023-V020): Between 2003 and 2013, numerous surgical robots malfunctioned, resulting in thousands of injuries and fatalities.
  • Academic integrity violations (AVID-2025-R0001): AI systems providing direct answers to homework undermined policy compliance.

Other Resources on AI Risks & Adversary Tactics

AVID catalogs specific AI vulnerabilities, but it’s not the only resource available. There are several other repositories and frameworks that help security teams understand AI risks from different perspectives. 

MIT AI Risk Repository

Focused on ethics, governance, and systemic risks, the MIT AI Risk Repository organizes AI-related hazards beyond technical exploits. It doesn’t track vulnerabilities like a CVE database, but it’s valuable for connecting technical flaws to business, compliance, and societal impacts.

MITRE ATLAS™

Dashboard view of Mindgard’s MITRE ATLAS™ Adviser mapping AI attack tactics, with columns for reconnaissance, access, execution, privilege escalation, and critical risks like prompt injection and model evasion

MITRE ATLAS™ is modeled after MITRE ATT&CK® and maps how adversaries exploit AI systems through tactics like data poisoning, model evasion, and prompt injection. Mindgard’s ATLAS™ Adviser takes this framework a step further by linking red teaming results directly to ATLAS™, standardizing how organizations measure AI security.

NIST AI Risk Management Framework (RMF)

The NIST AI Risk Management Framework (RMF) is U.S. government guidance on how to identify, measure, and manage risks in AI systems. It emphasizes trustworthy AI principles (fairness, transparency, and resilience), making it a useful complement to vulnerability-focused databases.

OECD AI Policy Observatory

The OECD AI Policy Observatory provides research and policy resources on global AI risks, including governance frameworks and ethical guidelines. While it’s not a vulnerability tracker, it’s helpful for organizations operating across borders that need to align with international standards.

ENISA Reports on AI Cybersecurity

The European Union Agency for Cybersecurity (ENISA) publishes guidance on securing AI systems against adversarial attacks and supply chain threats. These reports provide regional best practices and sector-specific insights.

OWASP AI/LLM Security Projects

Best known for its Top 10 lists in web application security, OWASP has expanded into AI and machine learning risks. The OWASP Top 10 for LLM Applications outlines common threats such as prompt injection, data leakage, and insecure plugin design. 

While not a vulnerability database, OWASP provides developer-focused guidance that helps teams translate abstract risks into secure coding practices, making it a practical complement to AVID, ATLAS™, and other frameworks.

Together, these resources create a layered, more comprehensive view of AI-specific risks. AVID catalogs known vulnerabilities in AI models and systems, the MIT AI Risk Repository covers systemic and ethical risks to watch, MITRE ATLAS™ maps adversary tactics already seen in practice, and NIST, OECD, and ENISA give organizations governance and compliance frameworks to manage AI risks responsibly.

Mindgard: Your First Line of AI Defense

AI is the foundation of modern innovation, but as the AVID database shows, no system is immune to failure or exploitation. From biased algorithms to poisoned datasets and compromised supply chains, vulnerabilities can strike at any stage of your AI lifecycle. 

That’s where Mindgard’s Artifact Scanning and Offensive Security solutions come in. Our platform continuously scans for AI-specific vulnerabilities, detects risks before they can be exploited, and provides you with actionable intelligence to keep your models safe. Whether it’s adversarial attacks, data poisoning, or supply chain threats, Mindgard helps you stay one step ahead.

Protect your investment in AI. Book a Mindgard demo today and see how we can safeguard your models from vulnerabilities.

Frequently Asked Questions

What is an AI vulnerability database?

An AI vulnerability database is a centralized repository of known weaknesses and exploits specific to AI applications and systems. AI teams can use databases like AVID to stay informed about threats to models, datasets, and infrastructure.

Can AI vulnerabilities really cause physical harm?

Yes. Flawed AI decisions have contributed to autonomous vehicle crashes, surgical robot errors, and warehouse accidents, resulting in injuries, costly recalls, and even fatalities.

How often should I check for AI vulnerabilities?

AI security isn’t a “set it and forget it” process. Monitor AI vulnerability databases regularly, and ideally integrate real-time alerts into your DevSecOps pipeline so you can patch or retrain models before attackers exploit newly disclosed weaknesses.
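
What that might look like in practice: a scheduled pipeline job that scans an exported vulnerability feed for entries mentioning components you actually run and fails the build when it finds one. The feed path, JSON shape, and component names below are assumptions for illustration, not an official AVID export format.

```python
import json
import sys
from pathlib import Path

# Components in our stack that we want to watch for (illustrative names only).
WATCHED_COMPONENTS = {"pytorch", "langchain", "gpt-2"}

# Hypothetical feed file exported from a vulnerability database; the JSON
# shape assumed here is [{"id": ..., "summary": ...}, ...].
FEED_PATH = Path("ai_vuln_feed.json")


def find_relevant_entries(feed: list[dict]) -> list[dict]:
    """Return feed entries whose text mentions any watched component."""
    hits = []
    for entry in feed:
        text = f"{entry.get('id', '')} {entry.get('summary', '')}".lower()
        if any(component in text for component in WATCHED_COMPONENTS):
            hits.append(entry)
    return hits


if __name__ == "__main__":
    feed = json.loads(FEED_PATH.read_text()) if FEED_PATH.exists() else []
    hits = find_relevant_entries(feed)
    for entry in hits:
        print(f"ALERT: {entry.get('id')} - {entry.get('summary')}")
    # A nonzero exit code fails the pipeline and forces a human to triage.
    sys.exit(1 if hits else 0)
```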