January 22, 2025
The MIT AI Risk Repository: Practical Insights for AI Red Teamers and Pen Testers
Discover how the MIT AI Risk Repository is helping AI red teamers and pen testers tackle complex security challenges with actionable insights.
Key Takeaways

Comprehensive and Free Resource: The MIT AI Risk Repository provides an open, structured database categorizing 1,000 AI risks by cause and domain, offering a practical and accessible tool for AI security professionals globally.

Significant Growth and Adoption: Since its launch in August 2024, the repository has expanded its coverage by 300 risks, added 13 new frameworks, and drawn roughly 90,000 site visits, demonstrating its value to governments, companies, and researchers worldwide.

Unique Features: The repository’s causal and domain taxonomies allow users to analyze risks across deployment stages and explore how causal factors relate to specific domains, setting it apart from resources like the OWASP LLM Top 10 and MITRE ATLAS.

As artificial intelligence becomes ubiquitous, ensuring the security of these systems is more important than ever. The MIT AI Risk Repository provides red teamers and penetration testers with actionable insights into AI risks, making it a critical open-source resource for addressing the complex challenges of AI security.

What Is the MIT AI Risk Repository?

The MIT AI Risk Repository is a central database that categorizes and documents vulnerabilities and threats in AI systems. It offers a structured way to understand and mitigate risks, making it a valuable tool for professionals working in AI security. It stands alongside other important efforts in the space, such as OWASP's security guidance and MITRE's ATLAS framework, which maps adversarial tactics and techniques. While OWASP and MITRE provide excellent general and cyber-specific resources, the MIT AI Risk Repository focuses specifically on the unique challenges of AI systems, complementing both.

The MIT AI Risk Repository is free to copy and use, making it an accessible resource for organizations of all sizes. The database is delivered as a spreadsheet that anyone can copy and adapt. Click here to access the spreadsheet.

The database's causal and domain taxonomies give users powerful tools to filter and analyze risks based on specific needs. For example, risks can be identified as pre-deployment or post-deployment issues, or categorized under domains like misinformation. The taxonomies can also be combined to show how causal factors such as entity, intent, and timing relate to each risk domain; this lets users explore, for instance, intentional and unintentional variations of discrimination and toxicity risks.
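
Because the repository is a spreadsheet, these filters are easy to script. Below is a minimal sketch using pandas, assuming the risk database tab has been exported to CSV; the file name and column names (Timing, Domain, Intent) are illustrative assumptions, so match them to the headers in your copy.

```python
import pandas as pd

# Load a CSV export of the repository's risk database tab.
# NOTE: the file name and column names here are assumptions; adjust
# them to match the headers in your copy of the spreadsheet.
risks = pd.read_csv("ai_risk_repository.csv")

# Example: post-deployment misinformation risks, e.g. to scope a red team exercise.
subset = risks[
    risks["Timing"].str.contains("Post-deployment", na=False)
    & risks["Domain"].str.contains("Misinformation", na=False)
]

print(f"{len(subset)} matching risks")
print(subset[["Domain", "Intent", "Timing"]].head())
```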

What the MIT AI Risk Repository Team Has Done

The researchers systematically reviewed and synthesized existing classifications, taxonomies, and frameworks of AI risks. They employed a multi-step process involving systematic searches, expert consultations, and iterative coding to identify and structure 777 risks across 43 documents. A best-fit framework synthesis was applied to develop two intersecting taxonomies (sketched in code after the list):

  1. Causal Taxonomy: A high-level categorization based on three causal factors: entity (human, AI, or other), intent (intentional or unintentional), and timing (pre-deployment or post-deployment).
  2. Domain Taxonomy: A mid-level categorization that organizes risks into seven primary domains, such as discrimination, privacy, misinformation, and AI system failures, further divided into 23 subdomains.
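
To make the two taxonomies concrete, here is a minimal sketch of how a red team might encode them to tag findings. The category labels mirror the taxonomy described above; the class and field names (Finding, title, and so on) are our own illustration, not part of the repository.

```python
from dataclasses import dataclass
from enum import Enum

# Causal Taxonomy: three high-level causal factors.
class Entity(Enum):
    HUMAN = "Human"
    AI = "AI"
    OTHER = "Other"

class Intent(Enum):
    INTENTIONAL = "Intentional"
    UNINTENTIONAL = "Unintentional"

class Timing(Enum):
    PRE_DEPLOYMENT = "Pre-deployment"
    POST_DEPLOYMENT = "Post-deployment"

# Domain Taxonomy: two of the seven primary domains, shown for brevity;
# the full taxonomy further divides these into 23 subdomains.
class Domain(Enum):
    DISCRIMINATION_TOXICITY = "Discrimination & toxicity"
    MISINFORMATION = "Misinformation"

@dataclass
class Finding:
    title: str
    entity: Entity
    intent: Intent
    timing: Timing
    domain: Domain

# Tagging a hypothetical pen test finding against both taxonomies.
finding = Finding(
    title="Prompt injection elicits toxic output from a chatbot",
    entity=Entity.HUMAN,
    intent=Intent.INTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
    domain=Domain.DISCRIMINATION_TOXICITY,
)
print(finding)
```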

What the MIT AI Risk Repository Team Found

The analysis revealed several insights into the AI risk landscape:

  • Dominance of AI-Caused Risks: A slight majority of risks (51%) were attributed to AI systems rather than human actions (34%).
  • Timing of Risks: A majority of risks (65%) were found to emerge after AI deployment, with only 10% identified as pre-deployment risks.
  • Top Risk Domains: The most frequently covered domains in existing documents were AI system safety and limitations (76%), socioeconomic and environmental harms (73%), and discrimination and toxicity (71%).
  • Underexplored Areas: Certain domains, such as AI welfare and rights, and risks related to consensus reality and competitive dynamics, were identified as underrepresented in existing literature.
  • Intersecting Factors: The most common intersection was unintentional, post-deployment risks caused by AI systems, highlighting the need for robust post-deployment monitoring and safeguards; the sketch after this list shows one way to reproduce such counts.
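
Intersections like this are straightforward to recompute against your own copy of the spreadsheet. A minimal sketch, again assuming illustrative file and column names (Entity, Intent, Timing):

```python
import pandas as pd

# File and column names are assumptions; align them with your copy.
risks = pd.read_csv("ai_risk_repository.csv")

# Count risks for every Entity x Intent x Timing combination.
combos = (
    risks.groupby(["Entity", "Intent", "Timing"])
    .size()
    .sort_values(ascending=False)
)

# The top row should correspond to the most common intersection,
# e.g. AI-caused, unintentional, post-deployment risks.
print(combos.head())
```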

This effort has laid the groundwork for a coordinated and comprehensive approach to understanding and mitigating AI risks. 

Progress Over the Last Six Months

Since its launch in August 2024, the repository has grown significantly, expanding its coverage from the original 777 to more than 1,000 documented AI risks. Each risk is categorized by its cause (the entity, intent, and timing involved) and by its risk domain, spanning areas such as privacy and security, misinformation, and AI system safety and limitations.

Key updates include:

  1. Enhanced Risk Coverage: The addition of 300 new risks has broadened the repository’s scope, providing deeper insights into emerging vulnerabilities in advanced AI models.

  2. Refined Taxonomies: Improved categorization has made it easier for users to identify specific risks and their potential impacts.

  3. Interactive Tools: New tools, such as scenario simulators and risk assessment templates, help users apply the repository’s insights in their daily work.

  4. Community Contributions: The repository now supports user submissions, fostering collaboration and ensuring it remains up-to-date with real-world developments.

  5. Increased Reach and Adoption: The repository’s website, airisk.mit.edu, has received about 90,000 hits and is linked to by approximately 2,000 other websites. Several governments and large companies have incorporated it into their AI risk management processes, and it has also been used to classify incidents of harm from AI in related projects.

  6. New AI Risk Frameworks: In December 2024, 13 new AI risk frameworks were added to the repository, incorporating user and expert suggestions. These additions further enrich the database, ensuring it reflects the latest challenges and solutions in AI security.

Why It Matters

The repository is a practical tool for identifying vulnerabilities and designing effective penetration tests. By categorizing risks and offering actionable insights, it enables practitioners to:

  • Streamline risk assessments.
  • Build more targeted and efficient security strategies.
  • Stay ahead of emerging threats in AI systems.

Streamline Red Teaming with Mindgard

As generative AI platforms become more embedded in business operations, the complexity of their security risks increases. Threat actors exploit vulnerabilities in AI through adversarial attacks, data poisoning, model misuse, and other techniques. To address these challenges, red teaming training and certification are essential for equipping security professionals with the skills to proactively identify and mitigate risks.

While databases and training provide foundational knowledge, combining them with a specialized tool like Mindgard takes AI red teaming to the next level. Mindgard enhances the practical application of training, streamlines workflows, and scales red teaming efforts tailored to generative AI platforms. Explore how Mindgard can help secure your AI systems—book a demo today to see it in action.

Frequently Asked Questions

How does this differ from the OWASP LLM Top 10 and MITRE ATLAS?

While the OWASP LLM Top 10 focuses on the most critical security vulnerabilities in large language models and MITRE ATLAS maps adversarial tactics and techniques, the MIT AI Risk Repository offers a broader perspective. It categorizes a wide range of AI risks across multiple domains and deployment stages, providing a comprehensive framework tailored specifically to AI systems, and it complements these resources by addressing risks beyond specific attack scenarios or techniques.

What are the plans for 2025?

In 2025, the repository plans to expand its coverage further, integrating additional user-submitted frameworks and focusing on emerging AI technologies such as autonomous systems and advanced generative models. There are also plans to introduce more interactive tools and guidance tailored to industry-specific applications of AI.

What other red teaming databases exist?

In addition to the MIT AI Risk Repository, notable resources include OWASP's AI security projects, MITRE ATLAS, the AIAAIC Repository (webpage, spreadsheet), which documents incidents and controversies driven by and relating to artificial intelligence, and the Responsible AI Collaborative's AI Incident Database, which catalogs real-world AI harms, alongside proprietary databases maintained by organizations across the AI risk management spectrum. Each resource has its own focus; the MIT AI Risk Repository stands out for its open access and its structured causal and domain taxonomies.