Report: Cybersecurity for AI Recommendations

Cybersecurity for AI Recommendations

A Study of Recommendations to Address Cybersecurity Risks to AI by Mindgard.

The UK government commissioned Mindgard to conduct a systematic study identifying recommendations for addressing cyber security risks to Artificial Intelligence (AI).

We used a systematic search method to review data sources across multiple domains, gathering recommendations and evidence of cyber risks to AI from academia, technology companies, government bodies, cross-sector initiatives (e.g. OWASP), news articles, and technical blogs.

The review also examined common themes and knowledge gaps in AI security remediation actions.

Key findings of the report include:

  • We found sufficient evidence that many of the reported cyber security risks to AI justify the need to identify, create, and adopt new recommendations to address them.
  • Many of the recommendations for securing AI are based on established cyber security practices, and many conventional cyber security recommendations are directly or indirectly applicable to AI.
  • Many recommendations are derived from a small number of unique data sources, there are few empirical studies of AI security vulnerabilities being exploited in real cyber attacks, and there is little information on how to enact the recommendations described.

NIST: “Currently, there is no approach in the field of machine learning that can protect against all the various adversarial attacks.”

Report Highlights

Snapshot

AI Vulnerabilities

  1. Model Poisoning
  2. Model Inversion
  3. Model Extraction
  4. Model Evasion
  5. Model Backdoor
  6. LLM Prompt Jailbreak
  7. LLM Prompt Injection
  8. ML Supply Chain Compromise
  9. Adversarial Perturbations (illustrated in the sketch below)
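
To make item 9 concrete, here is a minimal sketch of an adversarial perturbation using the fast gradient sign method (FGSM). This code is not from the report; the model, input tensor, label, and step size eps are illustrative assumptions.

    # Minimal FGSM sketch: one signed-gradient step that nudges an input
    # toward higher loss, so small changes can flip a model's prediction.
    # 'model', 'x', 'label', and 'eps' are hypothetical placeholders.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, label, eps=0.03):
        """Return a copy of x perturbed by one FGSM step of size eps."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)   # classification loss
        loss.backward()                           # gradient w.r.t. the input
        x_adv = x + eps * x.grad.sign()           # step along the gradient sign
        return x_adv.clamp(0.0, 1.0).detach()     # keep values in a valid range

Even a simple one-step attack like this is difficult to defend against comprehensively, which is the concern behind the NIST quotation above.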

Recommendations

  1. Establish an organizational framework to mitigate cyber security risks in AI models.

  2. Develop company-wide policies that address AI-related cyber security concerns.

  3. Implement tools and governance frameworks to oversee AI security practices.

  4. Adopt security practices ('security hygiene') based on expertise and experience from academia, industry, and government.

Conclusions

  1. Cross-sector initiatives for sharing recommendations are encouraging.

  2. Recommendations suffer from limited empirical grounding, and knowledge gaps remain in AI security.

  3. AI security is an unsolved, evolving area of research.

  4. This study provides an overview, not an exhaustive list, of AI security recommendations and trends.

Continuous Automated Red Teaming for AI

We empower enterprise security teams to deploy AI and GenAI securely.
