The UK government commissioned Mindgard to conduct a systematic study identifying recommendations for addressing cyber security risks to Artificial Intelligence (AI).
We used a systematic search method to review data sources across multiple domains, identifying recommendations and evidence of cyber risks against AI from academia, technology companies, government bodies, cross-sector initiatives (e.g. OWASP), news articles, and technical blogs.
The review also examined common themes and knowledge gaps within AI security remediation actions.
Key findings of the report include:
We found sufficient evidence that many of the reported cyber security risks to AI strongly justify the need to identify, create, and adopt new recommendations to address them.
Many of the recommendations for securing AI are based on established cyber security practices, and various conventional cyber security recommendations are directly or indirectly applicable to AI.
Many recommendations are derived from a small number of unique data sources, there are limited empirical studies of AI security vulnerabilities being exploited in real-world cyber attacks, and there is a lack of information on how to enact the recommendations described.
As NIST notes: "Currently, there is no approach in the field of machine learning that can protect against all the various adversarial attacks."