Dr. Peter Garraghan

Adversarial Machine Learning (AML) is a rapidly growing field of security research; one often-overlooked area is attacks on models through side-channels.

Previous work has shown such attacks to be serious threats, yet little progress has been made on efficient remediation strategies that avoid costly model re-engineering.
This work demonstrates a new defense against AML side-channel attacks using model compilation techniques, namely tensor optimization. We show that tensor optimization reduces relative model attack effectiveness by up to 43%, discuss the implications, and outline directions for future work.
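To make the idea concrete, here is a minimal sketch of what tensor-optimizing compilation of a trained model can look like, using Apache TVM as an example tensor compiler. This is an illustration only, not the pipeline used in the work: the ONNX front-end, the file name model.onnx, the input name and shape, the LLVM CPU target, and opt_level=3 are all assumptions made for this sketch.

```python
# Illustrative sketch: compiling a model with tensor-level optimizations
# via Apache TVM. All names and parameters below are assumptions for the
# example, not details drawn from the published work.
import onnx
import tvm
from tvm import relay

# Load a trained model (hypothetical path).
onnx_model = onnx.load("model.onnx")

# Import into TVM's Relay IR; input name and shape are illustrative.
mod, params = relay.frontend.from_onnx(
    onnx_model, shape={"input": (1, 3, 224, 224)}
)

# opt_level=3 enables aggressive tensor optimizations such as operator
# fusion and layout transformation, which change the model's execution
# profile and, with it, the side-channel signature an attacker observes.
target = tvm.target.Target("llvm")
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Export the optimized artifact for deployment.
lib.export_library("model_optimized.so")
```

The intuition is that the compiler rewrites the model's computation graph and kernel schedules, so timing and resource-usage patterns no longer map cleanly onto the original architecture, raising the cost of side-channel inference without retraining the model.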
Access the complete insights in Enhancing DL Model Attack Robustness.
Thank you for reading our research on Compilation as a Defense!