Dr. Peter Garraghan

Adversarial Machine Learning (AML) is a rapidly growing field of security research, and attacks on models through side-channels remain an often overlooked area.

Previous work has shown such attacks to be serious threats, yet little progress has been made on efficient remediation strategies that avoid costly model re-engineering.
This work demonstrates a new defense against AML side-channel attacks using model compilation techniques, namely tensor optimization. We show that tensor optimization reduces relative attack effectiveness by up to 43%, and we discuss the implications and directions for future work.
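To give intuition for why compilation can act as a defense, here is a minimal, hypothetical sketch (not the paper's implementation, and independent of any specific compiler): DL compilers commonly fuse chains of tensor operators into single kernels, which reduces the number of per-operator events an attacker can observe in a timing or cache side-channel trace. The toy "model", layer names, and trace format below are all invented for illustration.

```python
# Illustrative sketch only: operator fusion, a standard tensor
# optimization in DL compilers, collapses many observable execution
# events into one, degrading an attacker's ability to recover model
# architecture from a side-channel trace. All names are hypothetical.

def run_naive(layers, x, trace):
    # One kernel per layer: each op leaves a separate observable
    # event in the attacker's trace.
    for name, fn in layers:
        x = fn(x)
        trace.append(name)          # attacker sees one event per layer
    return x

def run_fused(layers, x, trace):
    # A compiler fuses the whole chain into a single kernel:
    # the attacker observes only one coarse event.
    def fused(v):
        for _, fn in layers:
            v = fn(v)
        return v
    x = fused(x)
    trace.append("fused_kernel")    # single event for the whole chain
    return x

# Toy 3-layer "model": scale, shift, clamp (stand-ins for conv/bias/ReLU).
layers = [
    ("conv", lambda v: v * 2),
    ("bias", lambda v: v + 1),
    ("relu", lambda v: max(v, 0)),
]

naive_trace, fused_trace = [], []
y1 = run_naive(layers, 3, naive_trace)
y2 = run_fused(layers, 3, fused_trace)
assert y1 == y2 == 7                # the computed result is unchanged
# The naive trace exposes per-layer structure; the fused trace does not.
print(len(naive_trace), len(fused_trace))  # 3 1
```

The defense preserves the model's input-output behavior while changing only how the computation is scheduled, which is what makes it cheaper than re-engineering the model itself.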
Thank you for reading our research about Compilation as a Defense!