Learning with Mindgard
Mindgard has deep roots in R&D, and we believe in publishing the results of this work as well as regularly providing educational content.
Model Leeching: An Extraction Attack Targeting LLMs
Model Leeching is a novel extraction attack targeting Large Language Models (LLMs), capable of distilling task-specific knowledge from a target LLM into a reduced-parameter model. We demonstrate the effectiveness of our attack by extracting task capability from ChatGPT-3.5-Turbo, achieving 73% Exact Match (EM) similarity, and SQuAD EM and F1 accuracy scores of 75% and 87%, respectively, for only $50 in API cost. We further demonstrate the feasibility of adversarial attack transferability, using a model extracted via Model Leeching to stage ML attacks against a target LLM, resulting in an 11% increase in attack success rate when applied to ChatGPT-3.5-Turbo.
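The core loop behind such an extraction attack can be sketched in a few lines. This is an illustrative toy, not the paper's actual code: the `teacher` function stands in for API calls to the target LLM, the `Student` class stands in for fine-tuning a reduced-parameter model, and all names and data here are hypothetical.

```python
def teacher(question):
    # Stand-in for an API call to the target LLM (e.g., ChatGPT-3.5-Turbo).
    answers = {"capital of France?": "Paris", "2+2?": "4"}
    return answers.get(question, "unknown")

def build_dataset(questions):
    # Step 1: query the target model and record its answers as labels.
    return [(q, teacher(q)) for q in questions]

class Student:
    # Step 2: a trivial "student" that memorizes the distilled labels;
    # a real attack fine-tunes a smaller transformer on them instead.
    def __init__(self):
        self.memory = {}

    def train(self, dataset):
        for q, a in dataset:
            self.memory[q] = a

    def predict(self, question):
        return self.memory.get(question, "")

def exact_match(student, dataset):
    # Step 3: EM score = fraction of answers identical to the teacher's.
    hits = sum(student.predict(q) == a for q, a in dataset)
    return hits / len(dataset)

data = build_dataset(["capital of France?", "2+2?"])
student = Student()
student.train(data)
print(exact_match(student, data))
```

The essential point is that the attacker never sees the target's weights: only question-answer pairs are collected, yet they are enough to train a substitute model whose behavior, and vulnerabilities, track the original's.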
Compilation as a Defense: Enhancing DL Model Attack Robustness via Tensor Optimization
PINCH: An Adversarial Extraction Attack Framework for Deep Learning Models
How to build B2B SaaS landing pages that actually convert
Well-crafted landing pages pay dividends. They’re often made out to be an enigma, but they don’t need to be complicated.