This domain is intended to codify the landscape of threats to an ML system.
- Vulnerability in the system around the model (a traditional vulnerability)
- Supply Chain Compromise: compromising development components of an ML model, e.g. data, model, hardware, and software stack
  - Infected model file
  - Upstream dependency compromise
- Unintended information leakage through an API
  - Cloud model API leaks more information than it needs to
  - Cloud model API isn't sufficiently rate limited
- Intentionally trying to make a model perform poorly
  - The model uses features that are easily gamed by the attacker
  - Insufficient training data: the bypass is not represented in the training data
  - Input data points intentionally supplied to draw mispredictions (potential cause: over-permissive API)
- Directly or indirectly exfiltrating ML artifacts
  - Reconstructing training data through strategic queries
  - Extracting model functionality through strategic queries
- Usage of poisoned data in the ML pipeline
  - Attackers inject poisoned data into the ingest pipeline
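Two of the entries above (an API that is not sufficiently rate limited, and model functionality or training data recovered through strategic queries) share one mitigation: put every client behind a query governor and log what it rejects. The following is an added example written for this page, not code from the taxonomy's authors; it replays constructed request log lines (the format, client names, timestamps and thresholds are all assumptions chosen for illustration) through a sliding-window rate limit plus a lifetime query budget, and reports which clients tripped either control.

```python
"""Query governor for a model-serving API (added example, not from the original page).

Replays constructed request logs through a per-client sliding-window rate
limit and a lifetime query budget, then summarises which clients were
throttled. Log format, client IDs, timestamps and limits are all assumed.
"""
import re
from collections import Counter, defaultdict, deque
from dataclasses import dataclass, field

# Assumed log line shape: "<epoch seconds> client=<id> path=<endpoint>"
LOG_RE = re.compile(r"^(?P<ts>\d+(?:\.\d+)?) client=(?P<client>\S+) path=(?P<path>\S+)$")


def parse_line(line: str):
    """Parse one request log line into (timestamp, client, path)."""
    m = LOG_RE.match(line.strip())
    if not m:
        raise ValueError(f"unparseable log line: {line!r}")
    return float(m["ts"]), m["client"], m["path"]


@dataclass
class QueryGovernor:
    window_seconds: float   # length of the sliding window
    max_per_window: int     # requests allowed inside one window
    max_total: int          # lifetime query budget per client
    # per-client timestamps of accepted requests still inside the window
    _windows: dict = field(default_factory=lambda: defaultdict(deque))
    # per-client count of accepted requests over the client's lifetime
    _totals: dict = field(default_factory=lambda: defaultdict(int))

    def check(self, client: str, ts: float) -> str:
        """Decide one request: 'allow', 'rate_limited' or 'budget_exhausted'.

        Rejected requests are not counted against the budget, so a client
        that is throttled cannot burn its own budget by retrying.
        """
        if self._totals[client] >= self.max_total:
            return "budget_exhausted"
        window = self._windows[client]
        # drop accepted requests that have aged out of the sliding window
        while window and ts - window[0] >= self.window_seconds:
            window.popleft()
        if len(window) >= self.max_per_window:
            return "rate_limited"
        window.append(ts)
        self._totals[client] += 1
        return "allow"


def replay(log_text: str, governor: QueryGovernor) -> dict:
    """Run every log line through the governor; return {client: {decision: count}}."""
    summary = defaultdict(Counter)
    for line in log_text.splitlines():
        if not line.strip():
            continue
        ts, client, _path = parse_line(line)
        summary[client][governor.check(client, ts)] += 1
    return {client: dict(counts) for client, counts in summary.items()}


def flagged_clients(summary: dict) -> list:
    """Clients that received at least one non-'allow' decision, sorted by name."""
    return sorted(c for c, counts in summary.items() if any(k != "allow" for k in counts))


# Constructed sample log: "normal" queries occasionally, "scraper" bursts.
SAMPLE_LOG = """\
1700000000.0 client=normal path=/predict
1700000001.0 client=scraper path=/predict
1700000001.2 client=scraper path=/predict
1700000001.4 client=scraper path=/predict
1700000001.6 client=scraper path=/predict
1700000001.8 client=scraper path=/predict
1700000002.0 client=scraper path=/predict
1700000002.2 client=scraper path=/predict
1700000012.0 client=scraper path=/predict
1700000013.0 client=scraper path=/predict
1700000015.0 client=normal path=/predict
1700000030.0 client=normal path=/predict
"""

if __name__ == "__main__":
    gov = QueryGovernor(window_seconds=10.0, max_per_window=5, max_total=6)
    result = replay(SAMPLE_LOG, gov)
    for client, counts in result.items():
        print(client, counts)
    print("flagged:", flagged_clients(result))
```

With the assumed limits (5 requests per 10-second window, 6 requests in total), the scraper's burst is throttled after its fifth request, its later request is accepted once the window has slid, and the one after that exhausts the budget; the normal client is never touched. The per-client decision counts are the signal a detection pipeline would alert on, and the budget exists because a sufficiently patient client can stay under any per-window limit indefinitely.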
A number of these categories map directly to techniques codified in MITRE ATLAS. In the future, we intend to cover the full landscape of adversarial ML attacks under the Security domain.