
Security

This domain is intended to codify the landscape of threats to an ML system.

| ID | Name | Description |
| --- | --- | --- |
| S0100 | Software Vulnerability | A traditional vulnerability in the system surrounding the model |
| S0200 | Supply Chain Compromise | Compromise of the development components of an ML model, e.g. data, model, hardware, and software stack |
| S0201 | Model Compromise | An infected model file |
| S0202 | Software Compromise | An upstream dependency compromise |
| S0300 | Over-permissive API | Unintended information leakage through the API |
| S0301 | Information Leak | The cloud model API leaks more information than it needs to |
| S0302 | Excessive Queries | The cloud model API is not sufficiently rate limited |
| S0400 | Model Bypass | Intentionally trying to make a model perform poorly |
| S0401 | Bad Features | The model uses features that are easily gamed by an attacker |
| S0402 | Insufficient Training Data | The bypass is not represented in the training data |
| S0403 | Adversarial Example | Input data points intentionally supplied to cause mispredictions. Potential cause: over-permissive API |
| S0500 | Exfiltration | Directly or indirectly exfiltrating ML artifacts |
| S0501 | Model Inversion | Reconstructing training data through strategic queries |
| S0502 | Model Theft | Extracting model functionality through strategic queries |
| S0600 | Data Poisoning | Use of poisoned data in the ML pipeline |
| S0601 | Ingest Poisoning | Attackers inject poisoned data into the ingest pipeline |
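Some of these categories admit straightforward checks. For S0201 (Model Compromise), for example, pinning a trusted digest for each model artifact and verifying it before loading catches a swapped or infected file. The sketch below is illustrative only; the file name and digest source are assumptions, not part of this taxonomy:

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_model(path: Path, expected_digest: str) -> bool:
    """Compare a model file's digest against a trusted value obtained out of band."""
    return sha256_of(path) == expected_digest


# Illustrative stand-in for a downloaded model artifact (hypothetical file).
model = Path("model.bin")
model.write_bytes(b"example weights")
trusted = hashlib.sha256(b"example weights").hexdigest()

print(verify_model(model, trusted))   # matching digest: file is what was pinned
print(verify_model(model, "0" * 64))  # mismatch: possible tampering, refuse to load
```

The key design point is that the expected digest must come from a channel the attacker cannot also control (e.g. a signed release manifest), otherwise the check verifies nothing.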

NOTE
A number of these categories map directly to techniques codified in MITRE ATLAS. In the future, we intend to cover the full landscape of adversarial ML attacks under the Security domain.