Introducing AVID v0.1

 · 3 min · Subho Majumdar

The First AVID Taxonomy and Database are now Live!


We at AI Vulnerability Database (AVID) are proud to unveil the very first release (v0.1) of our taxonomy and database of AI failures to the public. This was the result of many hours of collaboration, debate, and meetings inside and outside the AVID community. I am grateful to everyone who contributed to this effort.

As a short introduction to AVID, we have two main focus areas: the Taxonomy and the Database.

Taxonomy

This is intended to serve as a common foundation for data science/AI engineering, product, and policy teams to manage potential risks at different stages of developing an AI system. In spirit, this taxonomy is analogous to MITRE ATT&CK for cybersecurity vulnerabilities, and MITRE ATLAS for adversarial attacks on AI systems.

The AVID Taxonomy has two views: an effect view, aimed at the auditor persona, and a lifecycle view, aimed at the developer persona. The effect view has three domains of harm: Security, Ethics, and Performance. Each domain is divided into multiple categories and subcategories. The lifecycle view represents the sequential steps of a typical AI development workflow, adapted from the CRISP-DM framework.
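
To make this concrete, here is a minimal sketch, in Python, of how a single effect-view tag could be represented: a domain, a category within that domain, and a subcategory. The code and the names in it are illustrative assumptions, not the official taxonomy or its schema; the actual categories are listed at the link below.

```python
from dataclasses import dataclass

# Illustrative only: the names and values below are assumptions,
# not canonical AVID taxonomy entries.

@dataclass
class EffectTag:
    domain: str       # one of "Security", "Ethics", "Performance"
    category: str     # a grouping within the domain
    subcategory: str  # the most specific label a report would carry

example_tag = EffectTag(
    domain="Ethics",
    category="Bias/Discrimination",            # hypothetical category name
    subcategory="Discrimination by language",  # hypothetical subcategory name
)
```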

For more details about the taxonomy, go to: https://avidml.org/taxonomy.

Database

This houses full-fidelity information (model metadata, harm metrics, measurements, benchmarks, and mitigation techniques, if any) on evaluated examples of the harm (sub)categories defined by the taxonomy. The aim is transparent, reproducible evaluations that give practitioners a path to implementation.

The database is organized into two layers:

  1. Vulnerability: high-level evidence of an AI failure mode, in line with the NIST CVEs
  2. Report: one example of a particular vulnerability occurring, supported by qualitative or quantitative evaluation. Based on the references provided in a specific report, reports can potentially be more granular and reproducible than vulnerabilities (a sketch of how the two layers fit together follows this list).
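
As a rough illustration of how these two layers relate, here is a minimal sketch in Python of a vulnerability record with one attached report. Every field name and value is an assumption made for illustration, not the actual AVID schema; see the database link below for real entries.

```python
from dataclasses import dataclass, field

# Illustrative only: field names and values are assumptions, not the AVID schema.

@dataclass
class Report:
    """One concrete instance of a vulnerability, backed by evidence."""
    description: str
    references: list[str]      # papers, posts, or repos containing the evaluation
    metrics: dict[str, float]  # quantitative measurements, if available

@dataclass
class Vulnerability:
    """High-level evidence of an AI failure mode."""
    avid_id: str                # placeholder identifier format
    affected_models: list[str]  # model metadata, simplified here to names
    taxonomy_tags: list[str]    # harm (sub)categories from the taxonomy
    reports: list[Report] = field(default_factory=list)

# A vulnerability aggregates reports; each report is one (ideally reproducible)
# example of the failure actually occurring.
vuln = Vulnerability(
    avid_id="AVID-2022-XXXX",         # made-up placeholder, not a real entry
    affected_models=["some-large-language-model"],
    taxonomy_tags=["Ethics / Bias"],  # hypothetical tag, not a canonical code
)
vuln.reports.append(Report(
    description="Model produces biased completions on a templated prompt set.",
    references=["https://example.org/evaluation-writeup"],
    metrics={"bias_score": 0.42},     # made-up number for illustration
))
```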

We are starting small with 13 vulnerabilities and 5 reports, but will expand quickly.

To check out the database, and vulnerabilities and reports therein, go to: https://avidml.org/database.

How you can help

If there’s an AI failure you’d like to see included in AVID (e.g. an example of a large language model malfunctioning), you can submit it today using our Vulnerability Reporting Form or by opening a pull request in the GitHub repository housing all vulnerabilities and reports.

You can collaborate with us by joining our Discord server, and through the other avenues listed here.

About the Author

Subho Majumdar is a technical leader in applied trustworthy machine learning who believes in a community-centric approach to data-driven decision making. He has pioneered the use of responsible ML methods in industry settings, written a book, and founded multiple nonprofit efforts in this area: TrustML, Bias Buccaneers, and AVID. Currently, Subho is an ML scientist at Twitch, where he leads applied science efforts in Responsible AI.