AVID Response to the NIST AI RMF

3 min · Nathan Butters

Public Comments from AVID on the NIST AI Risk Management Framework


On August 18th, 2022 the National Institute of Standards and Technology (NIST) released the second draft of its framework to manage the risks of artificial intelligence (AI), along with an associated playbook to help operationalize the framework. NIST intends the AI Risk Management Framework (AI RMF) to support the voluntary consideration and mitigation of the risks and harms posed by AI to individuals, corporations, and society. NIST has asked for public comments in an effort to incorporate diverse perspectives and build a meaningful consensus.

AVID (the AI Vulnerability Database), represented by Subho Majumdar, Sven Cattell, and Nathan Butters, recently provided public comments on the AI RMF. AVID and its members represent diverse interests among professionals in the field as well as community members interested in the practical application of AI in a global context. We are grateful to NIST for the opportunity to comment and look forward to the next iteration of the AI RMF and its associated playbook.

If you would like to read our full response, feel free to check it out.

Summary of our comments on the AI RMF

We start by identifying areas where the AI RMF could be strengthened to support risk mitigation throughout the AI development lifecycle, focusing first on specific sections of the framework:

  • The target audience of the AI RMF, as outlined in Section 2, is restricted to corporations with substantial staffing and monetary resources. This fundamental choice creates a significant roadblock to operationalizing the framework by excluding, implicitly or explicitly, other organizations that create, deploy, and use AI in a variety of forms.
  • The document does not do enough to explicitly connect AI to traditional security concerns, or to address how those concerns should be adapted. Sections 4.4 (Secure and Resilient) and 4.6 (Explainable and Interpretable) are the core of our concern, because the document fails to address the complexity of these characteristics in a broader security context.

We follow these comments with a deeper perspective on Community, Taxonomy, and Documentation. These areas are of particular importance to our efforts and represent a more general statement about the nature of the AI RMF and the work ahead. Each section has an associated summary:

  • Community: The AI RMF will benefit from the perspectives of diverse actors from affected communities, both as guidance within the document and as a target audience, helping those communities manage the risks they face from the use of AI.

  • Taxonomy: AVID aims to build a community-driven taxonomy of AI risks, expressed as vulnerabilities and harms, to support the advancement of the Responsible AI (RAI) field (a sketch of what one entry might look like follows this list).

  • Documentation: The AI RMF will benefit the RAI field by establishing a definition of AI vulnerabilities that expands on the current CVE definition of a vulnerability.
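
To make the Taxonomy point concrete, here is a minimal sketch in Python of what a single entry in such a vulnerability taxonomy might look like. The record structure, field names, and example values are our own illustrative assumptions, not AVID's published schema.

```python
from dataclasses import dataclass, field


@dataclass
class VulnerabilityReport:
    """Illustrative record for one AI vulnerability entry (hypothetical schema)."""
    report_id: str                 # placeholder identifier format, e.g. "AVID-2022-XXXX"
    description: str               # plain-language summary of the failure mode
    affected_artifacts: list[str]  # models, datasets, or systems implicated
    risk_domains: list[str]        # taxonomy tags, e.g. security / ethics / performance
    harms: list[str]               # downstream harms, beyond the technical flaw itself
    references: list[str] = field(default_factory=list)  # papers, CVEs, incident reports


# Example entry with hypothetical values: a data-poisoning style report.
report = VulnerabilityReport(
    report_id="AVID-2022-XXXX",
    description="Training data poisoning degrades a sentiment model's accuracy "
                "on a targeted demographic group.",
    affected_artifacts=["example-org/sentiment-model-v1"],
    risk_domains=["security", "ethics"],
    harms=["discriminatory performance degradation"],
)
print(report.report_id, report.risk_domains)
```

A structure along these lines would let one report capture both the technical vulnerability, in the security framing familiar from CVE, and the downstream harms, which is the kind of expanded definition our Documentation comment argues for.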

Please see our detailed response, covering these areas as well as some direct refinements of the current version of the AI RMF, here.

As we build AVID to operationalize the AI RMF and similar responsible ML principles, we welcome contributions from the community. Reach out to us by joining our Discord server, or through the interest form here.

About the Author

Nathan Butters is a product manager based in Seattle, Washington. He is passionate about using his proximity to power to support the voices of those in the Global South. His interests bridge the humanities and computer science, focusing on what humans believe, and why, in the context of how systems are built, understood, and contested. He collaborates with others to build tools and inform policy decisions aimed directly at promoting the flourishing of all life on Earth (not just humans) in a world full of AI.