On June 12, the AI Risk and Vulnerability Alliance (ARVA), the parent organization of AVID, submitted a response to a request for comments from the National Telecommunications and Information Administration (NTIA). The NTIA is the U.S. federal agency charged with advising the President on telecommunications and information policy issues. Its request for comments focused on “AI Accountability Policy”, which encompasses AI audits, certifications, assessments, and similar mechanisms that “can help provide assurance that an AI system is trustworthy.”
This was not a request for comment on a draft policy; no such draft exists. Instead, the agency sought responses to a broad set of questions, designed to gather information and opinions about AI accountability practices to help inform policy development.
Commenters were invited to respond to any or all of the 34 provided questions, or to respond to a relevant question that wasn’t asked. ARVA—represented by Carol Anderson, Borhane Blili-Hamelin, Nathan Butters, and Subho Majumdar—responded to two questions closely related to our mission.
Summary of our comments
Question 11: What lessons can be learned from accountability processes and policies in cybersecurity, privacy, finance, or other areas?
Our answer here focuses on the handling of cybersecurity vulnerabilities as a model for documenting and sharing knowledge about problems in AI systems. In the cybersecurity ecosystem, standardized documentation and centralized databases of threats and failures have proved immensely valuable in enabling efficient sharing of information among practitioners and researchers. These include, for example, the Common Vulnerabilities and Exposures (CVE) system, which provides a standardized naming scheme for vulnerabilities, and centralized, public databases of vulnerabilities and exploits, such as NIST’s National Vulnerability Database (NVD).
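As an illustration of what this kind of shared infrastructure makes possible, the minimal sketch below retrieves a single record from the NVD's public JSON API. We are assuming the v2.0 endpoint and the response field names documented by NIST; check https://nvd.nist.gov/developers before relying on them.

```python
# Minimal sketch: fetch one CVE record from NIST's public NVD API.
# The endpoint, query parameter, and response fields reflect the NVD API
# v2.0 as we understand it; treat them as assumptions, not a guarantee.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cve(cve_id: str) -> dict:
    """Return the raw NVD record for a single CVE identifier."""
    resp = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    vulns = resp.json().get("vulnerabilities", [])
    return vulns[0]["cve"] if vulns else {}

record = fetch_cve("CVE-2021-44228")  # Log4Shell, as an example
print(record.get("id"))
descriptions = record.get("descriptions", [])
if descriptions:
    print(descriptions[0].get("value"))
```

Because every tool in the ecosystem can resolve the same identifier to the same description and severity information, practitioners spend their effort on remediation rather than on reconciling ad hoc reports.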
We believe these resources should serve as models for developing analogous mechanisms in the field of AI. Vulnerabilities in AI systems go well beyond security (vulnerability to intentional exploits) to include silent, unintentional failures pertaining to human rights, discrimination, reliability, measurability, transparency, and misuse. We emphasize the need to capture and share information about such failures, which is AVID’s mission.
We highlight the following features that, in our view, should be core components of any open-source database for AI vulnerabilities (a rough sketch of what a single report might look like follows the list):
- A comprehensive taxonomy of AI risks to enable classification of vulnerabilities along both social and technical dimensions.
- Incident and vulnerability reports including severity scores and remediation techniques, when available.
- A flexible, standard technical infrastructure to enable organizations developing AI to interface with and build upon the database.
- A robust adjudication process for AI vulnerabilities, including multi-stakeholder standard setting and an editorial process that is accountable and transparent to outside parties.
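To make these components more concrete, here is a rough, hypothetical sketch of what a single record in such a database could look like. The field names and taxonomy labels below are our own illustrative assumptions, not AVID's published schema.

```python
# Illustrative sketch only: a hypothetical record format for an open AI
# vulnerability database. Field names and taxonomy labels are assumptions
# made for illustration, not AVID's actual schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AIVulnerabilityReport:
    report_id: str                        # stable, citable identifier
    title: str
    description: str
    affected_artifacts: List[str]         # models, datasets, or systems involved
    social_risk_tags: List[str] = field(default_factory=list)     # e.g. "discrimination"
    technical_risk_tags: List[str] = field(default_factory=list)  # e.g. "data poisoning"
    severity: Optional[float] = None      # scored on an agreed scale, when available
    remediation: Optional[str] = None     # mitigation guidance, when known
    status: str = "submitted"             # editorial/adjudication state

report = AIVulnerabilityReport(
    report_id="EXAMPLE-2023-0001",
    title="Toxic completions under benign prompts",
    description="A text generation model produces harmful content for ordinary inputs.",
    affected_artifacts=["example-org/example-model"],
    social_risk_tags=["toxicity"],
)
```

A machine-readable format along these lines is what would let organizations interface with the database programmatically, while the taxonomy tags, severity score, and editorial status support classification, prioritization, and a transparent adjudication process.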
Question 23: How should AI accountability “products” (e.g., audit results) be communicated to different stakeholders? Should there be standardized reporting within a sector and/or across sectors? How should the translational work of communicating AI accountability results to affected people and communities be done and supported?
While AI audits may cover numerous aspects of AI systems, including governance structures, documentation, and compliance with relevant legal frameworks, our answer here focuses narrowly on the reporting of vulnerabilities (known flaws or weaknesses) in AI systems. We strongly support development of standardized reporting formats and centralized repositories for AI vulnerabilities, as we outlined above.
We also address the issue of whether, when, and how public disclosures should be made. Vulnerability disclosure practices should be standardized across sectors to the extent possible. These decisions must balance the competing needs of the public and organizations. We argue that the concept of ethical disclosure – reporting the vulnerability or results to the company before the public – should be used where there is evidence that the reports will be taken seriously and the problem mitigated. Full disclosure – reporting the vulnerability or results to the public without warning the company – should go through an adjudicating body informed by the relevant ethical considerations.
We also point out the need for such an adjudicating body to translate this work to affected communities in a meaningful way. This work should be guided by three specific objectives:
- Informing communities – make the people affected by the AI system in question aware that they can be targeted or impacted by the vulnerability.
- Preventing future harm – give system developers access to vulnerability information so that they can mitigate the vulnerability or choose safer options for their systems.
- Enabling contestation and dialogue – bring people together to discuss how the AI is being used and who it benefits, offer recourse to those negatively affected, and address inequalities in how the benefits of AI use are distributed.
We appreciate the opportunity to provide these comments to the NTIA, and welcome the agency’s interest in fostering accountability in the AI ecosystem.
Please see our detailed comments here.
As we build AVID to operationalize the principles discussed here, we welcome contributions from the community. Reach out to us by joining our Discord server, or through the interest form here.
About the Author
Carol Anderson is a data scientist and machine learning practitioner with expertise in natural language processing (NLP), biological data, and AI ethics. She previously worked as a data scientist at NVIDIA, ConcertAI, and Ancestry, and prior to that as an academic molecular biologist at UCSF and Indiana University. Currently, her main interest is operationalizing AI ethics and safety, especially in the area of generative AI.