
VulnAI: Vulnerability Discovery and Disclosure for Responsible AI

About

The goal of this workshop is to build a bridge between the ML research community and the policy conversation about the need for actionable disclosures of vulnerabilities in ML systems. A number of open questions about discovering and reporting failure modes in ML pipelines remain unanswered, and this poses a barrier to model development, deployment, and real-world adoption. To address this need, the workshop covers the current landscape of ML risks and failure modes across multiple domains and deployment stages by bringing together diverse expert communities, fostering nascent research, and seeding the practice of structured vulnerability reporting in the ML context. We plan to achieve this through a slate of activities, including a dual submission track of research papers and collaborative tasks, a panel discussion, breakout sessions, and oral and poster presentations.

Information

This workshop is proposed to be held during AAAI-2024.

The VulnAI workshop has the following core goals:

  • Bring together diverse expert communities to discuss challenging questions relating to risks, vulnerabilities, and failures of ML models and datasets.
  • Establish a shared state-of-the-art knowledge base of failure modes for ML models, datasets, and systems that is created not by one group, but by the broader community.

To reach these goals, we welcome two types of submissions: regular workshop submissions and vulnerability report submissions. The latter will consist of a vulnerability report artefact and a companion paper motivating and evaluating the submission. In both cases, we accept archival papers and extended abstracts.

Regular workshop submission

Towards our first goal, we invite paper submissions on topics related to risks and vulnerabilities in ML. Such submissions present work on the topic of the workshop (see examples listed below) and are not intended for inclusion in the AVID knowledge base. Regular workshop papers may be submitted either as a full paper (7 pages + unlimited references) reporting completed, original, and unpublished research, or as a shorter extended abstract (4 pages + unlimited references).

Topics of interest include, but are not limited to:

  • Position papers on taxonomies of ML risks, harms, vulnerabilities and failures;
  • Empirical studies that propose new paradigms to evaluate risks and vulnerabilities of ML models;
  • Meta-analyses comparing how existing risk taxonomies relate to or differ from one another;
  • Papers that discuss why, when and how evaluating the risks and vulnerabilities of ML models is (not) important.

Vulnerability report submission

To achieve the second goal of our workshop—establishing a shared knowledge base of ML model failure modes—we organise a collaborative task, similar in spirit to the BIG-Bench, Super-NaturalInstructions or NL-Augmenter initiatives, but focusing specifically on ML risks and vulnerabilities. We invite researchers to submit reports of significant or overlooked vulnerabilities—measurable flaws and weaknesses that may cause harmful failures or violations of responsible AI policy—to the open-source AVID database.

The submission consists of one or more vulnerability reports, plus a paper that describes and motivates the submission, showcasing the effect of the vulnerability on a select number of models and discussing the potential impact and consequences. To submit, the authors send the vulnerability report(s), prepared using a pre-supplied template, for validation by the VulnAI team via a Pull Request to the AVID GitHub repository. The accompanying paper (7 pages + unlimited references) should reference the number of the corresponding PR.
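To give a concrete sense of what a report artefact might contain, below is a minimal sketch in Python of a hypothetical report structure serialized to JSON. The field names are illustrative assumptions only and do not reflect the actual AVID template; authors should follow the template supplied in the AVID GitHub repository.

```python
# Illustrative sketch only: the field names below are placeholders, not the
# actual AVID report template. Consult the pre-supplied template in the AVID
# GitHub repository when preparing a real submission.
import json

hypothetical_report = {
    "affected_artifacts": ["example-org/example-model"],   # models or datasets exhibiting the flaw
    "vulnerability_class": "example: distribution shift",  # placeholder category, not an AVID taxonomy entry
    "description": "Short summary of the measurable flaw and how it was observed.",
    "evidence": {
        "evaluation_setup": "datasets, metrics, and code used to demonstrate the failure",
        "results": "quantitative results supporting the claim",
    },
    "potential_impact": "harms or responsible-AI policy violations that could follow",
    "references": ["link to the companion paper and the accompanying PR"],
}

# Serialize for inclusion in a Pull Request (the actual format depends on the AVID template).
print(json.dumps(hypothetical_report, indent=2))
```

The intent of such a structure is that the report is machine-readable and self-contained, while the companion paper carries the motivation and evaluation in prose.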

Organizers