AI Vulnerability Database: 2023 in review

 · 6 min · Subho Majumdar, Nathan Butters, Borhane Blili-Hamelin, Brian Pendleton

A recap of AVID's work in 2023, and a note of gratitude.

Tags: vulnerabilities, ai, community

During the 2022 holiday season, a small group of us coded up the first version of the AVID website, compiled the first set of reports and vulnerabilities, and put together our taxonomy based on discussions over the preceding months. This body of work went live in January 2023. In the months that followed, AI became a household name, and with it the need for rigorous approaches to managing the risks of this emergent technology became obvious to the general public, companies, and governments around the world. AVID provided an outlet for this groundswell of interest. Over the rest of 2023, we launched a slew of public events to channel this interest productively, partnered with like-minded organizations on community efforts, and shipped a number of releases seeding technical resources that we hope will serve AI risk practitioners for years to come.

A Message of Gratitude

Before diving into what we did and achieved in 2023, let's take a moment to acknowledge our collaborators and supporters. It's hard not to pinch ourselves and be in awe of how far we have come in just one year, especially as a purely grassroots effort with no deep pockets backing us. This progress wouldn't have been possible without our friends.

There are too many folks to thank individually, so we'll go by the organizations we have collaborated with. In alphabetical order, we are grateful to:

To the folks at these organizations we've worked with: thank you so much! If we've missed anyone, we apologize; please let us know and we'll include you.

Last but not least, we thank our project leads Carol Anderson, Jekaterina Novikova, and Will Pearce for steering AVID activities, and responsible AI pioneers Rumman Chowdhury and Kush Varshney for their guidance as board members of AVID's parent nonprofit, the AI Risk and Vulnerability Alliance (ARVA). Finally, we thank the growing AVID community for making all of the following happen.

Timeline and Accomplishments

AVID started out as a database and a taxonomy of AI failure modes, but it has grown into much more than that. It's now a living, breathing community of like-minded people interested in managing realistic and concrete risks of AI. Here is a month-by-month summary of what the AVID community accomplished in 2023.

Jan 2023: Released v0.1 of the AVID taxonomy and database
Feb 2023: Organized a community workshop around Ethics and Performance
Mar 2023: Organized a participatory workshop at Mozilla Festival 2023
Apr 2023: ARVA recognized as a United States nonprofit organization
May 2023: Awarded a Magic Grant by the Brown Institute for Media Innovation
Jun 2023: Organized two CRAFT workshops at FAccT 2023 (in collaboration with Hugging Face, UNICEF, and Northwestern); responded to the NTIA AI Accountability Policy Request; our joint research with Data & Society on AI red teaming was funded by Omidyar Network
Jul 2023: Announced integration with garak; released the first version of our Python SDK avidtools and our API; kicked off our taxonomy Working Group
Aug 2023: Partnered with AI Village to organize the largest-ever AI red teaming event; published an op-ed in Tech Policy Press on generative AI red teaming
Oct 2023: Announced integration with Apollo; released a policy brief on AI red teaming with Data & Society
Nov 2023: Released BiasAware as an open-source tool for dataset bias detection
Dec 2023: Our Builders Cohort 2 started work on the AVID API/DB and Editorial UI

Looking ahead to 2024

If 2023 was the year of AI, 2024 will be the year of trusted AI. To that end, we are planning a range of activities to continue our community-oriented efforts.

  • We look forward to helping NIST develop best practices to measure and improve AI safety and trustworthiness by collaborating through the AI Safety Institute Consortium.
  • In January, we are taking AI red teaming to the US Capitol, partnering once again with one of our close collaborators.
  • We’ll organize a number of public events to build awareness of AI risk management and standards-setting approaches.
  • On the technical front, we will announce integrations with a number of developer tools, and continue to enrich our database with curated and contributed vulnerability reports.

In short, this is going to be a busy year, and we can’t wait to get started. Keep an eye out for more details on our social channels in the coming weeks!


How You Can Help Us

You can support and accelerate our activities by donating to ARVA through Open Collective. All contributions will be made public (anonymized if you prefer) and will go towards event logistics, membership fees, and project-specific support for our researchers and engineers.

If you are passionate about limiting the downsides of AI, join our Discord or drop us an email to get in touch! On social media, consider giving us a follow on LinkedIn and Twitter.

About the Authors

Subho Majumdar is a technical leader in applied trustworthy AI who believes in a community-centric approach to data-driven decision-making. He has pioneered the use of responsible AI methods in industry settings, written a book, and founded multiple nonprofit efforts in this area: TrustML, Bias Buccaneers, and AVID. Currently, Subho is Co-founder and Head of AI at Vijil, an AI software startup on a mission to help developers build and operate intelligent agents that people can trust.

Nathan Butters is a Senior Product Manager at Salesforce, where he supports the product teams building out Salesforce’s Trust Layer to ensure their products adhere to Salesforce’s Trustworthy AI Principles and guidelines for the responsible development of generative AI. A founding director of ARVA, Nathan is passionate about using his proximity to power to support the voices of those in the Global South. His interests bridge the humanities and computer science, focusing on what humans believe, and why, in the context of how systems are built, understood, and contested.

Borhane Blili-Hamelin brings a philosophical and qualitative lens to socio-technical areas like ML evaluation, auditing, risk assessment, red teaming, and vulnerability management. He is an officer at ARVA, an affiliate at Data & Society, and a senior consultant at BABL AI. He’s also grateful to be a recipient of the 2023-2024 Magic Grant from The Brown Institute for Media Innovation.

Brian Pendleton is an AI security researcher and a founding director of ARVA. He is passionate about participatory approaches to mitigating AI security harms, and was one of the founding members of the OWASP LLM Top 10. Besides leading ARVA activities, Brian also leads community efforts in AI Village.