ARVA Announces Support for Open Letter on Voluntary Safe Harbor Protections for Good Faith Testing of Generative AI Systems

By Borhane Blili-Hamelin, Carol Anderson, Jekaterina Novikova, Ric McLaughlin, Nathan Butters, Brian Pendleton, and Subho Majumdar

Voluntary safe harbor protections are crucial to a thriving AI risk management ecosystem.


At the AI Risk and Vulnerability Alliance (ARVA), we want to see a world where communities are empowered to identify and mitigate the harmful flaws in AI systems that affect them. This can't happen without outside oversight from journalists, civil society, researchers, and countless other communities.

Yet this kind of independent testing is legally risky: researchers risk having their accounts suspended, or even being sued for violating terms of service.

We also heed an insight from work like the Algorithmic Justice League's report "Bug Bounties for Algorithmic Harms": voluntary safe harbor policies and vulnerability disclosure programs can drive deeper structural change by giving organizations the opportunity to build the capacity to translate disclosures of harmful flaws into changes that meaningfully mitigate harm.

ARVA is thrilled to support this open letter, written by a group of researchers including ARVA’s Borhane Blili-Hamelin, calling on organizations to adopt voluntary safe harbor protections for good faith testing of generative AI systems.

The letter’s main points:

  • Independent evaluation is necessary for public awareness, transparency, and accountability of high-impact generative AI systems.

  • Currently, AI companies' policies can chill independent evaluation.

  • AI companies should provide basic protections and more equitable access for good faith AI safety and trustworthiness research.

Please sign the letter if you agree and share it with your network!

We are proud to have played a part in co-authoring both the letter and the accompanying research paper.

We are grateful to Nitasha Tiku for reporting on this effort at The Washington Post.

We are also grateful to the Brown Institute for Media Innovation, whose Magic Grant helped fund our time on this effort.