AVID Announces Integration with Giskard Scan

Giskard Scan results can now be exported as AVID reports.

 · 3 min · Matteo Dora, Subho Majumdar

AI Vulnerability Database: 2023 in review

A recap of AVID's work in 2023, and a note of gratitude.

 · 6 min · Subho Majumdar, Nathan Butters, Borhane Blili-Hamelin, Brian Pendleton

AVID Announces BiasAware - A Powerful Tool for Detecting Bias in Datasets

A new tool to detect bias in datasets, developed by AVID and hosted on Hugging Face.

 · 7 min · Freyam Mehta, Sudipta Ghosh, Carol Anderson, Jekaterina Novikova, and Subho Majumdar

AVID Announces Integration with garak, a Vulnerability Scanner for LLMs

Vulnerabilities found by garak can now be easily converted into AVID reports.

 · 4 min · Leon Derczynski, Carol Anderson, and Subho Majumdar

ARVA Response to the NTIA AI Accountability Policy Request for Comment

Public comments from ARVA in response to NTIA's Request for Comment on AI Accountability Policy.

 · 5 min · Carol Anderson

Large Language Models Can Steal Work and Spill Secrets. Here's Why We Should Care.

An overview of the risks to privacy and intellectual property posed by large language models.

 · 8 min · Carol Anderson

Guardrails on Large Language Models, Part 4: Content Moderation

A non-technical introduction to the major guardrails on systems like ChatGPT. Part 4 of a four-part …

 · 7 min · Carol Anderson

Guardrails on Large Language Models, Part 3: Prompt Design

A non-technical introduction to the major guardrails on systems like ChatGPT. Part 3 of a four-part …

 · 12 min · Carol Anderson

Guardrails on Large Language Models, Part 2: Model Fine-Tuning

A non-technical introduction to the major guardrails on systems like ChatGPT. Part 2 of a four-part …

 · 5 min · Carol Anderson