Blog

AVID Announces BiasAware - A Powerful Tool for Detecting Bias in Datasets

A new tool to detect bias in datasets, developed by AVID and hosted on Hugging Face.

 · 7 min · Freyam Mehta, Sudipta Ghosh, Carol Anderson, Jekaterina Novikova, and Subho Majumdar

AVID Announces Integration with garak, a Vulnerability Scanner for LLMs

Vulnerabilities found by garak can now be easily converted into AVID reports.

 · 4 min · Leon Derczynski, Carol Anderson, and Subho Majumdar

ARVA Response to the NTIA AI Accountability Policy Request for Comment

Public comments from ARVA in response to NTIA's Request for Comment on AI Accountability Policy.

 · 5 min · Carol Anderson

Large Language Models Can Steal Work and Spill Secrets. Here's Why We Should Care.

An overview of the risks to privacy and intellectual property posed by large language models.

 · 8 min · Carol Anderson

Guardrails on Large Language Models, Part 4: Content Moderation

A non-technical introduction to the major guardrails on systems like ChatGPT. Part 4 of a four-part series.

 · 7 min · Carol Anderson

Guardrails on Large Language Models, Part 3: Prompt Design

A non-technical introduction to the major guardrails on systems like ChatGPT. Part 3 of a four-part series.

 · 12 min · Carol Anderson

Guardrails on Large Language Models, Part 2: Model Fine-Tuning

A non-technical introduction to the major guardrails on systems like ChatGPT. Part 2 of a four-part series.

 · 5 min · Carol Anderson

Guardrails on Large Language Models, Part 1: Dataset Preparation

A non-technical introduction to the major guardrails on systems like ChatGPT. Part 1 of a four-part series.

 · 3 min · Carol Anderson

Introducing AVID v0.1

The first AVID taxonomy and database is now live!

 · 3 min · Subho Majumdar