Hello World!

2 min · Kotti Sasikanth

The journey begins


Artificial Intelligence (AI) and Machine Learning (ML) have become pervasive in every walk of life. They are being used across industries and verticals for a variety of applications. With the advent of the deep learning revolution, the performance of these models has improved by leaps and bounds, resulting in the deployment of large-scale models in many critical applications. However, AI/ML models are prone to vulnerabilities along several coordinates, such as fairness, robustness, privacy, reliability, and alignment. To help the AI community mitigate these vulnerabilities, we are introducing the AI Vulnerability Database (AVID).

AVID is a place by the community, for the community, enabling AI practitioners to proactively navigate potential vulnerabilities in their AI pipelines. It houses a single taxonomy encompassing the different coordinates where potential vulnerabilities may occur, be it along the lines of security, ethics, or performance-related concerns pertaining to a model, dataset, or system. The AVID community also evaluates models and datasets for vulnerabilities, with responsible disclosure to interested parties.

We hope AVID helps organizations make AI applications and products safer and more inclusive.

We always welcome contributions from the community. You can reach out by joining our Discord server, or through the interest form here.

About the Author

Kotti Sasikanth is completing his MS in Artificial Intelligence at IIT Jodhpur, India. He is fascinated by the vast potential that AI as a technology offers for making our lives better, while ensuring that this technology remains socially inclusive. His interests span Computer Vision, Trusted AI, and ML. Sasikanth actively participates in open research activities in the OpenMined, ML Collective, and Cohere For AI communities.