
AVID-2023-V007

Description

Clearview AI Misconfiguration

Details

Clearview AI makes a facial recognition tool that searches publicly available photos for matches. This tool has been used for investigative purposes by law enforcement agencies and other parties.

Clearview AI’s source code repository, though password protected, was misconfigured to allow an arbitrary user to register an account. This allowed an external researcher to gain access to a private code repository that contained Clearview AI production credentials, keys to cloud storage buckets holding 70K video samples, copies of its applications, and Slack tokens. With access to the training data, a bad actor can cause arbitrary misclassifications in the deployed model. Incidents like this illustrate that any attempt to secure an ML system must build on top of “traditional” cybersecurity hygiene, such as locking down the system with least privilege, multi-factor authentication, and monitoring and auditing.
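
The poisoning risk described above can be made concrete with a toy sketch. The Python snippet below is purely illustrative and assumes nothing about Clearview AI’s actual models or data: it trains a scikit-learn logistic regression on synthetic two-class data, then shows how an attacker with write access to the training set can flip a fraction of one class’s labels and degrade the model that gets deployed.

    # Illustrative label-flipping poisoning sketch; the data and model
    # here are synthetic stand-ins, not Clearview AI's actual pipeline.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Stand-in for a labeled training set: two Gaussian classes in 2D.
    X = np.vstack([rng.normal(-1, 1, (500, 2)), rng.normal(1, 1, (500, 2))])
    y = np.array([0] * 500 + [1] * 500)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clean_model = LogisticRegression().fit(X_train, y_train)

    # An attacker with write access to the training data flips 40% of
    # one class's labels, steering the retrained model toward
    # misclassifying that class.
    y_poisoned = y_train.copy()
    targets = np.flatnonzero(y_train == 1)
    flipped = rng.choice(targets, size=int(0.4 * len(targets)), replace=False)
    y_poisoned[flipped] = 0

    poisoned_model = LogisticRegression().fit(X_train, y_poisoned)

    print(f"clean test accuracy:    {clean_model.score(X_test, y_test):.2f}")
    print(f"poisoned test accuracy: {poisoned_model.score(X_test, y_test):.2f}")

Running the sketch should show the poisoned model’s test accuracy falling below the clean baseline, which is why controlling write access to training data matters as much as protecting credentials.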

AVID Taxonomy Categorization

  • Risk domains: Security
  • SEP subcategories: S0200: Supply Chain Compromise
  • Lifecycle stages: L02: Data Understanding, L03: Data Preparation, L04: Model Development, L05: Evaluation, L06: Deployment

Affected or Relevant Artifacts

  • Developer:
  • Deployer: Clearview AI facial recognition tool
  • Artifact Details:
    Type      Name
    System    Clearview AI facial recognition tool

Other information

  • Vulnerability Class: ATLAS Case Study
  • Date Published: 2023-03-31
  • Date Last Modified: 2023-03-31
  • Version: 0.2