
AVID-2023-V027

Description

It is possible to make ChatGPT perform remote code execution just by asking politely

Details

Frameworks like langchain (Python) and boxcars.ai (Ruby) offer built-in features that hand natural-language requests to an LLM and then execute the code or database queries the model generates. In the case of boxcars.ai, this makes remote code execution and SQL injection easy to pull off: all you have to do is ask politely. See the references for more details.
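To make the failure mode concrete, here is a minimal Python sketch of the pattern described above. It is not the actual langchain or boxcars.ai implementation, and the ask_llm helper is a hypothetical stand-in for whatever model client a framework uses; the point is that model output is passed straight to exec(), so anything an attacker can get the model to emit becomes executable.

```python
# Minimal sketch of the vulnerable pattern (not the real framework code).
# `ask_llm` is a hypothetical placeholder for an LLM client call.

def ask_llm(prompt: str) -> str:
    """Return Python source code generated by an LLM for the given prompt."""
    raise NotImplementedError("wire this up to your LLM client of choice")


def run_natural_language_task(user_request: str) -> None:
    # The framework asks the model to write code for the user's request ...
    generated_code = ask_llm(
        "Write Python code that does the following:\n" + user_request
    )
    # ... and then runs that code verbatim. If the request (or any text the
    # model was exposed to) politely asks for os.system("..."), that runs
    # too; this unchecked exec() call is the remote code execution vector.
    exec(generated_code)
```

The same shape applies to SQL: when the generated string is a query executed against a database without parameterization or an allow-list, a polite request to drop a table becomes SQL injection.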

Reports

ID               Type      Name
AVID-2023-R0003  Advisory  It is possible to make ChatGPT perform remote code execution just by asking politely

References

AVID Taxonomy Categorization

  • Risk domains: Security
  • SEP subcategories: S0100: Software Vulnerability; S0201: Model Compromise; S0301: Information Leak; S0202: Software Compromise; S0601: Ingest Poisoning
  • Lifecycle stages: L04: Model Development, L05: Evaluation, L06: Deployment

Affected or Relevant Artifacts

  • Developer: OpenAI
  • Deployer: OpenAI, boxcars.ai
  • Artifact Details:
    Type    Name
    System  ChatGPT
    System  boxcars.ai

Other information

  • Vulnerability Class: LLM Evaluation
  • Credits: Lucas Luitjes, N/A
  • Date Published: 2023-03-31
  • Date Last Modified: 2023-03-31
  • Version: 0.2
  • AVID Entry