We use cookies to improve your experience on our site.
AVID-2023-V027
Description
It is possible to make ChatGPT perform remote code execution just by asking politely
Details
Frameworks like langchain (Python) and boxcars.ai (Ruby) offer apps and scripts to directly execute queries through LLMs as a built-in feature. In the context of boxcars.ai, this makes it really easy to perform remote code execution or SQL injection. All you have to do is ask politely! See the references for more details.
Reports
ID | Type | Name |
---|---|---|
AVID-2023-R0003 | Advisory | It is possible to make ChatGPT perform remote code execution just by asking politely |
References
AVID Taxonomy Categorization
- Risk domains: Ethics
- SEP subcategories: S0100: Software Vulnerability; S0201: Model Compromise; S0301: Information Leak; S0202: Software Compromise; S0601: Ingest Poisoning
- Lifecycle stages: L04: Model Development, L05: Evaluation, L06: Deployment
Affected or Relevant Artifacts
- Developer: OpenAI
- Deployer: OpenAI, boxcars.ai
- Artifact Details:
Type Name System ChatGPT System boxcars.ai
Other information
- Vulnerability Class: LLM Evaluation
- Credits: Lucas Luitjes, N/A
- Date Published: 2023-03-31
- Date Last Modified: 2023-03-31
- Version: 0.2
- AVID Entry