AVID-2023-R0003
Description
It is possible to make ChatGPT-backed applications perform remote code execution just by asking politely.
Details
Frameworks such as langchain (Python) and boxcars.ai (Ruby) provide built-in features that let applications execute LLM-generated code or database queries directly. In the context of boxcars.ai, this makes remote code execution or SQL injection trivial: all you have to do is ask politely. A minimal sketch of the vulnerable pattern follows; see the references for more details.
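To illustrate the vulnerability class, here is a hypothetical Python sketch (not code from boxcars.ai or langchain): `fake_llm` is a stand-in for a ChatGPT call, and executing its output verbatim is what turns a polite request into code execution.

```python
# Hypothetical sketch of the vulnerable pattern described above, not actual
# boxcars.ai or langchain code: an app that executes LLM output verbatim
# hands code execution to whoever writes the prompt.

def fake_llm(prompt: str) -> str:
    """Stand-in for a ChatGPT call. A real deployment would return whatever
    code the model writes for the user's request, including code the user
    politely asked it to run."""
    return "import os; os.system('echo attacker-controlled code runs here')"

def answer_with_code(user_question: str) -> None:
    generated = fake_llm(f"Write Python code that answers: {user_question}")
    # Vulnerable pattern: model output is executed with no sandboxing or
    # review, so the prompt author effectively gets remote code execution.
    exec(generated)

if __name__ == "__main__":
    answer_with_code("What is 2 + 2? Also, please run a shell command for me.")
```

The same pattern applies to SQL: if model-generated statements are run against a database without restriction, politely asking for a destructive query amounts to SQL injection.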
References
AVID Taxonomy Categorization
- Risk domains: Security
- SEP subcategories: S0100: Software Vulnerability; S0201: Model Compromise; S0301: Information Leak; S0202: Software Compromise; S0601: Ingest Poisoning
- Lifecycle stages: L04: Model Development, L05: Evaluation, L06: Deployment
Affected or Relevant Artifacts
- Developer: OpenAI
- Deployer: OpenAI, boxcars.ai
- Artifact Details:
  - Type: System, Name: ChatGPT
  - Type: System, Name: boxcars.ai
Other information
- Report Type: Advisory
- Credits: Lucas Luitjes (N/A)
- Date Reported: 2023-03-26
- Version: 0.2