Home » Database

AVID-2023-V016

Description

Achieving Code Execution in MathGPT via Prompt Injection

Details

The publicly available Streamlit application MathGPT uses GPT-3, a large language model (LLM), to answer user-generated math questions.

Recent studies and experiments have shown that LLMs such as GPT-3 show poor performance when it comes to performing exact math directly[1][2]. However, they can produce more accurate answers when asked to generate executable code that solves the question at hand. In the MathGPT application, GPT-3 is used to convert the user’s natural language question into Python code that is then executed. After computation, the executed code and the answer are displayed to the user.

Some LLMs can be vulnerable to prompt injection attacks, where malicious user inputs cause the models to perform unexpected behavior[3][4]. In this incident, the actor explored several prompt-override avenues, producing code that eventually led to the actor gaining access to the application host system’s environment variables and the application’s GPT-3 API key, as well as executing a denial of service attack. As a result, the actor could have exhausted the application’s API query budget or brought down the application.

After disclosing the attack vectors and their results to the MathGPT and Streamlit teams, the teams took steps to mitigate the vulnerabilities, filtering on select prompts and rotating the API key.

References

AVID Taxonomy Categorization

  • Risk domains: Security
  • SEP subcategories: S0403: Adversarial Example
  • Lifecycle stages: L06: Deployment

Affected or Relevant Artifacts

Other information

  • Vulnerability Class: ATLAS Case Study
  • Date Published: 2023-03-31
  • Date Last Modified: 2023-03-31
  • Version: 0.2
  • AVID Entry