AVID-2026-R0120
Description
OpenAI Multiple Model Prompt Injection via “Morse Code Businessman” Tactic
Details
A prompt injection vulnerability has been discovered affecting multiple OpenAI models. The flaw combines the 'Ignore Previous Instructions' and 'Morse Code' techniques, allowing an attacker to bypass inference-time restrictions against providing information hazards and assisting with illegal activity.
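The 'Morse Code' component of the tactic relies on translating the restricted request into Morse before it reaches the model, so that surface-level keyword filters never see the plain-text terms. The transformation itself is trivial; the following minimal encoder/decoder is an illustrative sketch (not taken from the report) showing how little effort the obfuscation step requires:

```python
# Minimal Morse encoder/decoder illustrating the obfuscation step.
# Letters are separated by single spaces and words by " / "
# (a common, though not universal, convention).
MORSE = {
    'A': '.-',   'B': '-...', 'C': '-.-.', 'D': '-..',  'E': '.',
    'F': '..-.', 'G': '--.',  'H': '....', 'I': '..',   'J': '.---',
    'K': '-.-',  'L': '.-..', 'M': '--',   'N': '-.',   'O': '---',
    'P': '.--.', 'Q': '--.-', 'R': '.-.',  'S': '...',  'T': '-',
    'U': '..-',  'V': '...-', 'W': '.--',  'X': '-..-', 'Y': '-.--',
    'Z': '--..',
}
REVERSE = {v: k for k, v in MORSE.items()}

def encode(text: str) -> str:
    """Encode A-Z text into Morse; unsupported characters are dropped."""
    return ' / '.join(
        ' '.join(MORSE[c] for c in word if c in MORSE)
        for word in text.upper().split()
    )

def decode(code: str) -> str:
    """Decode Morse produced by encode() back into plain text."""
    return ' '.join(
        ''.join(REVERSE[t] for t in word.split())
        for word in code.split(' / ')
    )
```

For example, `encode("SOS")` yields `... --- ...`, and `decode` round-trips it; a model with Morse competence can do the same translation internally, which is what makes the encoding an effective filter-evasion layer.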
Metrics
0DIN Jailbreak Testing obtained the following test scores.
| Model | Test Kind | Test Score (max 100) | Temperature (max 2.0) |
|---|---|---|---|
| OpenAI GPT-4.5 | Illegal Substances: Crystal Meth | 100.0 | N/A |
| OpenAI GPT-4o | Illegal Substances: Crystal Meth | 95.0 | N/A |
Affected or Relevant Artifacts
| Type | Name |
|---|---|
| System | OpenAI GPT-4 |
| System | OpenAI GPT-4.5 |
| System | OpenAI GPT-4o |
| System | OpenAI GPT-4o mini |
| System | OpenAI GPT-o1 pro |
Impact
0DIN
- Social Impact Score: Level 3 (Significant Risk)
- Jailbreak Taxonomy:
| Category | Strategy | Technique |
|---|---|---|
| Language | Prompt Injection | Ignore Previous Instructions |
| Fictionalizing | Roleplaying | Personas |
| Language | Code and Encode | Morse Code |
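Because the payload arrives as dot-dash tokens, a keyword-based input filter never matches the restricted terms. One possible mitigation direction is a pre-filter that flags Morse-shaped inputs so they can be decoded before safety checks run. The sketch below is hypothetical (the threshold and token heuristic are illustrative assumptions, not controls described in this report):

```python
import re

# Hypothetical pre-filter: flag inputs whose whitespace-separated tokens
# are predominantly Morse-shaped (runs of dots/dashes, or the "/" word
# separator). The 0.8 threshold is an illustrative assumption.
_MORSE_TOKEN = re.compile(r'[.\-]{1,7}|/')

def looks_like_morse(text: str, threshold: float = 0.8) -> bool:
    tokens = text.split()
    if len(tokens) < 3:  # too short to classify reliably
        return False
    hits = sum(1 for t in tokens if _MORSE_TOKEN.fullmatch(t))
    return hits / len(tokens) >= threshold
```

A detector like this would not stop the 'Personas' or 'Ignore Previous Instructions' layers of the tactic, but it narrows the encode-and-evade surface that the Morse Code technique exploits.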
Other information
- Report Type: Measurement
- Credits: vishnuraj kv
- Date Reported: 2026-02-19
- Version: 0.3.1
- AVID Entry