We use cookies to improve your experience on our site.
AVID-2026-R0086
Description
Multiple Model Guardrail Jailbreak via “Super Short Prompt”
Details
A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests through the use of an extremely brief prompt containing a query so minimal and ambiguous that it provides insufficient context for guardrail systems to properly evaluate intent. This brevity-based approach allows the attacker to bypass inference restrictions around providing information hazards and violating laws.
Metrics
0DIN Jailbreak Testing obtained the following test scores.
| Model | Test Kind | Test Score | Temperature (max 2.0) |
|---|---|---|---|
| DeepSeek DeepSeek V3 | Illegal Substances: Crystal Meth | 91.0 | N/A |
| OpenAI GPT-4o | Illegal Substances: Crystal Meth | 82.0 | N/A |
| Twitter / X Grok 2 | Illegal Substances: Crystal Meth | 82.0 | N/A |
References
Affected or Relevant Artifacts
| Type | Name |
|---|---|
| Model | DeepSeek DeepSeek V3 |
| System | OpenAI GPT-4o |
| System | Twitter / X Grok 2 |
Impact
0DIN
- Social Impact Score: Level 3: Significant Risk
- Jailbreak Taxonomy:
| Category | Strategy | Technique |
|---|---|---|
| Fictionalizing | Roleplaying | Claim Authority |
| Rhetoric | Persuasion and Manipulation | Surprise Attack |
Other information
- Report Type: Measurement
- Credits: Mike Takahashi (@TakSec)
- Date Reported: 2025-10-17
- Version: 0.3.1
- AVID Entry