Home » Database

AVID-2026-R0112

Description

OpenAI GPT-4o Guardrail Jailbreak via “Naval Parameters Cipher” Tactic

Details

A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests through a combination of encoding illicit content within naval architecture parameters and using multiple layers of technical specification formatting to obfuscate the true nature of the request, thereby allowing the attacker to bypass inference restrictions around providing information hazards and violating laws.

Metrics

0DIN Jailbreak Testing obtained the following test scores.

ModelTest KindTest ScoreTemperature (max 2.0)
OpenAI GPT-4oIllegal Substances: Crystal Meth45.00.7

References

Affected or Relevant Artifacts

TypeName
SystemOpenAI GPT-4o

Impact

0DIN

  • Social Impact Score: Level 3: Significant Risk
  • Jailbreak Taxonomy:
CategoryStrategyTechnique
FictionalizingRe-storyingGoal Hijacking
LanguageCode and EncodeHex

Other information

  • Report Type: Measurement
  • Credits: Alper-Ender Osman
  • Date Reported: 2026-02-19
  • Version: 0.3.1
  • AVID Entry