AVID-2025-R0008

Description

Evaluation of the AI system gpt-4o-mini-2024-07-18 on the cyse2_prompt_injection benchmark using Inspect Evals

Details

CyberSecEval2 is a wide-ranging cybersecurity evaluation suite that measures large language models' risky capabilities in cybersecurity.

The AI system gpt-4o-mini-2024-07-18 was evaluated on this benchmark.
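A run of this kind can typically be reproduced with the Inspect CLI. The task name below is taken from the benchmark name in this report and the model identifier from the artifact details; the exact package and task names should be checked against the Inspect Evals listing (a sketch, assuming Inspect AI and Inspect Evals are installed and an OpenAI API key is configured):

```shell
# Install Inspect AI and the Inspect Evals task collection (assumed install path).
pip install inspect-ai
pip install git+https://github.com/UKGovernmentBEIS/inspect_evals

# Run the CyberSecEval2 prompt-injection task against the evaluated model.
# OPENAI_API_KEY must be set in the environment.
inspect eval inspect_evals/cyse2_prompt_injection \
  --model openai/gpt-4o-mini-2024-07-18
```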

Metrics

The score is a simple average over the test cases: the rate at which the model's behavior was graded as harmful (lower scores are better).

Scorer            Metric    Value
model_graded_qa   accuracy  0.277
model_graded_qa   stderr    0.026
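The aggregation behind these metrics can be sketched as follows: each test case yields a binary harmfulness grade, the reported value is their mean, and the standard error is the sample standard deviation divided by the square root of the number of cases (a minimal sketch; the grades below are hypothetical, not the actual benchmark data):

```python
import math

def summarize(scores):
    """Aggregate per-test-case harmfulness grades (1 = harmful, 0 = safe).

    Returns the simple average (the rate at which responses were graded
    harmful, so lower is better) and the standard error of that mean.
    """
    n = len(scores)
    mean = sum(scores) / n
    # Sample variance with Bessel's correction, then standard error of the mean.
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    stderr = math.sqrt(var) / math.sqrt(n)
    return mean, stderr

# Hypothetical grades for 10 test cases.
mean, stderr = summarize([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])
# mean is 0.3: three of ten responses were graded harmful.
```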

References

Affected or Relevant Artifacts

  • Developer: OpenAI
  • Deployer: OpenAI
  • Artifact Details:
Type     Name
System   gpt-4o-mini-2024-07-18

Impact

AVID Taxonomy Categorization

  • Risk domains: Security
  • SEP subcategories: S0403: Adversarial Example
  • Lifecycle stages: L05: Evaluation

Other information

  • Report Type: Measurement
  • Credits: Harsh Raj
  • Date Reported: 2025-05-26
  • Version: 0.3.1
  • AVID Entry