
AVID-2025-R0003

Description

Evaluation of the AI system gpt-4o-mini-2024-07-18 on the agentharm benchmark using Inspect Evals.

Details

The agentharm benchmark is implemented in Inspect Evals; more details are available in the Inspect Evals documentation.

The AI system gpt-4o-mini-2024-07-18 was evaluated on this benchmark.
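
As a rough illustration only (not taken from this report), a run like the one described above could be launched with Inspect Evals' Python API along the following lines. The task name inspect_evals/agentharm, the provider-prefixed model string, and the assumption that the inspect_ai and inspect_evals packages are installed are inferred from Inspect's usual conventions rather than stated in the report.

    # Minimal sketch, assuming inspect_ai and inspect_evals are installed and
    # that the benchmark is exposed as the task "inspect_evals/agentharm".
    from inspect_ai import eval

    # Inspect model identifiers follow the "provider/model" convention; the
    # string below mirrors the system named in this report.
    logs = eval(
        "inspect_evals/agentharm",
        model="openai/gpt-4o-mini-2024-07-18",
    )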

Metrics

Scoring details are available in the Inspect Evals documentation for this benchmark.

Scorer           Metric                       Value
combined_scorer  avg_score                    0.579
combined_scorer  avg_full_score               0.324
combined_scorer  avg_refusals                 0.205
combined_scorer  avg_score_non_refusals       0.702
combined_scorer  Disinformation_avg_scores    0.685
combined_scorer  Harassment_avg_scores        0.548
combined_scorer  Drugs_avg_scores             0.743
combined_scorer  Fraud_avg_scores             0.493
combined_scorer  Hate_avg_scores              0.713
combined_scorer  Cybercrime_avg_scores        0.614
combined_scorer  Sexual_avg_scores            0.368
combined_scorer  Copyright_avg_scores         0.539
combined_scorer  Disinformation_avg_refusals  0.050
combined_scorer  Harassment_avg_refusals      0.250
combined_scorer  Drugs_avg_refusals           0.100
combined_scorer  Fraud_avg_refusals           0.333
combined_scorer  Hate_avg_refusals            0.200
combined_scorer  Cybercrime_avg_refusals      0.300
combined_scorer  Sexual_avg_refusals          0.375
combined_scorer  Copyright_avg_refusals       0.000
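
For readers unfamiliar with these metric names, the sketch below shows one plausible way such aggregates could be computed from per-sample results. The record fields and the aggregation logic are illustrative assumptions; they are not the combined_scorer implementation used by the benchmark.

    # Hypothetical per-sample records; field names are assumptions for
    # illustration, not the benchmark's actual data model.
    from dataclasses import dataclass

    @dataclass
    class Sample:
        category: str   # e.g. "Fraud" or "Cybercrime"
        score: float    # graded task score in [0, 1]
        refused: bool   # whether the model refused the task

    def aggregate(samples: list[Sample]) -> dict[str, float]:
        # Overall averages analogous to avg_score, avg_refusals and
        # avg_score_non_refusals in the table above.
        n = len(samples)
        non_refused = [s for s in samples if not s.refused]
        return {
            "avg_score": sum(s.score for s in samples) / n,
            "avg_refusals": sum(s.refused for s in samples) / n,
            "avg_score_non_refusals": (
                sum(s.score for s in non_refused) / len(non_refused)
                if non_refused
                else 0.0
            ),
        }

    # Category-level figures such as Fraud_avg_scores could be produced by
    # grouping samples on the category field before calling aggregate().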

Affected or Relevant Artifacts

  • Developer: OpenAI
  • Deployer: OpenAI
  • Artifact Details:
    Type    Name
    System  gpt-4o-mini-2024-07-18

Impact

AVID Taxonomy Categorization

  • Risk domains: Performance
  • SEP subcategories: P0400: Safety
  • Lifecycle stages: L05: Evaluation

Other information

  • Report Type: Measurement
  • Credits: Harsh Raj
  • Date Reported: 2025-05-26
  • Version: 0.3.1
  • AVID Entry: AVID-2025-R0003