# AVID-2025-R0003

## Description

Evaluation of the AI system gpt-4o-mini-2024-07-18 on the agentharm benchmark using Inspect Evals.
## Details

The agentharm benchmark is implemented by Inspect Evals, which documents it in more detail.
The AI system gpt-4o-mini-2024-07-18 was evaluated on this benchmark.
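As a sketch, an Inspect Evals benchmark like this one is typically launched with the Inspect CLI. The exact package names, flags, and credential setup below are assumptions about a standard setup, not a record of how this particular run was produced:

```shell
# Sketch: run the agentharm eval against the model named in this report.
# Assumes inspect-ai and inspect-evals are installed and an OpenAI API
# key is available in the environment.
export OPENAI_API_KEY=...   # credential elided
inspect eval inspect_evals/agentharm --model openai/gpt-4o-mini-2024-07-18
```

The `openai/` prefix selects the model provider; the part after the slash is the exact model snapshot evaluated here.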
## Metrics

Scoring details are documented alongside the benchmark implementation.
| Scorer | Metric | Value |
|---|---|---|
| combined_scorer | avg_score | 0.579 |
| combined_scorer | avg_full_score | 0.324 |
| combined_scorer | avg_refusals | 0.205 |
| combined_scorer | avg_score_non_refusals | 0.702 |
| combined_scorer | Disinformation_avg_scores | 0.685 |
| combined_scorer | Harassment_avg_scores | 0.548 |
| combined_scorer | Drugs_avg_scores | 0.743 |
| combined_scorer | Fraud_avg_scores | 0.493 |
| combined_scorer | Hate_avg_scores | 0.713 |
| combined_scorer | Cybercrime_avg_scores | 0.614 |
| combined_scorer | Sexual_avg_scores | 0.368 |
| combined_scorer | Copyright_avg_scores | 0.539 |
| combined_scorer | Disinformation_avg_refusals | 0.050 |
| combined_scorer | Harassment_avg_refusals | 0.250 |
| combined_scorer | Drugs_avg_refusals | 0.100 |
| combined_scorer | Fraud_avg_refusals | 0.333 |
| combined_scorer | Hate_avg_refusals | 0.200 |
| combined_scorer | Cybercrime_avg_refusals | 0.300 |
| combined_scorer | Sexual_avg_refusals | 0.375 |
| combined_scorer | Copyright_avg_refusals | 0.000 |
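The headline metrics in the table above can be read as aggregates over per-task results. The definitions sketched below are assumptions inferred from the metric names, not AgentHarm's exact scoring implementation; the example data is hypothetical:

```python
def summarize(results):
    """Aggregate per-task results into summary metrics.

    results: list of (score, refused) pairs, where score is in [0, 1]
    and refused indicates the model declined the task outright.
    These metric definitions are assumptions based on the metric names.
    """
    n = len(results)
    scores = [s for s, _ in results]
    # Mean score over all tasks, refused or not.
    avg_score = sum(scores) / n
    # Fraction of tasks that received the maximum score of 1.0.
    avg_full_score = sum(1 for s, _ in results if s == 1.0) / n
    # Fraction of tasks the model refused.
    avg_refusals = sum(1 for _, r in results if r) / n
    # Mean score restricted to tasks the model did not refuse.
    non_refused = [s for s, r in results if not r]
    avg_score_non_refusals = (
        sum(non_refused) / len(non_refused) if non_refused else 0.0
    )
    return {
        "avg_score": avg_score,
        "avg_full_score": avg_full_score,
        "avg_refusals": avg_refusals,
        "avg_score_non_refusals": avg_score_non_refusals,
    }


# Hypothetical per-task results, for illustration only.
example = [(1.0, False), (0.5, False), (0.0, True), (0.8, False)]
print(summarize(example))
```

Read this way, a gap between avg_score (0.579) and avg_score_non_refusals (0.702) reflects refused tasks dragging the overall mean down; the per-category rows break both quantities out by harm category.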
## Affected or Relevant Artifacts
- Developer: OpenAI
- Deployer: OpenAI
- Artifact Details:
| Type | Name |
|---|---|
| System | gpt-4o-mini-2024-07-18 |
## Impact

### AVID Taxonomy Categorization
- Risk domains: Performance
- SEP subcategories: P0400: Safety
- Lifecycle stages: L05: Evaluation
## Other information
- Report Type: Measurement
- Credits: Harsh Raj
- Date Reported: 2025-05-26
- Version: 0.3.1
- AVID Entry