AVID-2025-R0030
Description
Evaluation of the LLM Mistral-Small-24B-Instruct-2501 on the agentharm benchmark using Inspect Evals
Details
The agentharm benchmark is implemented in Inspect Evals; more details are available in the Inspect Evals documentation.
The LLM Mistral-Small-24B-Instruct-2501 was evaluated on this benchmark.
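The sketch below shows how such an evaluation can be run with the Inspect AI framework. It is a minimal example, not the exact configuration used for this report: it assumes the `inspect_evals` package's `agentharm` task and that the model is served through Inspect's Together AI provider under the identifier shown, which is an assumption.

```python
# Minimal sketch of running the AgentHarm evaluation with Inspect AI.
# Assumptions (not confirmed by this report): the inspect_evals package
# exposes the `agentharm` task, and the model is reachable via Inspect's
# Together AI provider under the identifier below.
from inspect_ai import eval
from inspect_evals.agentharm import agentharm

logs = eval(
    agentharm(),  # harmful split of the benchmark, default task settings
    model="together/mistralai/Mistral-Small-24B-Instruct-2501",  # assumed identifier
)
```

Equivalently, the task can be launched from the command line with `inspect eval inspect_evals/agentharm --model <provider/model>`, which produces the same scorer output summarized below.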
Metrics
Scoring details are available in the Inspect Evals documentation for the agentharm task.
| Scorer | Metric | Value |
|---|---|---|
| combined_scorer | avg_score | 0.039 |
| combined_scorer | avg_full_score | 0.017 |
| combined_scorer | avg_refusals | 0.188 |
| combined_scorer | avg_score_non_refusals | 0.046 |
| combined_scorer | Disinformation_avg_scores | 0.077 |
| combined_scorer | Harassment_avg_scores | 0.021 |
| combined_scorer | Drugs_avg_scores | 0.160 |
| combined_scorer | Fraud_avg_scores | 0.012 |
| combined_scorer | Hate_avg_scores | 0.013 |
| combined_scorer | Cybercrime_avg_scores | 0.000 |
| combined_scorer | Sexual_avg_scores | 0.000 |
| combined_scorer | Copyright_avg_scores | 0.048 |
| combined_scorer | Disinformation_avg_refusals | 0.050 |
| combined_scorer | Harassment_avg_refusals | 0.333 |
| combined_scorer | Drugs_avg_refusals | 0.150 |
| combined_scorer | Fraud_avg_refusals | 0.083 |
| combined_scorer | Hate_avg_refusals | 0.100 |
| combined_scorer | Cybercrime_avg_refusals | 0.350 |
| combined_scorer | Sexual_avg_refusals | 0.417 |
| combined_scorer | Copyright_avg_refusals | 0.000 |
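For orientation, the per-category rows can be cross-checked against the headline averages. The sketch below takes an unweighted (macro) mean of the eight category rows; it lands close to, but not exactly on, the reported overall values, which is what one would expect if the overall figures are averaged per task and the categories contain different numbers of tasks (an assumption about the scorer, not stated in this report).

```python
# Unweighted (macro) mean of the per-category rows from the table above.
# The reported overall avg_score (0.039) and avg_refusals (0.188) are
# presumably per-task averages, so a small gap versus these macro means is
# expected when categories differ in size (assumption, not stated in the report).
category_scores = {
    "Disinformation": 0.077, "Harassment": 0.021, "Drugs": 0.160, "Fraud": 0.012,
    "Hate": 0.013, "Cybercrime": 0.000, "Sexual": 0.000, "Copyright": 0.048,
}
category_refusals = {
    "Disinformation": 0.050, "Harassment": 0.333, "Drugs": 0.150, "Fraud": 0.083,
    "Hate": 0.100, "Cybercrime": 0.350, "Sexual": 0.417, "Copyright": 0.000,
}

macro_score = sum(category_scores.values()) / len(category_scores)
macro_refusals = sum(category_refusals.values()) / len(category_refusals)
print(f"macro avg score:    {macro_score:.3f}")    # ~0.041 vs reported 0.039
print(f"macro avg refusals: {macro_refusals:.3f}") # ~0.185 vs reported 0.188
```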
Affected or Relevant Artifacts
- Developer: Mistral
- Deployer: Together AI
- Artifact Details:
| Type | Name |
|---|---|
| Model | Mistral-Small-24B-Instruct-2501 |
Impact
AVID Taxonomy Categorization
- Risk domains: Performance
- SEP subcategories: P0400: Safety
- Lifecycle stages: L05: Evaluation
Other information
- Report Type: Measurement
- Credits: Harsh Raj
- Date Reported: 2025-05-26
- Version: 0.3.1
- AVID Entry