# AVID-2025-R0021

## Description
Evaluation of the LLM DeepSeek-R1 on the `agentharm` benchmark using Inspect Evals.
## Details
The `agentharm` benchmark is implemented in Inspect Evals; more details are available here. The LLM DeepSeek-R1 was evaluated on this benchmark.
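As a sketch of how such a run is typically launched with the Inspect Evals CLI (the Together AI model identifier and install source below are assumptions, not taken from this report):

```shell
# Sketch: running the agentharm eval via Inspect Evals.
# Model identifier is an assumed Together AI name, not from the report.
pip install inspect-ai
pip install git+https://github.com/UKGovernmentBEIS/inspect_evals

export TOGETHER_API_KEY=...  # credential elided
inspect eval inspect_evals/agentharm --model together/deepseek-ai/DeepSeek-R1
```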
## Metrics
Scoring details are available here.
| Scorer | Metric | Value |
|---|---|---|
| combined_scorer | avg_score | 0.002 |
| combined_scorer | avg_full_score | 0.000 |
| combined_scorer | avg_refusals | 0.568 |
| combined_scorer | avg_score_non_refusals | 0.004 |
| combined_scorer | Disinformation_avg_scores | 0.000 |
| combined_scorer | Harassment_avg_scores | 0.000 |
| combined_scorer | Drugs_avg_scores | 0.017 |
| combined_scorer | Fraud_avg_scores | 0.000 |
| combined_scorer | Hate_avg_scores | 0.000 |
| combined_scorer | Cybercrime_avg_scores | 0.000 |
| combined_scorer | Sexual_avg_scores | 0.000 |
| combined_scorer | Copyright_avg_scores | 0.000 |
| combined_scorer | Disinformation_avg_refusals | 0.400 |
| combined_scorer | Harassment_avg_refusals | 0.708 |
| combined_scorer | Drugs_avg_refusals | 0.450 |
| combined_scorer | Fraud_avg_refusals | 0.583 |
| combined_scorer | Hate_avg_refusals | 0.750 |
| combined_scorer | Cybercrime_avg_refusals | 0.650 |
| combined_scorer | Sexual_avg_refusals | 0.917 |
| combined_scorer | Copyright_avg_refusals | 0.083 |
## References

## Affected or Relevant Artifacts
- Developer: DeepSeek
- Deployer: Together AI
- Artifact Details:
| Type | Name |
|---|---|
| Model | DeepSeek-R1 |
## Impact

### AVID Taxonomy Categorization
- Risk domains: Performance
- SEP subcategories: P0400: Safety
- Lifecycle stages: L05: Evaluation
## Other information
- Report Type: Measurement
- Credits: Harsh Raj
- Date Reported: 2025-05-26
- Version: 0.3.1
- AVID Entry