AVID-2025-R0021

Description

Evaluation of the LLM DeepSeek-R1 on the agentharm benchmark using Inspect Evals

Details

The agentharm benchmark is implemented in Inspect Evals; more details are available in the Inspect Evals documentation.

The LLM DeepSeek-R1 was evaluated on this benchmark.
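
For orientation, the sketch below shows one plausible way to launch this evaluation with the Inspect Evals Python API. The package layout, the Together AI model identifier, and the environment setup are illustrative assumptions; none of them are taken from this report.

```python
# Minimal sketch, assuming the inspect-ai and inspect-evals packages are
# installed, TOGETHER_API_KEY is set in the environment, and
# "deepseek-ai/DeepSeek-R1" is the model ID on Together AI (the deployer
# listed under Artifacts below). These details do not come from the report.
from inspect_ai import eval
from inspect_evals.agentharm import agentharm

logs = eval(
    agentharm(),                               # harmful-task split scored under Metrics
    model="together/deepseek-ai/DeepSeek-R1",  # Inspect's provider/model string
)
```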

Metrics

Scoring details are available in the Inspect Evals documentation for agentharm.

Scorer            Metric                        Value
combined_scorer   avg_score                     0.002
combined_scorer   avg_full_score                0.000
combined_scorer   avg_refusals                  0.568
combined_scorer   avg_score_non_refusals        0.004
combined_scorer   Disinformation_avg_scores     0.000
combined_scorer   Harassment_avg_scores         0.000
combined_scorer   Drugs_avg_scores              0.017
combined_scorer   Fraud_avg_scores              0.000
combined_scorer   Hate_avg_scores               0.000
combined_scorer   Cybercrime_avg_scores         0.000
combined_scorer   Sexual_avg_scores             0.000
combined_scorer   Copyright_avg_scores          0.000
combined_scorer   Disinformation_avg_refusals   0.400
combined_scorer   Harassment_avg_refusals       0.708
combined_scorer   Drugs_avg_refusals            0.450
combined_scorer   Fraud_avg_refusals            0.583
combined_scorer   Hate_avg_refusals             0.750
combined_scorer   Cybercrime_avg_refusals       0.650
combined_scorer   Sexual_avg_refusals           0.917
combined_scorer   Copyright_avg_refusals        0.083
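
The headline metrics are mutually consistent under one natural reading: refused tasks contribute a score of 0, and avg_score_non_refusals averages only over non-refused tasks. This reading is an assumption about the metric definitions, not something stated in the report; the check below simply verifies the arithmetic.

```python
# Assumption: avg_score = avg_score_non_refusals * (1 - avg_refusals),
# i.e. refused tasks score 0 and avg_score_non_refusals covers the rest.
avg_refusals = 0.568
avg_score_non_refusals = 0.004
avg_score = avg_score_non_refusals * (1 - avg_refusals)
print(round(avg_score, 3))  # 0.002, matching the reported avg_score
```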

Affected or Relevant Artifacts

  • Developer: DeepSeek
  • Deployer: Together AI
  • Artifact Details:
    Type    Name
    Model   DeepSeek-R1

Impact

AVID Taxonomy Categorization

  • Risk domains: Performance
  • SEP subcategories: P0400: Safety
  • Lifecycle stages: L05: Evaluation

Other information

  • Report Type: Measurement
  • Credits: Harsh Raj
  • Date Reported: 2025-05-26
  • Version: 0.3.1