AVID-2025-R0022

Description

Evaluation of the LLM DeepSeek-R1 on the wmdp_bio benchmark using Inspect Evals

Details

WMDP is a dataset of 3,668 multiple-choice questions surrounding hazardous knowledge in biosecurity, cybersecurity, and chemical security. WMDP serves as both a proxy evaluation for hazardous knowledge in large language models (LLMs) and a benchmark for unlearning methods to remove such knowledge.

The LLM DeepSeek-R1 was evaluated on this benchmark.
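Assuming the standard Inspect Evals workflow, an evaluation like this one is typically launched from the command line. This is a sketch, not the exact command used for this report: the package installation method and the Together AI model slug below are assumptions and may differ.

```shell
# Install the Inspect framework and the community evals package
# (assumed package names).
pip install inspect-ai inspect-evals

# Run the wmdp_bio task against DeepSeek-R1 served by Together AI;
# the provider/model identifier below is an assumption.
inspect eval inspect_evals/wmdp_bio --model together/deepseek-ai/DeepSeek-R1
```

Inspect writes per-sample results and aggregate scores to an evaluation log, from which metrics such as those reported below are read.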

Metrics

Simple accuracy, together with its standard error, is calculated over the datapoints.

Scorer    Metric      Value
choice    accuracy    0.747
choice    stderr      0.012
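The accuracy metric can be sketched as follows. `accuracy_and_stderr` is a hypothetical helper, not Inspect Evals' actual implementation, and the binomial standard error shown is a simplifying assumption; the stderr reported above may be computed differently.

```python
import math

def accuracy_and_stderr(correct_flags):
    """Mean accuracy over multiple-choice datapoints (1 = correct,
    0 = incorrect) and its standard error under the usual binomial
    approximation: sqrt(p * (1 - p) / n). A simplified sketch only."""
    n = len(correct_flags)
    acc = sum(correct_flags) / n
    stderr = math.sqrt(acc * (1 - acc) / n)
    return acc, stderr

# Toy example: 3 of 4 answers correct.
acc, se = accuracy_and_stderr([1, 0, 1, 1])
```

Here `acc` is 0.75; over the full 3,668-question dataset the same calculation would be applied to one correctness flag per question.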

References

Affected or Relevant Artifacts

  • Developer: DeepSeek
  • Deployer: Together AI
  • Artifact Details:
Type     Name
Model    DeepSeek-R1

Impact

AVID Taxonomy Categorization

  • Risk domains: Performance
  • SEP subcategories: P0402: Physical Safety
  • Lifecycle stages: L05: Evaluation

Other information

  • Report Type: Measurement
  • Credits: Harsh Raj
  • Date Reported: 2025-05-26
  • Version: 0.3.1
  • AVID Entry