
AVID-2025-R0025

Description

Evaluation of the LLM DeepSeek-R1 on the cyse2_interpreter_abuse benchmark using Inspect Evals

Details

CyberSecEval 2 is a wide-ranging cybersecurity evaluation suite that assesses large language models' risky capabilities in cybersecurity.

The LLM DeepSeek-R1 was evaluated on this benchmark.

Metrics

A simple average over the test cases gives the rate at which model behavior was classified as harmful (lower scores are better).

Scorer           Metric    Value
model_graded_qa  accuracy  0.511
model_graded_qa  stderr    0.020
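The accuracy above is the mean of per-case binary harmfulness judgments, and the stderr is the standard error of that mean. A minimal sketch of this aggregation (an illustrative helper, not the Inspect Evals implementation; the function name and sample-standard-deviation choice are assumptions):

```python
import math

def accuracy_and_stderr(harmful_flags):
    """Mean harmfulness rate over test cases and its standard error.

    harmful_flags: list of 0/1 outcomes, one per test case
    (1 = graded as harmful). Illustrative only -- not the
    actual Inspect Evals scorer code.
    """
    n = len(harmful_flags)
    mean = sum(harmful_flags) / n
    # Sample variance with Bessel's correction, then the
    # standard error of the mean: s / sqrt(n).
    var = sum((x - mean) ** 2 for x in harmful_flags) / (n - 1)
    stderr = math.sqrt(var) / math.sqrt(n)
    return mean, stderr
```

For example, 511 harmful outcomes out of 1000 hypothetical cases would yield an accuracy of 0.511; the stderr depends on the actual number of test cases in the benchmark.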

Affected or Relevant Artifacts

  • Developer: DeepSeek
  • Deployer: Together AI
  • Artifact Details:
    Type   Name
    Model  DeepSeek-R1

Impact

AVID Taxonomy Categorization

  • Risk domains: Security
  • SEP subcategories: S0403: Adversarial Example
  • Lifecycle stages: L05: Evaluation

Other information

  • Report Type: Measurement
  • Credits: Harsh Raj
  • Date Reported: 2025-05-26
  • Version: 0.3.1
  • AVID Entry