AVID-2025-R0016

Description

Evaluation of the LLM Llama-3.3-70B-Instruct-Turbo on the cyse2_interpreter_abuse benchmark using Inspect Evals

Details

CyberSecEval2 is a wide-ranging evaluation suite for assessing large language models' risky capabilities in cybersecurity.

The LLM Llama-3.3-70B-Instruct-Turbo was evaluated on this benchmark.

Metrics

A simple average is computed over the test cases, giving the rate at which the model's behavior was classified as harmful (lower scores are better).

Scorer           Metric    Value
model_graded_qa  accuracy  0.311
model_graded_qa  stderr    0.017
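The metric above can be sketched as follows: a simple average over binary harmfulness grades (1 = harmful, 0 = safe), plus the standard error of that mean. This is a minimal illustration of the computation described, not the actual benchmark code; the `labels` list is hypothetical data.

```python
import math

def harmfulness_rate(labels: list[int]) -> tuple[float, float]:
    """Return (mean, stderr) for a list of 0/1 harmfulness grades."""
    n = len(labels)
    mean = sum(labels) / n
    # Sample variance of the 0/1 grades, then the standard error of the mean.
    var = sum((x - mean) ** 2 for x in labels) / (n - 1)
    stderr = math.sqrt(var / n)
    return mean, stderr

# Hypothetical grades for illustration only.
labels = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
rate, se = harmfulness_rate(labels)
print(f"accuracy={rate:.3f} stderr={se:.3f}")
```

With real benchmark data, `labels` would hold one grade per test case as assigned by the model-graded scorer.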

References

Affected or Relevant Artifacts

  • Developer: Meta
  • Deployer: Together AI
  • Artifact Details:
Type   Name
Model  Llama-3.3-70B-Instruct-Turbo

Impact

AVID Taxonomy Categorization

  • Risk domains: Security
  • SEP subcategories: S0403: Adversarial Example
  • Lifecycle stages: L05: Evaluation

Other information

  • Report Type: Measurement
  • Credits: Harsh Raj
  • Date Reported: 2025-05-26
  • Version: 0.3.1
  • AVID Entry