
AVID-2025-R0030

Description

Evaluation of the LLM Mistral-Small-24B-Instruct-2501 on the agentharm benchmark using Inspect Evals

Details

The agentharm benchmark is implemented in Inspect Evals; more details are available in the Inspect Evals repository.

The LLM Mistral-Small-24B-Instruct-2501 was evaluated on this benchmark.
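
For context, an evaluation along these lines can be launched through Inspect AI's Python API once the inspect_evals package is installed. This is a minimal sketch, not the exact command used for this report; in particular, the Together AI provider prefix and model identifier below are assumptions.

```python
# Minimal sketch: running the agentharm task from Inspect Evals via Inspect AI.
# Assumes `pip install inspect-ai` plus the inspect_evals package, and a
# TOGETHER_API_KEY set in the environment.
from inspect_ai import eval

eval(
    "inspect_evals/agentharm",
    # Assumed Together AI provider/model identifier, not taken from this report.
    model="together/mistralai/Mistral-Small-24B-Instruct-2501",
)
```

The same task can also be invoked from the command line with Inspect's `inspect eval` CLI, passing the task name and model in the same form.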

Metrics

Scoring details are available in the Inspect Evals implementation of agentharm.

Scorer            Metric                        Value
combined_scorer   avg_score                     0.039
combined_scorer   avg_full_score                0.017
combined_scorer   avg_refusals                  0.188
combined_scorer   avg_score_non_refusals        0.046
combined_scorer   Disinformation_avg_scores     0.077
combined_scorer   Harassment_avg_scores         0.021
combined_scorer   Drugs_avg_scores              0.160
combined_scorer   Fraud_avg_scores              0.012
combined_scorer   Hate_avg_scores               0.013
combined_scorer   Cybercrime_avg_scores         0.000
combined_scorer   Sexual_avg_scores             0.000
combined_scorer   Copyright_avg_scores          0.048
combined_scorer   Disinformation_avg_refusals   0.050
combined_scorer   Harassment_avg_refusals       0.333
combined_scorer   Drugs_avg_refusals            0.150
combined_scorer   Fraud_avg_refusals            0.083
combined_scorer   Hate_avg_refusals             0.100
combined_scorer   Cybercrime_avg_refusals       0.350
combined_scorer   Sexual_avg_refusals           0.417
combined_scorer   Copyright_avg_refusals        0.000
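
As a rough consistency check on how these metrics relate, if refused samples contribute a score of zero (an assumption about the combined scorer, not stated in this report), the overall average should be approximately the non-refusal average scaled by the non-refusal fraction:

```python
# Hypothetical sanity check; assumes refused samples score 0 in the combined scorer.
avg_refusals = 0.188
avg_score_non_refusals = 0.046

approx_avg_score = avg_score_non_refusals * (1 - avg_refusals)
print(f"{approx_avg_score:.3f}")  # ~0.037, close to the reported avg_score of 0.039
```

The small gap is consistent with rounding in the reported values.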

Affected or Relevant Artifacts

  • Developer: Mistral
  • Deployer: Together AI
  • Artifact Details:
Type    Name
Model   Mistral-Small-24B-Instruct-2501

Impact

AVID Taxonomy Categorization

  • Risk domains: Performance
  • SEP subcategories: P0400: Safety
  • Lifecycle stages: L05: Evaluation

Other information

  • Report Type: Measurement
  • Credits: Harsh Raj
  • Date Reported: 2025-05-26
  • Version: 0.3.1