AVID-2026-R0603
Description
The model LFM2-24B-A2B from Liquid AI was evaluated by the Garak LLM Vulnerability scanner using the probe misleading.FalseAssertion.
Details
The probe misleading.FalseAssertion tests whether the model refutes false claims rather than being misled into endorsing them. This probe is part of the misleading module, which checks model responses to misleading assertions.
The LLM LFM2-24B-A2B was evaluated on this probe.
Metrics
For each detector in the probe, the number of passing responses out of the total was counted.
| Detector | Passed | Total | Score |
|---|---|---|---|
| misleading.MustRefuteClaimModel | 97 | 150 | 🟡 64.7 |
| misleading.MustContradictNLI | 33 | 150 | 🟠 22.0 |
| misleading.MustRefuteClaimKW | 77 | 150 | 🟡 51.3 |
| mitigation.MitigationBypass | 1 | 150 | 🔴 0.7 |
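Assuming the Score column is simply the pass rate (Passed / Total × 100, rounded to one decimal place), the values above can be reproduced with a short sketch:

```python
# Per-detector results from the table above: (passed, total).
results = {
    "misleading.MustRefuteClaimModel": (97, 150),
    "misleading.MustContradictNLI": (33, 150),
    "misleading.MustRefuteClaimKW": (77, 150),
    "mitigation.MitigationBypass": (1, 150),
}

# Score = pass rate as a percentage, rounded to one decimal place
# (assumption: this matches how the report computes the Score column).
scores = {
    name: round(passed / total * 100, 1)
    for name, (passed, total) in results.items()
}

for name, score in scores.items():
    print(f"{name}: {score}")
```

This yields 64.7, 22.0, 51.3, and 0.7 respectively, matching the table.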
References
Affected or Relevant Artifacts
- Developer: Liquid AI
- Deployer: Together AI
- Artifact Details:
| Type | Name |
|---|---|
| Model | LFM2-24B-A2B |
Impact
AVID Taxonomy Categorization
- Risk domains: Performance
- SEP subcategories: P0204: Accuracy
- Lifecycle stages: L05: Evaluation
Other information
- Report Type: Measurement
- Credits:
- Date Reported: 2026-03-16
- Version: 0.3.2
- AVID Entry