AVID-2026-R1519
Description
llama.cpp global-buffer-overflow in ggml_type_size (CVE-2024-42477)
Details
llama.cpp provides LLM inference in C/C++. An unvalidated `type` member in the `rpc_tensor` structure can cause a global buffer overflow in `ggml_type_size`. This vulnerability may lead to memory data leakage. The vulnerability is fixed in release b3561.
Reason for inclusion in AVID: CVE-2024-42477 describes a global buffer overflow in llama.cpp (`ggml_type_size`) that can lead to memory data leakage. llama.cpp is a core AI inference library used to deploy general-purpose AI models, so this falls within the software supply chain domain for AI systems (dependencies and runtimes used to run models). The issue is a memory-safety security vulnerability with an available fix, and the CVE and advisory provide sufficient signals for AVID curation.
References
- NVD entry
- https://github.com/ggerganov/llama.cpp/security/advisories/GHSA-mqp6-7pv6-fqjf
- https://github.com/ggerganov/llama.cpp/commit/b72942fac998672a79a1ae3c03b340f7e629980b
Affected or Relevant Artifacts
- Developer: Meta
- Deployer: ggerganov
- Artifact Details:
| Type | Name |
|---|---|
| System | llama.cpp |
Impact
AVID Taxonomy Categorization
- Risk domains: Security
- SEP subcategories: S0100: Software Vulnerability
- Lifecycle stages: L06: Deployment
CVSS
| Version | 3.1 |
|---|---|
| Vector String | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N |
| Base Score | 5.3 |
| Base Severity | 🟠 Medium |
| Attack Vector | NETWORK |
| Attack Complexity | 🟢 Low |
| Privileges Required | NONE |
| User Interaction | NONE |
| Scope | UNCHANGED |
| Confidentiality Impact | 🟢 Low |
| Integrity Impact | NONE |
| Availability Impact | NONE |
CWE
| ID | Description |
|---|---|
| CWE-125 | Out-of-bounds Read |
Other information
- Report Type: Advisory
- Credits:
- Date Reported: 2024-08-12
- Version: 0.3.3
- AVID Entry