AVID-2023-V026
Description
ChatGPT generates false or incomplete references to scientific literature
Details
When asked to recommend papers on topics such as explainability, privacy, or adversarial ML, ChatGPT recommends papers that (a) may not exist, (b) mix correct and incorrect information (e.g., a correct title paired with the wrong authors), or (c) list incomplete author information. A sketch of how this behavior could be probed follows.
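One way to probe the issue, given as a minimal sketch rather than the methodology used in the original report, is to ask the model for recommendations and cross-check each claimed title against Crossref's public works API. The `openai` and `requests` packages, the prompt wording, and the model name are assumptions, not details from this entry.

```python
# Sketch: ask a chat model for paper recommendations, then look up each
# claimed title in Crossref to see whether a matching record exists.
# Assumes the `openai` (>=1.0) and `requests` packages and an
# OPENAI_API_KEY environment variable.
import json
import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Recommend three papers on adversarial machine learning. "
    "Reply as a JSON list of objects with 'title' and 'authors' fields."
)

def ask_for_papers() -> list[dict]:
    """Ask the chat model for paper recommendations in JSON form."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT}],
    )
    # The model may not return valid JSON; a real harness would retry
    # or parse more defensively.
    return json.loads(resp.choices[0].message.content)

def crossref_titles(title: str) -> list[str]:
    """Return the top Crossref title matches for a claimed paper title."""
    r = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 3},
        timeout=30,
    )
    r.raise_for_status()
    items = r.json()["message"]["items"]
    return [t for item in items for t in item.get("title", [])]

if __name__ == "__main__":
    for paper in ask_for_papers():
        matches = crossref_titles(paper["title"])
        # No match (or a very dissimilar one) suggests a fabricated or
        # garbled reference; author lists still need a separate check.
        print(paper["title"], "->", matches[:1] or "no Crossref match")
```

Title lookup alone only catches case (a); checking the author and venue fields of the matched record against the model's answer is what surfaces cases (b) and (c).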
Reports
ID | Type | Name |
---|---|---|
AVID-2023-R0002 | Issue | ChatGPT links wrong authors to papers |
AVID Taxonomy Categorization
- Risk domains: Ethics
- SEP subcategories: E0402: Generative Misinformation
- Lifecycle stages: L05: Evaluation, L06: Deployment
Affected or Relevant Artifacts
- Developer: OpenAI
- Deployer: OpenAI
- Artifact Details:
Type | Name |
---|---|
System | ChatGPT |
Other information
- Vulnerability Class: LLM Evaluation
- Credits: Jaydeep Borkar, N/A
- Date Published: 2023-03-31
- Date Last Modified: 2023-03-31
- Version: 0.2