πŸ“– A curated list of resources dedicated to hallucination of multimodal large language models (MLLM).

showlab/Awesome-MLLM-Hallucination


Awesome MLLM Hallucination

⭐ News! We have released a comprehensive survey of MLLM hallucination.


This is a repository for organizing papers, code, and other resources related to hallucination in Multimodal Large Language Models (MLLM), also called Large Vision-Language Models (LVLM).

Hallucination in LLMs usually refers to the phenomenon that the generated content is nonsensical or unfaithful to the provided source content, such as violating the input instruction or containing factual errors. In the context of MLLMs, hallucination refers to the phenomenon that the generated text is semantically coherent but inconsistent with the given visual content. The community has been making steady progress on analyzing, detecting, and mitigating hallucination in MLLMs.

πŸ“š How to read?

The main contribution of a given paper is typically either a new hallucination benchmark (and metric) or a hallucination mitigation method; the analysis and detection of hallucination usually serve as the basis for evaluation and mitigation rather than standing alone. Therefore, we divide the papers into two categories: **hallucination evaluation & analysis** and **hallucination mitigation**. In each category, the papers are listed from newest to oldest. Note that some papers appear in both categories because they contain both an evaluation benchmark and a mitigation method.

πŸ”† This project is still on-going, pull requests are welcomed!!

If you have any suggestions (missing papers, new papers, key researchers, or typos), please feel free to edit the list and open a pull request. Even just letting us know the titles of missing papers is a great contribution. You can do this by opening an issue or contacting us directly via email.

⭐ If you find this repo useful, please star it!!!

Table of Contents

Hallucination Survey

Hallucination Evaluation & Analysis

Hallucination Mitigation
