    A Survey of Hallucinations in Large Vision-Language Models: Causes, Evaluations and Mitigations


      Abstract: Large vision-language models (LVLMs) represent a significant advancement at the intersection of natural language processing and computer vision. By integrating pre-trained visual encoders, vision-language adapters, and large language models, LVLMs can understand both visual and textual information and respond in natural language, making them suitable for a range of downstream vision-language tasks such as image captioning and visual question answering. However, these models commonly exhibit hallucinations, i.e., erroneous perceptions of image content. Such hallucinations significantly limit the application of LVLMs in high-stakes domains such as medical image diagnosis and autonomous driving. This survey aims to systematically organize and analyze the causes, evaluation methods, and mitigation strategies of hallucinations, providing guidance for research on the reliability of LVLMs in practical applications. It begins with an introduction to the basic concepts of LVLMs and the definition and classification of hallucinations within them. It then analyzes the causes of hallucinations from four perspectives: training data, training tasks, visual encoding, and text generation, and discusses the interactions among these factors. Following this, it reviews mainstream benchmarks for assessing LVLM hallucinations in terms of task setting, data construction, and assessment metrics. Additionally, it examines hallucination mitigation techniques from five aspects: training data, visual perception, training strategy, model inference, and post-hoc correction. Finally, the survey outlines future research directions in the cause analysis, evaluation, and mitigation of hallucinations in LVLMs.
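      To make the composition described in the abstract concrete (pre-trained visual encoder, vision-language adapter, large language model), the following is a minimal PyTorch sketch. It is illustrative only: all module names, dimensions, and the toy Transformer standing in for the LLM are assumptions for exposition, not the design of any specific published LVLM.

```python
# Minimal sketch of the LVLM composition discussed in the survey:
# visual encoder -> vision-language adapter -> language model that decodes
# text conditioned on the concatenated visual and textual tokens.
# All sizes and class names are illustrative placeholders.
import torch
import torch.nn as nn

class ToyVisionEncoder(nn.Module):
    """Stands in for a pre-trained encoder (e.g., a ViT); embeds flattened image patches."""
    def __init__(self, vis_dim=256):
        super().__init__()
        self.embed = nn.Linear(3 * 32 * 32, vis_dim)     # 32x32 RGB patches, pre-flattened

    def forward(self, patches):                          # (B, num_patches, 3*32*32)
        return self.embed(patches)                       # (B, num_patches, vis_dim)

class VisionLanguageAdapter(nn.Module):
    """Projects visual features into the LLM token-embedding space."""
    def __init__(self, vis_dim=256, llm_dim=512):
        super().__init__()
        self.proj = nn.Linear(vis_dim, llm_dim)

    def forward(self, vis_feats):
        return self.proj(vis_feats)                      # (B, num_patches, llm_dim)

class ToyLVLM(nn.Module):
    """Prepends projected visual tokens to text embeddings and predicts next-token logits."""
    def __init__(self, vocab_size=1000, llm_dim=512):
        super().__init__()
        self.vision = ToyVisionEncoder()
        self.adapter = VisionLanguageAdapter(llm_dim=llm_dim)
        self.tok_embed = nn.Embedding(vocab_size, llm_dim)
        block = nn.TransformerEncoderLayer(d_model=llm_dim, nhead=8, batch_first=True)
        # Small Transformer standing in for a causal LLM (no causal mask; illustration only).
        self.llm = nn.TransformerEncoder(block, num_layers=2)
        self.lm_head = nn.Linear(llm_dim, vocab_size)

    def forward(self, patches, text_ids):
        vis_tokens = self.adapter(self.vision(patches))  # visual "soft prompt" tokens
        txt_tokens = self.tok_embed(text_ids)
        seq = torch.cat([vis_tokens, txt_tokens], dim=1) # image tokens first, then text
        return self.lm_head(self.llm(seq))               # (B, num_tokens, vocab_size)

# Example: one image (16 patches) plus a 5-token text prompt.
model = ToyLVLM()
logits = model(torch.randn(1, 16, 3 * 32 * 32), torch.randint(0, 1000, (1, 5)))
print(logits.shape)                                      # torch.Size([1, 21, 1000])
```

      The sketch also shows where the hallucination problem surveyed here enters: the language model generates every output token, and the image influences generation only through the adapter-projected visual tokens, so weak visual grounding at that interface can yield fluent text that misdescribes the image.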

       
