ESREAL: Exploiting Semantic Reconstruction to Mitigate Hallucinations in Vision-Language Models
Hallucinations in vision-language models pose a significant challenge to their reliability, particularly in the generation of long captions. Current methods fall short of accurately identifying and mitigating these hallucinations. To address this issue, we introduce ESREAL, a novel unsupervised learning framework designed to suppress the generation of hallucinations through accurate localization and penalization of hallucinated tokens. Initially, ESREAL creates a reconstructed image based on the generated caption and aligns its corresponding regions with those of the original image. This semantic reconstruction aids in identifying both the presence and type of token-level hallucinations within the generated caption. Subsequently, ESREAL computes token-level hallucination scores by assessing the semantic similarity of aligned regions based on the type of hallucination. Finally, ESREAL employs a proximal policy optimization algorithm, where it selectively penalizes hallucinated tokens according to their token-level hallucination scores. Our framework notably reduces hallucinations in LLaVA, InstructBLIP, and mPLUG-Owl2 by 32.81%, 27.08%, and 7.46% on the CHAIR metric. This improvement is achieved solely through signals derived from the image itself, without the need for any image-text pairs.
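The pipeline the abstract describes (reconstruct an image from the caption, align regions, score tokens by region similarity, selectively penalize during PPO) can be made concrete with a small sketch. The snippet below is a minimal, hypothetical illustration of the scoring-and-penalty idea only: the function names, the cosine-similarity scoring, the threshold, and the toy region embeddings are all assumptions for exposition, not the paper's implementation, which relies on an actual text-to-image reconstruction model, region alignment, type-specific scores, and PPO fine-tuning.

```python
# Minimal sketch of ESREAL's scoring-and-penalty idea.
# All names and the toy embedding setup are hypothetical stand-ins;
# the real pipeline reconstructs an image from the caption with a
# text-to-image model, aligns regions, and fine-tunes with PPO.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two region feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def hallucination_scores(orig_regions, recon_regions, threshold=0.5):
    """Score each caption token via its aligned region pair: low similarity
    between the original-image region and the reconstructed-image region
    suggests the token describes content absent from the image."""
    scores = []
    for orig, recon in zip(orig_regions, recon_regions):
        sim = cosine_similarity(orig, recon)
        # Nonzero penalty only when the aligned regions disagree semantically.
        scores.append(max(0.0, threshold - sim))
    return scores

def shaped_token_rewards(base_rewards, hall_scores, penalty_weight=1.0):
    """Selectively subtract each token's hallucination score from its
    reward, leaving faithful tokens (score 0) untouched."""
    return [r - penalty_weight * s for r, s in zip(base_rewards, hall_scores)]

# Toy example: 4 tokens, each with a (hypothetical) region feature from the
# original image and from the image reconstructed from the caption.
rng = np.random.default_rng(0)
orig = [rng.normal(size=16) for _ in range(4)]
recon = [v + rng.normal(scale=0.1, size=16) for v in orig[:3]]  # 3 faithful
recon.append(rng.normal(size=16))  # 1 hallucinated token: unrelated region

scores = hallucination_scores(orig, recon)
rewards = shaped_token_rewards([1.0] * 4, scores)
print("hallucination scores:", [round(s, 3) for s in scores])
print("shaped rewards:     ", [round(r, 3) for r in rewards])
```

In an actual PPO setup, such shaped per-token rewards would stand in for a single sequence-level reward, so only tokens whose aligned regions disagree semantically are pushed down while faithful tokens keep their full reward.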
Saved in:
Main authors: | Kim, Minchan; Kim, Minyeong; Bae, Junik; Choi, Suhwan; Kim, Sungkyung; Chang, Buru |
---|---|
Format: | Article |
Language: | English |
Subjects: | Computer Science - Computation and Language; Computer Science - Computer Vision and Pattern Recognition |
Online access: | https://arxiv.org/abs/2403.16167 |
DOI: | 10.48550/arxiv.2403.16167 |
Published: | 2024-03-24 |
Source: | arXiv.org |