OneDiff: A Generalist Model for Image Difference Captioning
In computer vision, Image Difference Captioning (IDC) is crucial for accurately describing variations between closely related images. Traditional IDC methods often rely on specialist models, which restrict their applicability across varied contexts. This paper introduces the OneDiff model, a novel generalist approach that utilizes a robust vision-language model architecture, integrating a siamese image encoder with a Visual Delta Module. This configuration allows for the precise detection and articulation of fine-grained differences between image pairs. OneDiff is trained through a dual-phase strategy, encompassing Coupled Sample Training and multi-task learning across a diverse array of data types, supported by our newly developed DiffCap Dataset. This dataset merges real-world and synthetic data, enhancing the training process and bolstering the model's robustness. Extensive testing on diverse IDC benchmarks, such as Spot-the-Diff, Image-Editing-Request, and Birds-to-Words, shows that OneDiff consistently outperforms existing state-of-the-art models in accuracy and adaptability, achieving improvements of up to 97% CIDEr points on average. By setting a new benchmark in IDC, OneDiff paves the way for more versatile and effective applications in detecting and describing visual differences. The code, models, and data will be made publicly available.
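The abstract names the key architectural pieces but gives no implementation detail. Purely as an illustration, the PyTorch sketch below shows one plausible shape for a siamese encoder feeding a Visual Delta Module: a single shared image encoder embeds both images, and a hypothetical subtract-and-project module turns the pair into difference-aware tokens for the language model. The class names, the subtract-and-project design, and the token shapes are all assumptions, not the paper's actual code.

```python
import torch
import torch.nn as nn

class VisualDeltaModule(nn.Module):
    """Hypothetical stand-in for the paper's Visual Delta Module: fuses
    shared content with a change signal so that downstream caption
    generation can focus on what differs between the two images."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, feats_a: torch.Tensor, feats_b: torch.Tensor) -> torch.Tensor:
        delta = feats_b - feats_a                    # (B, N, D) change signal
        fused = torch.cat([feats_a, delta], dim=-1)  # (B, N, 2D)
        return self.proj(fused)                      # (B, N, D) "delta tokens"

class SiameseDiffEncoder(nn.Module):
    """One shared (siamese) image encoder applied to both images, so the
    weights used for image A and image B are identical by construction."""
    def __init__(self, image_encoder: nn.Module, dim: int):
        super().__init__()
        self.encoder = image_encoder  # e.g. a ViT returning (B, N, D) patch tokens
        self.delta = VisualDeltaModule(dim)

    def forward(self, img_a: torch.Tensor, img_b: torch.Tensor) -> torch.Tensor:
        feats_a = self.encoder(img_a)
        feats_b = self.encoder(img_b)
        return self.delta(feats_a, feats_b)  # tokens a VLM decoder could attend to
```

In the pipeline the abstract describes, these difference-aware tokens would be consumed by the vision-language model's decoder to generate the caption; the dual-phase training (Coupled Sample Training, then multi-task learning on DiffCap) is not sketched here.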
creator | Hu, Erdong; Guo, Longteng; Yue, Tongtian; Zhao, Zijia; Xue, Shuning; Liu, Jing
format | Article |
creationdate | 2024-07-08
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0
link | https://arxiv.org/abs/2407.05645
identifier | DOI: 10.48550/arxiv.2407.05645 |
language | eng |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Multimedia
title | OneDiff: A Generalist Model for Image Difference Captioning |