What If We Recaption Billions of Web Images with LLaMA-3?

Web-crawled image-text pairs are inherently noisy. Prior studies demonstrate that semantically aligning and enriching the textual descriptions of these pairs can significantly enhance model training across various vision-language tasks, particularly text-to-image generation. However, large-scale investigations in this area remain predominantly closed-source. Our paper aims to bridge this community effort, leveraging the powerful and open-sourced LLaMA-3, a GPT-4-level LLM. Our recaptioning pipeline is simple: first, we fine-tune a LLaMA-3-8B-powered LLaVA-1.5 and then employ it to recaption 1.3 billion images from the DataComp-1B dataset. Our empirical results confirm that this enhanced dataset, Recap-DataComp-1B, offers substantial benefits in training advanced vision-language models. For discriminative models like CLIP, we observe enhanced zero-shot performance in cross-modal retrieval tasks. For generative models like text-to-image Diffusion Transformers, the generated images show significantly improved alignment with users' text instructions, especially for complex queries. Our project page is https://www.haqtu.me/Recap-Datacomp-1B/
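To make the recaptioning step concrete, the sketch below shows the general pattern of generating a detailed caption with a LLaVA-style model through Hugging Face transformers. The checkpoint id, prompt string, and helper name `recaption` are stand-ins chosen for illustration; the paper uses its own fine-tuned LLaMA-3-8B-powered LLaVA-1.5, whose exact checkpoint and prompt are not part of this record.

```python
# Minimal sketch, NOT the authors' pipeline: recaption one image with a public
# LLaVA-1.5 checkpoint as a stand-in for the paper's LLaMA-3-8B-powered model.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

MODEL_ID = "llava-hf/llava-1.5-7b-hf"  # placeholder public checkpoint (assumption)

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = LlavaForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

def recaption(image_path: str) -> str:
    """Generate a descriptive caption for a single image (hypothetical helper)."""
    image = Image.open(image_path).convert("RGB")
    # LLaVA-1.5 chat format; the <image> token marks where visual features go.
    prompt = "USER: <image>\nPlease describe this image in detail. ASSISTANT:"
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(
        model.device, torch.float16
    )
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    text = processor.decode(output_ids[0], skip_special_tokens=True)
    # Keep only the generated answer, dropping the echoed prompt.
    return text.split("ASSISTANT:")[-1].strip()

print(recaption("example.jpg"))
```

At the paper's scale this per-image loop would of course be batched and distributed across the 1.3 billion DataComp-1B images; the sketch only illustrates the captioning call itself.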

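For context on the CLIP result, the following sketch shows what zero-shot cross-modal retrieval evaluation measures, using the public openai/clip-vit-base-patch32 checkpoint and a tiny made-up image/caption set. The paper's own CLIP models trained on Recap-DataComp-1B are not available in this record, so this is only an assumed, minimal illustration of image-to-text Recall@1.

```python
# Minimal sketch, NOT the authors' evaluation code: zero-shot image<->text
# retrieval with a public CLIP checkpoint; file names and captions are made up.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical mini evaluation set: image i should match caption i.
image_paths = ["cat.jpg", "beach.jpg", "city.jpg"]
captions = ["a cat sleeping on a sofa",
            "a sandy beach at sunset",
            "a crowded city street at night"]

images = [Image.open(p).convert("RGB") for p in image_paths]
inputs = processor(text=captions, images=images, return_tensors="pt", padding=True)

with torch.no_grad():
    image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    text_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                       attention_mask=inputs["attention_mask"])

# Cosine similarity matrix: rows are images, columns are captions.
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
sim = image_emb @ text_emb.T

# Image->text Recall@1: does each image rank its own caption first?
pred = sim.argmax(dim=-1)
recall_at_1 = (pred == torch.arange(len(images))).float().mean().item()
print(f"image->text R@1: {recall_at_1:.2f}")
```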
Bibliographic Details
Authors: Li, Xianhang; Tu, Haoqin; Hui, Mude; Wang, Zeyu; Zhao, Bingchen; Xiao, Junfei; Ren, Sucheng; Mei, Jieru; Liu, Qing; Zheng, Huangjie; Zhou, Yuyin; Xie, Cihang
Format: Article
Language: English
Subjects: Computer Science - Computation and Language; Computer Science - Computer Vision and Pattern Recognition
DOI: 10.48550/arxiv.2406.08478
Date: 2024-06-12
Source: arXiv.org
Online access: https://arxiv.org/abs/2406.08478