Image retrieval outperforms diffusion models on data augmentation

Many approaches have been proposed to use diffusion models to augment training datasets for downstream tasks, such as classification. However, diffusion models are themselves trained on large datasets, often with noisy annotations, and it remains an open question to what extent these models contribute to downstream classification performance. In particular, it remains unclear whether they generalize enough to improve over directly using the additional data from their pre-training process for augmentation. We systematically evaluate a range of existing methods for generating images from diffusion models and study new extensions to assess their benefit for data augmentation. Personalizing diffusion models towards the target data outperforms simpler prompting strategies. However, using the pre-training data of the diffusion model alone, via a simple nearest-neighbor retrieval procedure, leads to even stronger downstream performance. Our study explores the potential of diffusion models for generating new training data, and surprisingly finds that these sophisticated models are not yet able to beat a simple and strong image retrieval baseline on simple downstream vision tasks.
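For illustration only (not code from the paper): a minimal sketch of what a nearest-neighbor retrieval baseline of this kind could look like. Target images and the pre-training pool are assumed to already be embedded in some feature space (e.g. CLIP features); the function name, the random placeholder embeddings, and the choice of k are assumptions.

```python
# Sketch of retrieval-based augmentation: pull the pool images that are most
# similar (cosine similarity) to the small target dataset and add them as
# extra training data, instead of generating images with a diffusion model.
import numpy as np

def retrieve_augmentations(target_embs: np.ndarray,
                           pool_embs: np.ndarray,
                           k: int = 100) -> np.ndarray:
    """Return indices of the k pool images closest to any target image."""
    # L2-normalize so the dot product equals cosine similarity.
    t = target_embs / np.linalg.norm(target_embs, axis=1, keepdims=True)
    p = pool_embs / np.linalg.norm(pool_embs, axis=1, keepdims=True)
    sims = p @ t.T                    # (n_pool, n_target) similarity matrix
    best = sims.max(axis=1)           # each pool image's best match to the target set
    return np.argsort(-best)[:k]      # indices of the k most similar pool images

# Toy usage with random vectors standing in for real image embeddings.
rng = np.random.default_rng(0)
target = rng.normal(size=(20, 512))      # embeddings of the small target dataset
pool = rng.normal(size=(10_000, 512))    # embeddings of the pre-training pool
aug_idx = retrieve_augmentations(target, pool, k=100)
print(aug_idx[:10])                      # pool images to add as augmentation data
```

In practice the pool would be the diffusion model's pre-training corpus, and an approximate nearest-neighbor index would replace the dense similarity matrix for large pools.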

Bibliographic details
Main authors: Burg, Max F; Wenzel, Florian; Zietlow, Dominik; Horn, Max; Makansi, Osama; Locatello, Francesco; Russell, Chris
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning
DOI: 10.48550/arxiv.2304.10253
Published: 2023-04-20
Source: arXiv.org
Rights: http://creativecommons.org/licenses/by-nc-sa/4.0 (open access)
Online access: https://arxiv.org/abs/2304.10253