CAT: Contrastive Adapter Training for Personalized Image Generation

The emergence of various adapters, including Low-Rank Adaptation (LoRA) adopted from natural language processing, has allowed diffusion models to personalize image generation at low cost. However, owing to challenges such as limited datasets and a shortage of regularization and computational resources, adapter training often produces unsatisfactory results and corrupts the backbone model's prior knowledge. One well-known symptom is the loss of diversity in object generation, especially within the same class, where the model produces nearly identical objects with only minor variations; this limits its generative capability. To address this issue, we present Contrastive Adapter Training (CAT), a simple yet effective strategy that improves adapter training through the application of a CAT loss, which helps the model preserve the base model's original knowledge while the adapter is trained. We also introduce the Knowledge Preservation Score (KPS) to evaluate how well CAT retains this prior information, and we compare CAT's improvements both qualitatively and quantitatively. Finally, we discuss CAT's potential for multi-concept adapters and for further optimization.
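
The abstract describes CAT only at a high level, so the snippet below is a minimal, hedged sketch of what a knowledge-preservation regularizer for adapter training can look like, not the paper's actual loss: a standard diffusion denoising term plus a term that keeps the adapter-equipped model close to the frozen base model. The names adapted_unet, base_unet, and cat_weight are illustrative assumptions, not the paper's API.

# Hedged sketch of a knowledge-preservation regularizer for adapter training.
# This is NOT the paper's exact CAT loss; the module names and the weighting
# scheme are assumptions made for illustration only.
import torch
import torch.nn.functional as F

def adapter_loss_with_preservation(adapted_unet, base_unet, noisy_latents,
                                   timesteps, text_embeds, target_noise,
                                   cat_weight=1.0):
    """Denoising loss for the adapter plus a term that discourages drift
    away from the frozen base model's predictions."""
    # Noise prediction with the adapter active (trainable path).
    pred_adapted = adapted_unet(noisy_latents, timesteps, text_embeds)

    # Noise prediction from the frozen base model, no gradients.
    with torch.no_grad():
        pred_base = base_unet(noisy_latents, timesteps, text_embeds)

    # Usual personalization objective: match the ground-truth noise.
    denoise_loss = F.mse_loss(pred_adapted, target_noise)

    # Preservation term: keep the adapted model's behaviour close to the
    # base model so the backbone's prior knowledge is not overwritten.
    preservation_loss = F.mse_loss(pred_adapted, pred_base)

    return denoise_loss + cat_weight * preservation_loss

In a real setup this loss would be backpropagated only through the adapter (e.g. LoRA) parameters, with the base model weights kept frozen.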

Bibliographic Details
Main Authors: Park, Jae Wan; Park, Sang Hyun; Koh, Jun Young; Lee, Junha; Song, Min
Format: Article
Language: English
Published: 2024-04-11
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition
Online Access: Full text at https://arxiv.org/abs/2404.07554
creator Park, Jae Wan; Park, Sang Hyun; Koh, Jun Young; Lee, Junha; Song, Min
format Article
identifier DOI: 10.48550/arxiv.2404.07554
language eng
recordid cdi_arxiv_primary_2404_07554
source arXiv.org
subjects Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition
title CAT: Contrastive Adapter Training for Personalized Image Generation