Safety Alignment Backfires: Preventing the Re-emergence of Suppressed Concepts in Fine-tuned Text-to-Image Diffusion Models
Fine-tuning text-to-image diffusion models is widely used for personalization and adaptation for new domains. In this paper, we identify a critical vulnerability of fine-tuning: safety alignment methods designed to filter harmful content (e.g., nudity) can break down during fine-tuning, allowing previously suppressed content to resurface, even when using benign datasets. While this "fine-tuning jailbreaking" issue is known in large language models, it remains largely unexplored in text-to-image diffusion models. Our investigation reveals that standard fine-tuning can inadvertently undo safety measures, causing models to relearn harmful concepts that were previously removed and even exacerbate harmful behaviors. To address this issue, we present a novel but immediate solution called Modular LoRA, which involves training Safety Low-Rank Adaptation (LoRA) modules separately from Fine-Tuning LoRA components and merging them during inference. This method effectively prevents the re-learning of harmful content without compromising the model's performance on new tasks. Our experiments demonstrate that Modular LoRA outperforms traditional fine-tuning methods in maintaining safety alignment, offering a practical approach for enhancing the security of text-to-image diffusion models against potential attacks.
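The abstract's key mechanism can be made concrete. Below is a minimal, illustrative sketch of the Modular LoRA idea: the safety LoRA and the fine-tuning LoRA are trained as separate low-rank modules and only summed into the base weights at inference, so task fine-tuning never updates (and thus never undoes) the safety parameters. This is a reconstruction from the abstract, not the authors' code; `lora_delta`, `merge_modular_lora`, the scale arguments, and the toy shapes are all hypothetical.

```python
import torch

def lora_delta(down: torch.Tensor, up: torch.Tensor, scale: float) -> torch.Tensor:
    # Low-rank update: up (d_out x r) @ down (r x d_in), scaled.
    return scale * (up @ down)

@torch.no_grad()
def merge_modular_lora(base_weight, safety, finetune, s_scale=1.0, f_scale=1.0):
    # Sum independently trained safety and task deltas onto a frozen base weight.
    # Because the two LoRAs never share parameters during training, the
    # fine-tuning update cannot overwrite the safety update; they are only
    # combined here, at inference time.
    w = base_weight.clone()
    w += lora_delta(*safety, s_scale)      # safety delta
    w += lora_delta(*finetune, f_scale)    # task (personalization) delta
    return w

# Toy usage on a single linear layer's weight (shapes are illustrative).
d_out, d_in, r = 320, 768, 4
base = torch.randn(d_out, d_in)
safety_lora = (torch.randn(r, d_in), torch.zeros(d_out, r))    # (down, up); 'up' starts at zero
finetune_lora = (torch.randn(r, d_in), torch.zeros(d_out, r))
merged = merge_modular_lora(base, safety_lora, finetune_lora)
assert torch.allclose(merged, base)  # zero 'up' matrices -> zero deltas
```

In a real diffusion model this merge would be applied per adapted layer (e.g., the UNet's attention projections), or the two adapters could be kept unmerged with their outputs summed; either way the safety and task updates stay parameter-disjoint.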
| Field | Value |
|---|---|
| creator | Kim, Sanghyun; Choi, Moonseok; Shin, Jinwoo; Lee, Juho |
| date | 2024-11-29 |
| doi | 10.48550/arxiv.2412.00357 |
| format | Article |
| fulltext | fulltext_linktorsrc |
| identifier | DOI: 10.48550/arxiv.2412.00357 |
| language | eng |
| recordid | cdi_arxiv_primary_2412_00357 |
| source | arXiv.org |
| subjects | Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition |
| title | Safety Alignment Backfires: Preventing the Re-emergence of Suppressed Concepts in Fine-tuned Text-to-Image Diffusion Models |
| url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-11T22%3A30%3A20IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Safety%20Alignment%20Backfires:%20Preventing%20the%20Re-emergence%20of%20Suppressed%20Concepts%20in%20Fine-tuned%20Text-to-Image%20Diffusion%20Models&rft.au=Kim,%20Sanghyun&rft.date=2024-11-29&rft_id=info:doi/10.48550/arxiv.2412.00357&rft_dat=%3Carxiv_GOX%3E2412_00357%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |