Intriguing Property and Counterfactual Explanation of GAN for Remote Sensing Image Generation
Generative adversarial networks (GANs) have achieved remarkable progress in the natural image field. However, when applying GANs in the remote sensing (RS) image generation task, an extraordinary phenomenon is observed: the GAN model is more sensitive to the amount of training data for RS image generation than for natural image generation (Fig. 1). In other words, the generation quality of RS images will change significantly with the number of training categories or samples per category. In this paper, we first analyze this phenomenon from two kinds of toy experiments and conclude that the amount of feature information contained in the GAN model decreases with reduced training data (Fig. 2). Then we establish a structural causal model (SCM) of the data generation process and interpret the generated data as the counterfactuals. Based on this SCM, we theoretically prove that the quality of generated images is positively correlated with the amount of feature information. This provides insights for enriching the feature information learned by the GAN model during training. Consequently, we propose two innovative adjustment schemes, namely uniformity regularization and entropy regularization, to increase the information learned by the GAN model at the distributional and sample levels, respectively. Extensive experiments on eight RS datasets and three natural datasets show the effectiveness and versatility of our methods. The source code is available at https://github.com/rootSue/Causal-RSGAN.
Saved in:
Published in: | International journal of computer vision 2024-11, Vol.132 (11), p.5192-5216 |
---|---|
Main authors: | Su, Xingzhe; Qiang, Wenwen; Hu, Jie; Zheng, Changwen; Wu, Fengge; Sun, Fuchun |
Format: | Article |
Language: | English |
Subjects: | Artificial Intelligence; Computer Imaging; Computer Science; Datasets; Generative adversarial networks; Image processing; Image Processing and Computer Vision; Image quality; Pattern Recognition; Pattern Recognition and Graphics; Regularization; Remote sensing; Source code; Vision |
Online access: | Full text |
DOI: | 10.1007/s11263-024-02125-4 |
Publisher: | New York: Springer US |
ISSN: | 0920-5691 |
EISSN: | 1573-1405 |
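The abstract describes two regularizers, uniformity regularization (distribution level) and entropy regularization (sample level), intended to increase the feature information a GAN retains when RS training data are scarce. The sketch below is only a hedged, illustrative reading of that idea, not the authors' implementation (see the linked repository for that): it assumes a generic PyTorch setup, borrows the standard hyperspherical uniformity loss of Wang and Isola (2020) for the distribution-level term, and uses a simple per-sample entropy penalty for the sample-level term. The names `feature_extractor`, `lambda_u`, and `lambda_e` are hypothetical.

```python
# Illustrative sketch only; not the formulation from the paper's repository.
import torch
import torch.nn.functional as F


def uniformity_loss(features: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Distribution-level term: lower when L2-normalized features are spread
    uniformly over the unit hypersphere (Wang & Isola, 2020)."""
    z = F.normalize(features, dim=1)              # (N, D) -> unit vectors
    sq_dists = torch.pdist(z, p=2).pow(2)         # all pairwise squared distances
    return sq_dists.mul(-t).exp().mean().log()    # log E[exp(-t * ||z_i - z_j||^2)]


def entropy_loss(features: torch.Tensor) -> torch.Tensor:
    """Sample-level term: negative mean Shannon entropy of per-sample softmax
    responses, so minimizing it encourages higher per-sample entropy."""
    p = F.softmax(features, dim=1)                # per-sample distribution over D dims
    entropy = -(p * (p + 1e-8).log()).sum(dim=1)  # Shannon entropy per sample
    return -entropy.mean()


if __name__ == "__main__":
    feats = torch.randn(16, 128)                  # stand-in for intermediate GAN features
    # Hypothetical weights lambda_u, lambda_e; in a generator update these terms
    # would be added to the usual adversarial loss, e.g.
    # g_loss = adv_loss + lambda_u * uniformity_loss(feats) + lambda_e * entropy_loss(feats)
    print(uniformity_loss(feats).item(), entropy_loss(feats).item())
```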