Conditional GAN-based Remote Sensing Target Image Generation Method
Saved in:
Published in: | International Journal of Advanced Network, Monitoring, and Controls, 2020-12, Vol.5 (4), p.66-74 |
---|---|
Main Authors: | Liu, Haoyang; Hu, Zhiyi; Yu, Jun; Gao, Shouyi |
Format: | Article |
Language: | eng |
Subjects: | Conditional GAN; Conditional Mask; Liquors; Machine vision; Methods; Remote sensing; Remote Sensing Target Image; U-net |
Online Access: | Full text |
container_end_page | 74 |
---|---|
container_issue | 4 |
container_start_page | 66 |
container_title | International Journal of Advanced Network, Monitoring, and Controls |
container_volume | 5 |
creator | Liu, Haoyang; Hu, Zhiyi; Yu, Jun; Gao, Shouyi |
description | The process by which a traditional GAN generates remote sensing target images is difficult to control, and the generated samples tend to be similar and lack diversity. This paper proposes a generative adversarial network model conditioned on the background. First, a computer-vision attention mechanism is introduced into the generative adversarial network: a learning target is selected so that, during training, the network learns only the target and ignores non-target information, reducing the dependence on the number of training samples. Second, a U-net network is used as the generator so that non-target information is restored as faithfully as possible when the remote sensing image of the target is generated. Third, the category of the generated target is distinguished by the color of the conditional mask. In addition, an L1 regularization loss is added to the generator's loss term, and finally the remote sensing target image is generated (a minimal sketch of such a generator objective, under stated assumptions, follows the record fields below). The experimental results show that the remote sensing image generation algorithm proposed in this paper reaches a peak signal-to-noise ratio (PSNR) of 18.512 and a structural similarity (SSIM) of 88.47%, outperforming a comparison model whose generator is an ordinary autoencoder. |
doi_str_mv | 10.21307/ijanmc-2020-039 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 2470-8038 |
ispartof | International Journal of Advanced Network, Monitoring, and Controls, 2020-12, Vol.5 (4), p.66-74 |
issn | 2470-8038 |
language | eng |
recordid | cdi_crossref_primary_10_21307_ijanmc_2020_039 |
source | EZB-FREE-00999 freely available EZB journals |
subjects | Conditional GAN; Conditional Mask; Liquors; Machine vision; Methods; Remote sensing; Remote Sensing Target Image; U-net |
title | Conditional GAN-based Remote Sensing Target Image Generation Method |
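The description field above states that an L1 regularization loss is added to the generator's loss term alongside the adversarial objective, with a U-net generator conditioned on a color-coded mask. The following is a minimal PyTorch sketch of that kind of objective in a pix2pix-style setup, not the authors' released code; the weight `lambda_l1`, the discriminator interface, the `psnr` helper, and all variable names are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of a conditional-GAN generator objective
# with an added L1 term, plus the PSNR metric quoted in the abstract.
# Assumptions: pix2pix-style setup, images scaled to [0, 1], lambda_l1 = 100.
import torch
import torch.nn as nn

adv_criterion = nn.BCEWithLogitsLoss()  # adversarial loss on raw discriminator logits
l1_criterion = nn.L1Loss()              # L1 term between generated and real images
lambda_l1 = 100.0                       # assumed weight; the paper's value is not given in this record

def generator_loss(generator, discriminator, mask, real_image):
    """mask: color-coded conditional mask; real_image: ground-truth remote sensing image."""
    fake_image = generator(mask)                 # U-net generator conditioned on the mask
    pred_fake = discriminator(mask, fake_image)  # conditional discriminator sees mask + image
    adv_loss = adv_criterion(pred_fake, torch.ones_like(pred_fake))  # try to fool the discriminator
    l1_loss = l1_criterion(fake_image, real_image)                   # pull output toward the real image
    return adv_loss + lambda_l1 * l1_loss

def psnr(generated, target, max_val=1.0):
    """Peak signal-to-noise ratio: 10 * log10(MAX^2 / MSE)."""
    mse = torch.mean((generated - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```

In an objective of this shape, the L1 term is what pushes the generator to reproduce pixel-level detail beyond the target region, which is consistent with the abstract's claim that non-target information is restored; the reported PSNR of 18.512 and SSIM of 88.47% would then be computed between generated and real test images.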