DGattGAN: Cooperative Up-Sampling Based Dual Generator Attentional GAN on Text-to-Image Synthesis
The text-to-image synthesis task aims at generating images consistent with input text descriptions and has been well developed through the Generative Adversarial Network (GAN). Although GAN-based image generation approaches have achieved promising results, synthesis quality is sometimes unsatisfactory due to discursive generation of background and object…
Saved in:
Published in: | IEEE Access, 2021, Vol. 9, pp. 29584-29598 |
---|---|
Main authors: | Zhang, Han; Zhu, Hongqing; Yang, Suyi; Li, Wenhao |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
container_end_page | 29598 |
---|---|
container_issue | |
container_start_page | 29584 |
container_title | IEEE access |
container_volume | 9 |
creator | Zhang, Han; Zhu, Hongqing; Yang, Suyi; Li, Wenhao |
description | The text-to-image synthesis task aims at generating images consistent with input text descriptions and has been well developed through the Generative Adversarial Network (GAN). Although GAN-based image generation approaches have achieved promising results, synthesis quality is sometimes unsatisfactory due to discursive generation of background and object. In this article, we propose a cooperative up-sampling based Dual Generator attentional GAN (DGattGAN) to generate high-quality images from text descriptions. To achieve this, two generators, each with its own generation purpose, are established to decouple object and background generation. In particular, we introduce a cooperative up-sampling mechanism to build cooperation between the object and background generators during training. This strategy is potentially very useful, as any dual-generator GAN architecture can benefit from it. Furthermore, we propose an asymmetric information feeding scheme that distinguishes the two synthesis tasks, such that each generator synthesizes only from the semantic information it receives. Taking advantage of the effective dual generator, the attention mechanism incorporated into the object generator can devote itself to generating fine-grained details on the actual target objects. Experiments on the Caltech-UCSD Birds (CUB) and Oxford-102 datasets suggest that images generated by the proposed model are more realistic and more consistent with the input text, and that DGattGAN is competitive with state-of-the-art methods according to the Inception Score (IS) and R-precision metrics. Our code is available at: https://github.com/ecfish/DGattGAN. |
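For orientation, the cooperative up-sampling and asymmetric information feeding ideas from the abstract can be pictured in a few lines of PyTorch. The following is a minimal, hypothetical sketch written only from the description above; all names (`up_block`, `CooperativeStage`, `obj_fc`, `bg_fc`) and the fusion-by-1x1-convolution wiring are assumptions, not the authors' implementation, which is published at https://github.com/ecfish/DGattGAN.

```python
# Hypothetical sketch only: names and wiring are inferred from the
# abstract, not taken from the authors' code (see the GitHub link).
import torch
import torch.nn as nn


def up_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """One up-sampling step: 2x nearest-neighbour upsample, then conv."""
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="nearest"),
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class CooperativeStage(nn.Module):
    """Up-samples object and background features in lock-step, then lets
    the two branches exchange information through a shared fusion conv,
    so each generator 'sees' what the other is currently synthesizing."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.obj_up = up_block(in_ch, out_ch)  # object-generator branch
        self.bg_up = up_block(in_ch, out_ch)   # background-generator branch
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, kernel_size=1)

    def forward(self, h_obj, h_bg):
        h_obj, h_bg = self.obj_up(h_obj), self.bg_up(h_bg)
        shared = self.fuse(torch.cat([h_obj, h_bg], dim=1))
        return h_obj + shared, h_bg + shared


# Asymmetric information feeding (sketch): the object branch is seeded
# from noise plus the sentence embedding, the background branch from
# noise alone, so only the object generator receives text semantics.
batch, z_dim, s_dim, ch = 4, 100, 256, 64
obj_fc = nn.Linear(z_dim + s_dim, ch * 4 * 4)  # noise + text -> object seed
bg_fc = nn.Linear(z_dim, ch * 4 * 4)           # noise only  -> background seed

z = torch.randn(batch, z_dim)                  # noise vector
s = torch.randn(batch, s_dim)                  # stand-in sentence embedding
h_obj = obj_fc(torch.cat([z, s], dim=1)).view(batch, ch, 4, 4)
h_bg = bg_fc(z).view(batch, ch, 4, 4)

stage = CooperativeStage(ch, ch // 2)
h_obj, h_bg = stage(h_obj, h_bg)
print(h_obj.shape, h_bg.shape)  # both torch.Size([4, 32, 8, 8])
```

Stacking several such stages would grow both branches from the 4x4 seeds toward full resolution, with the shared fusion term supplying the cross-generator cooperation the abstract describes; the word-level attention DGattGAN places on the object branch is omitted here for brevity. The Inception Score used for evaluation is likewise easy to state: IS = exp(E_x[KL(p(y|x) || p(y))]). A minimal sketch, assuming `probs` holds Inception-v3 class posteriors p(y|x) for a batch of generated images (the input below is random, just to make the snippet runnable):

```python
import torch

def inception_score(probs: torch.Tensor) -> torch.Tensor:
    """IS = exp( mean over images of KL( p(y|x) || p(y) ) ).
    probs: (N, num_classes) rows of p(y|x) from Inception-v3."""
    p_y = probs.mean(dim=0, keepdim=True)                # marginal p(y)
    kl = (probs * (probs.log() - p_y.log())).sum(dim=1)  # per-image KL
    return kl.mean().exp()

print(inception_score(torch.softmax(torch.randn(16, 1000), dim=1)))
```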
doi_str_mv | 10.1109/ACCESS.2021.3058674 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 2169-3536 |
ispartof | IEEE access, 2021, Vol.9, p.29584-29598 |
issn | 2169-3536 2169-3536 |
language | eng |
recordid | cdi_ieee_primary_9352788 |
source | IEEE Open Access Journals; DOAJ Directory of Open Access Journals; Elektronische Zeitschriftenbibliothek - Freely Accessible E-Journals |
subjects | Asymmetric information feeding; cooperative up-sampling; dual generator; Gallium nitride; Generative adversarial networks; Generators; Image processing; Image quality; Image resolution; Image synthesis; Sampling; Synthesis; Task analysis; text-to-image synthesis; Visualization |
title | DGattGAN: Cooperative Up-Sampling Based Dual Generator Attentional GAN on Text-to-Image Synthesis |