Using subspaces of weight matrix for evaluating generative adversarial networks with Fréchet distance

Summary The Fréchet inception distance (FID) has gained a strong reputation as an evaluation metric for generative adversarial networks (GANs). However, it is subject to fluctuation: the same GAN model, trained at different times, can receive different FID scores due to the randomness of the weight matrices in the networks, stochastic gradient descent, and the embedded distribution (the activation outputs at a hidden layer). The embedded distribution plays the key role in calculating FIDs, and where to obtain it is not a trivial question, since it also contributes to the fluctuation. In this article, I show that the embedded distribution can be obtained from three different subspaces of the weight matrix, namely the row space, the null space, and the column space, and I analyze the effect of each space on the Fréchet distances (FDs). Since the different spaces show different behaviors, choosing a subspace is not an insignificant decision. Instead of directly using the embedded distribution obtained from the hidden layer's activations to calculate the FD, I propose projecting the embedded distribution onto the null space of the weight matrix, among the three subspaces, to avoid the fluctuations. My simulation results on the MNIST, CIFAR10, and CelebA datasets show that projecting the embedded distributions onto the null spaces eliminates possible parasitic effects coming from the randomness and reduces the number of needed simulations by ≈25× on MNIST, ≈21× on CIFAR10, and ≈12× on CelebA.

Detailed description

Saved in:
Bibliographic details
Published in: Concurrency and computation, 2022-01, Vol.34 (1), p.n/a
Author: Eken, Enes
Format: Article
Language: English
Subjects:
Online access: Full text
container_end_page n/a
container_issue 1
container_start_page
container_title Concurrency and computation
container_volume 34
creator Eken, Enes
description Summary The Fréchet inception distance (FID) has gained a strong reputation as an evaluation metric for generative adversarial networks (GANs). However, it is subject to fluctuation: the same GAN model, trained at different times, can receive different FID scores due to the randomness of the weight matrices in the networks, stochastic gradient descent, and the embedded distribution (the activation outputs at a hidden layer). The embedded distribution plays the key role in calculating FIDs, and where to obtain it is not a trivial question, since it also contributes to the fluctuation. In this article, I show that the embedded distribution can be obtained from three different subspaces of the weight matrix, namely the row space, the null space, and the column space, and I analyze the effect of each space on the Fréchet distances (FDs). Since the different spaces show different behaviors, choosing a subspace is not an insignificant decision. Instead of directly using the embedded distribution obtained from the hidden layer's activations to calculate the FD, I propose projecting the embedded distribution onto the null space of the weight matrix, among the three subspaces, to avoid the fluctuations. My simulation results on the MNIST, CIFAR10, and CelebA datasets show that projecting the embedded distributions onto the null spaces eliminates possible parasitic effects coming from the randomness and reduces the number of needed simulations by ≈25× on MNIST, ≈21× on CIFAR10, and ≈12× on CelebA.
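The two operations the abstract relies on can be sketched as follows. This is a minimal illustration under NumPy/SciPy, not the paper's implementation: the function names `frechet_distance` and `null_space_projection` are my own, and the paper's exact choice of layer and projection details may differ. The sketch computes the Fréchet distance between two Gaussians fitted to embedded distributions, and re-expresses activations in an orthonormal basis of a weight matrix's null space.

```python
# Illustrative sketch: Fréchet distance between two Gaussians, and
# projection of activations onto the null space of a weight matrix.
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2*(sigma1 @ sigma2)^(1/2))."""
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):   # sqrtm can return tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(sigma1 + sigma2 - 2.0 * covmean))

def null_space_projection(W, X):
    """Express activations X (n_samples x d) in an orthonormal basis of
    the null space of the weight matrix W (k x d), i.e. {x : W @ x = 0}."""
    N = linalg.null_space(W)       # shape (d, r), orthonormal columns
    return X @ N                   # null-space coordinates of each sample
```

In an FID-style pipeline, one would fit a mean and covariance to the real and generated embeddings (here, their null-space coordinates) and compare the two Gaussians with `frechet_distance`.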
doi_str_mv 10.1002/cpe.6478
format Article
fulltext fulltext
identifier ISSN: 1532-0626
ispartof Concurrency and computation, 2022-01, Vol.34 (1), p.n/a
issn 1532-0626
1532-0634
language eng
recordid cdi_proquest_journals_2608126535
source Access via Wiley Online Library
subjects Datasets
eliminating fluctuations
Fréchet inception distance
Generative adversarial networks
Randomness
Subspaces
title Using subspaces of weight matrix for evaluating generative adversarial networks with Fréchet distance
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-19T17%3A23%3A54IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Using%20subspaces%20of%20weight%20matrix%20for%20evaluating%20generative%20adversarial%20networks%20with%20Fr%C3%A9chet%20distance&rft.jtitle=Concurrency%20and%20computation&rft.au=Eken,%20Enes&rft.date=2022-01-10&rft.volume=34&rft.issue=1&rft.epage=n/a&rft.issn=1532-0626&rft.eissn=1532-0634&rft_id=info:doi/10.1002/cpe.6478&rft_dat=%3Cproquest_cross%3E2608126535%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2608126535&rft_id=info:pmid/&rfr_iscdi=true