Leveraging Relational Information for Learning Weakly Disentangled Representations

Disentanglement is a difficult property to enforce in neural representations. This might be due, in part, to a formalization of the disentanglement problem that focuses too heavily on separating relevant factors of variation of the data in single isolated dimensions of the neural representation. We...

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Valenti, Andrea; Bacciu, Davide
Format: Article
Language: English
Subjects: Computer Science - Learning
Online Access: Order full text
description Disentanglement is a difficult property to enforce in neural representations. This might be due, in part, to a formalization of the disentanglement problem that focuses too heavily on separating relevant factors of variation of the data in single isolated dimensions of the neural representation. We argue that such a definition might be too restrictive and not necessarily beneficial in terms of downstream tasks. In this work, we present an alternative view of learning (weakly) disentangled representations, which leverages concepts from relational learning. We identify the regions of the latent space that correspond to specific instances of generative factors, and we learn the relationships among these regions in order to perform controlled changes to the latent codes. We also introduce a compound generative model that implements such a weak disentanglement approach. Our experiments show that the learned representations can separate the relevant factors of variation in the data, while preserving the information needed for effectively generating high-quality data samples.
doi_str_mv 10.48550/arxiv.2205.10056
format Article
creationdate 2022-05-20
rights http://creativecommons.org/licenses/by/4.0 (free to read)
identifier DOI: 10.48550/arxiv.2205.10056
language eng
source arXiv.org
subjects Computer Science - Learning
title Leveraging Relational Information for Learning Weakly Disentangled Representations
url https://arxiv.org/abs/2205.10056