Are We Using Autoencoders in a Wrong Way?
creator | Martino, Gabriele; Moroni, Davide; Martinelli, Massimo |
description | Autoencoders are certainly among the most studied and used Deep
Learning models: the idea behind them is to train a model to reconstruct its
own input. The peculiarity of these models is that they compress the
information through a bottleneck, creating what is called the latent space.
Autoencoders are generally used for dimensionality reduction, anomaly
detection and feature extraction. Given their simplicity and power, these
models have been extensively studied and extended. Examples are (i) the
Denoising Autoencoder, where the model is trained to reconstruct an image
from a noisy version of it; (ii) the Sparse Autoencoder, where the bottleneck
is created by a regularization term in the loss function; and (iii) the
Variational Autoencoder, where the latent space is used to generate new,
consistent data. In this article, we revisit the standard training of the
undercomplete Autoencoder, modifying the shape of the latent space without
using any explicit regularization term in the loss function. We force the
model to reconstruct not the observation given as input, but another one
sampled from the same class distribution. We also explore the behaviour of
the latent space when reconstructing a random sample drawn from the whole
dataset. |
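The modified training objective described in the abstract, reconstructing a different sample from the same class rather than the input itself, amounts to changing how (input, target) pairs are formed before standard autoencoder training. Below is a minimal sketch of such a pairing step, assuming NumPy arrays `X` (samples) and `y` (class labels); the helper name `make_same_class_pairs` is hypothetical and not taken from the paper:

```python
import numpy as np

def make_same_class_pairs(X, y, rng=None):
    """For each input X[i], pick a target drawn uniformly from the
    samples that share X[i]'s class label (avoiding X[i] itself
    whenever the class has more than one member)."""
    rng = np.random.default_rng(rng)
    targets = np.empty_like(X)
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        # sample a same-class partner index for each member of the class
        partners = rng.integers(0, len(idx), size=len(idx))
        if len(idx) > 1:
            # shift any sample that drew itself to a different member
            clash = partners == np.arange(len(idx))
            partners[clash] = (partners[clash] + 1) % len(idx)
        targets[idx] = X[idx[partners]]
    return X, targets
```

Training would then proceed as usual, except the reconstruction loss compares the decoder output against the same-class target instead of the input; the abstract's second variant would instead draw each target uniformly from the whole dataset.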
doi_str_mv | 10.48550/arxiv.2309.01532 |
format | Article |
language | eng |
recordid | cdi_arxiv_primary_2309_01532 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning |
title | Are We Using Autoencoders in a Wrong Way? |