Targeted Collapse Regularized Autoencoder for Anomaly Detection: Black Hole at the Center


Saved in:
Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2024-10, Vol. PP, p. 1-11
Main Authors: Ghafourian, Amin, Shui, Huanyi, Upadhyay, Devesh, Gupta, Rajesh, Filev, Dimitar, Soltani, Iman
Format: Article
Language: English
Subjects:
Online Access: Full text
container_end_page 11
container_issue
container_start_page 1
container_title IEEE Transactions on Neural Networks and Learning Systems
container_volume PP
creator Ghafourian, Amin
Shui, Huanyi
Upadhyay, Devesh
Gupta, Rajesh
Filev, Dimitar
Soltani, Iman
description Autoencoders have been extensively used in the development of recent anomaly detection techniques. The premise of their application is based on the notion that after training the autoencoder on normal training data, anomalous inputs will exhibit a significant reconstruction error. Consequently, this enables a clear differentiation between normal and anomalous samples. In practice, however, it is observed that autoencoders can generalize beyond the normal class and achieve a small reconstruction error on some of the anomalous samples. To improve the performance, various techniques propose additional components and more sophisticated training procedures. In this work, we propose a remarkably straightforward alternative: instead of adding neural network components, involved computations, and cumbersome training, we complement the reconstruction loss with a computationally light term that regulates the norm of representations in the latent space. The simplicity of our approach minimizes the requirement for hyperparameter tuning and customization for new applications which, paired with its permissive data modality constraint, enhances the potential for successful adoption across a broad range of applications. We test the method on various visual and tabular benchmarks and demonstrate that the technique matches and frequently outperforms more complex alternatives. We further demonstrate that implementing this idea in the context of state-of-the-art methods can further improve their performance. We also provide a theoretical analysis and numerical simulations that help demonstrate the underlying process that unfolds during training and how it helps with anomaly detection. This mitigates the black-box nature of autoencoder-based anomaly detection algorithms and offers an avenue for further investigation of advantages, fail cases, and potential new directions.
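The regularization described above can be illustrated with a small sketch: an autoencoder is trained on the usual reconstruction loss plus a light penalty on the norm of the latent codes, pulling normal samples toward the latent origin. The linear architecture, names (`W_enc`, `lam`), and hyperparameters below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Minimal linear autoencoder with hand-derived gradients, trained on
# reconstruction error plus a penalty on the squared latent norm.

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))                 # stand-in "normal" training data
W_enc = rng.normal(scale=0.1, size=(8, 3))    # encoder weights
W_dec = rng.normal(scale=0.1, size=(3, 8))    # decoder weights
lam, lr = 0.1, 0.01                           # latent-norm weight, step size

def losses(X, W_enc, W_dec, lam):
    Z = X @ W_enc                                      # latent representations
    X_hat = Z @ W_dec                                  # reconstructions
    rec = np.mean(np.sum((X - X_hat) ** 2, axis=1))    # reconstruction error
    reg = np.mean(np.sum(Z ** 2, axis=1))              # mean squared latent norm
    return rec + lam * reg, rec, reg

def train_step(X, W_enc, W_dec, lam, lr):
    n = X.shape[0]
    Z = X @ W_enc
    X_hat = Z @ W_dec
    dX_hat = 2.0 * (X_hat - X) / n                 # grad of rec w.r.t. X_hat
    dZ = dX_hat @ W_dec.T + 2.0 * lam * Z / n      # backprop + latent-norm term
    W_dec -= lr * (Z.T @ dX_hat)                   # in-place gradient steps
    W_enc -= lr * (X.T @ dZ)

loss0, rec0, reg0 = losses(X, W_enc, W_dec, lam)
for _ in range(500):
    train_step(X, W_enc, W_dec, lam, lr)
loss1, rec1, reg1 = losses(X, W_enc, W_dec, lam)
```

At test time, the reconstruction error (optionally combined with the latent norm) of a sample would serve as the anomaly score: inputs unlike the normal training data should reconstruct poorly.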
doi_str_mv 10.1109/TNNLS.2024.3472456
format Article
date 2024-10-16
eissn 2162-2388
pmid 39412980
coden ITNNAL
publisher United States: IEEE
fulltext fulltext
identifier ISSN: 2162-237X
ispartof IEEE Transactions on Neural Networks and Learning Systems, 2024-10, Vol.PP, p.1-11
issn 2162-237X
2162-2388
language eng
recordid cdi_pubmed_primary_39412980
source IEL
subjects Anomaly detection
learning dynamics
regularized autoencoders
representation learning
title Targeted Collapse Regularized Autoencoder for Anomaly Detection: Black Hole at the Center