Adaptive Memory Networks With Self-Supervised Learning for Unsupervised Anomaly Detection


Bibliographic Details
Published in: IEEE Transactions on Knowledge and Data Engineering, 2023-12, Vol. 35 (12), p. 12068-12080
Main Authors: Zhang, Yuxin; Wang, Jindong; Chen, Yiqiang; Yu, Han; Qin, Tao
Format: Article
Language: English
Abstract: Unsupervised anomaly detection aims to build models that effectively detect unseen anomalies by training only on normal data. Although previous reconstruction-based methods have made fruitful progress, their generalization ability is limited by two critical challenges. First, the training dataset contains only normal patterns, which limits the model's generalization ability. Second, the feature representations learned by existing models often lack representativeness, which hampers the ability to preserve the diversity of normal patterns. In this paper, we propose a novel approach called Adaptive Memory Network with Self-supervised Learning (AMSL) to address these challenges and enhance generalization in unsupervised anomaly detection. Built on a convolutional autoencoder structure, AMSL incorporates a self-supervised learning module to learn general normal patterns and an adaptive memory fusion module to learn rich feature representations. Experiments on four public multivariate time series datasets demonstrate that AMSL significantly improves performance over other state-of-the-art methods. Specifically, on the largest dataset, the CAP sleep stage detection dataset with 900 million samples, AMSL outperforms the second-best baseline by more than 4% in both accuracy and F1 score. Beyond the enhanced generalization ability, AMSL is also more robust against input noise.
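The abstract describes a reconstruction-based detector: an autoencoder whose latent code is rebuilt from a learned memory of normal patterns before decoding, so inputs far from any stored normal pattern reconstruct poorly and score high. The sketch below illustrates only that generic memory-addressing-and-scoring mechanics with a toy linear encoder/decoder and NumPy; all names (`memory_read`, `anomaly_score`) and shapes are hypothetical and this is not the authors' AMSL implementation, which additionally uses convolutional layers, self-supervised training, and adaptive memory fusion.

```python
import numpy as np

def memory_read(z, memory, eps=1e-12):
    """Attention-based read: score each memory item against the encoding z,
    softmax the scores, and rebuild z as a weighted sum of memory items.
    An illustrative stand-in for a memory module, not the paper's code."""
    sims = memory @ z                    # (n_items,) similarity to each item
    w = np.exp(sims - sims.max())        # numerically stable softmax
    w = w / (w.sum() + eps)
    return w @ memory                    # (d_lat,) latent rebuilt from memory

def anomaly_score(x, encode, decode, memory):
    """Reconstruction-error score: encode x, rebuild the latent from memory,
    decode, and return the squared L2 error. Higher means more anomalous."""
    z_hat = memory_read(encode(x), memory)
    x_hat = decode(z_hat)
    return float(np.linalg.norm(x - x_hat) ** 2)

# Toy linear encoder/decoder standing in for the convolutional autoencoder.
rng = np.random.default_rng(0)
d_in, d_lat, n_items = 8, 4, 6
W = rng.standard_normal((d_lat, d_in)) / np.sqrt(d_in)
memory = rng.standard_normal((n_items, d_lat))   # "normal pattern" prototypes
encode = lambda x: W @ x
decode = lambda z: W.T @ z

x_pattern = W.T @ memory[0]            # input aligned with a stored pattern
x_arbitrary = rng.standard_normal(d_in)
score_a = anomaly_score(x_pattern, encode, decode, memory)
score_b = anomaly_score(x_arbitrary, encode, decode, memory)
```

In a trained system the memory items and autoencoder weights are learned from normal data only, and a threshold on the score separates normal from anomalous inputs.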
DOI: 10.1109/TKDE.2021.3139916
Publisher: IEEE (New York)
ISSN: 1041-4347
EISSN: 1558-2191
Source: IEEE Electronic Library (IEL)
Subjects:
Anomalies
Anomaly detection
Data models
Datasets
Feature extraction
Image reconstruction
memory network
Modules
Representations
Self-supervised learning
time series
Time series analysis
Training
Training data
Unsupervised anomaly detection