M²D-VAE: Self-Supervised Probabilistic Temporal-Spatial Latent Representation Learning for Unsupervised Industrial Operational Applications Under Missing Value Interference
Due to sensor malfunctions and data transmission corruptions, the industrial process data collected commonly contain missing values. It poses a significant challenge for data-driven approaches in aggregating temporal-spatial correlations that reflect dependencies across both variables and times, which makes it difficult to directly carry out downstream industrial operational applications.
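As a rough illustration of the masked-reconstruction idea the abstract describes, a minimal NumPy sketch is shown below. This is not the paper's actual objective: the function names, the Gaussian noise model, and the standard-normal KL prior are all assumptions of this sketch, which only shows the common ingredient of evaluating the reconstruction term on observed entries alone so that missing values do not corrupt training.

```python
import numpy as np

def masked_gaussian_recon_loss(x, x_hat, observed_mask):
    """Mean squared reconstruction error over OBSERVED entries only.

    x, x_hat:      (T, D) arrays - true and reconstructed sequences.
    observed_mask: (T, D) binary array, 1 where the value was measured.
    Entries with mask == 0 (missing values) contribute nothing.
    """
    sq_err = (x - x_hat) ** 2 * observed_mask
    return sq_err.sum() / np.maximum(observed_mask.sum(), 1.0)

def kl_standard_normal(mu, logvar):
    """KL(q(z|x) || N(0, I)), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

# Toy check: a 20-step, 4-variable sequence with ~30% missing values.
rng = np.random.default_rng(0)
x = rng.normal(size=(20, 4))
mask = (rng.random((20, 4)) > 0.3).astype(float)   # 1 = observed
x_hat = x + 0.1 * rng.normal(size=(20, 4))         # imperfect reconstruction
loss = masked_gaussian_recon_loss(x, x_hat, mask)
```

In a VAE-style objective this masked reconstruction term would be traded off against the KL term, which is where constraints on the latent bottleneck capacity of the kind the abstract mentions would enter.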
Published in: | IEEE Transactions on Neural Networks and Learning Systems, 2024-10, p.1-14 |
---|---|
Main authors: | Dai, Qingyang; Zhao, Chunhui; Huang, Biao |
Format: | Article |
Language: | English |
Subjects: | Data mining; Data models; Dynamic data modeling; Feature extraction; Hidden Markov models; Imputation; industrial process monitoring; Interference; missing data; Optimization; Probabilistic logic; Representation learning; Self-supervised learning; variational autoencoders (VAEs) |
Online access: | Order full text |
container_end_page | 14 |
---|---|
container_issue | |
container_start_page | 1 |
container_title | IEEE Transactions on Neural Networks and Learning Systems |
container_volume | |
creator | Dai, Qingyang; Zhao, Chunhui; Huang, Biao |
description | Due to sensor malfunctions and data transmission corruptions, the industrial process data collected commonly contain missing values. It poses a significant challenge for data-driven approaches in aggregating temporal-spatial correlations that reflect dependencies across both variables and times, which makes it difficult to directly carry out downstream industrial operational applications. In this study, a self-supervised representation learning model is proposed to extract probabilistic temporal-spatial latent variables (LVs) from sequential data under missing value interference. The extracted LVs can be utilized for typical industrial operational applications through a unified framework. First, a novel deep dynamic probabilistic latent variable model, named Markov dynamic variational autoencoder (MD-VAE), is proposed to explicitly model the temporal-spatial dependencies between LVs. The latent posteriors are Bayesian smoothed by global sequence information for effective variational inference (VI). Second, a self-supervised learning approach, termed masked MD-VAE (M²D-VAE), is proposed to address the challenge of directly extracting temporal-spatial LVs under missing value interference. Controllable constraints with practical interpretations are introduced to balance the latent bottleneck capacity with reconstruction accuracy during model optimization. A unified framework is proposed to utilize the latent representations for typical industrial downstream tasks. Case studies conducted on a real-world multiphase flow process demonstrate the superiority of M²D-VAE in unsupervised industrial operational applications including missing value imputation and dynamic process monitoring under missing value interference. |
doi_str_mv | 10.1109/TNNLS.2024.3477968 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2162-237X |
ispartof | IEEE Transactions on Neural Networks and Learning Systems, 2024-10, p.1-14 |
issn | 2162-237X 2162-2388 |
language | eng |
recordid | cdi_ieee_primary_10731982 |
source | IEEE Electronic Library (IEL) |
subjects | Data mining; Data models; Dynamic data modeling; Feature extraction; Hidden Markov models; Imputation; industrial process monitoring; Interference; missing data; Optimization; Probabilistic logic; Representation learning; Self-supervised learning; variational autoencoders (VAEs) |
title | M²D-VAE: Self-Supervised Probabilistic Temporal-Spatial Latent Representation Learning for Unsupervised Industrial Operational Applications Under Missing Value Interference |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-15T18%3A04%3A42IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-crossref_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=M%20%5E%20D-VAE:%20Self-Supervised%20Probabilistic%20Temporal-%20Spatial%20Latent%20Representation%20Learning%20for%20Unsupervised%20Industrial%20Operational%20Applications%20Under%20Missing%20Value%20Interference&rft.jtitle=IEEE%20transaction%20on%20neural%20networks%20and%20learning%20systems&rft.au=Dai,%20Qingyang&rft.date=2024-10-22&rft.spage=1&rft.epage=14&rft.pages=1-14&rft.issn=2162-237X&rft.eissn=2162-2388&rft.coden=ITNNAL&rft_id=info:doi/10.1109/TNNLS.2024.3477968&rft_dat=%3Ccrossref_RIE%3E10_1109_TNNLS_2024_3477968%3C/crossref_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=10731982&rfr_iscdi=true |