Attention-Aware Encoder-Decoder Neural Networks for Heterogeneous Graphs of Things


Published in: IEEE Transactions on Industrial Informatics, 2021-04, Vol. 17 (4), p. 2890-2898
Authors: Li, Yangfan; Chen, Cen; Duan, Mingxing; Zeng, Zeng; Li, Kenli
Format: Article
Language: English
Abstract: Recent trends focus on using the heterogeneous graph of things (HGoT) to represent things and their relations in the Internet of Things, thereby facilitating the application of advanced learning frameworks such as deep learning (DL). Nevertheless, this is a challenging task, since existing DL models struggle to accurately express the complex semantics and attributes of the heterogeneous nodes and links in an HGoT. To address this issue, we develop attention-aware encoder-decoder graph neural networks for HGoT, termed HGAED. Specifically, we use an attention-based separate-and-merge method to improve accuracy, and we leverage an encoder-decoder architecture for the implementation. At the heart of HGAED, the separate-and-merge processes are encapsulated into encoding and decoding blocks; these blocks are stacked to construct an encoder-decoder architecture that jointly and hierarchically fuses the heterogeneous structures and node contents. Extensive experiments on three real-world datasets demonstrate the superior performance of HGAED over state-of-the-art baselines.
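The abstract's "separate-and-merge" idea — aggregate neighbors within each edge type separately, then fuse the per-type results with attention — can be illustrated with a minimal sketch. This is not the paper's implementation; the function names, mean-pooling aggregator, and per-type attention vectors are illustrative assumptions about the general technique.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def separate_and_merge(node_feats, neighbors_by_type, attn_vecs):
    """Illustrative separate-and-merge update for a single node.

    node_feats:        dict node_id -> feature vector
    neighbors_by_type: dict edge_type -> list of neighbor node_ids
    attn_vecs:         dict edge_type -> learned attention vector
                       (here just fixed arrays, for illustration)
    """
    # "Separate": aggregate neighbors within each edge type independently
    # (mean pooling stands in for any per-type aggregator).
    per_type = {}
    for etype, nbrs in neighbors_by_type.items():
        if nbrs:
            per_type[etype] = np.mean([node_feats[n] for n in nbrs], axis=0)
    # "Merge": attention-weighted fusion across edge types, so the model
    # can weight heterogeneous relations differently per node.
    types = list(per_type)
    scores = np.array([attn_vecs[t] @ per_type[t] for t in types])
    weights = softmax(scores)
    return sum(w * per_type[t] for w, t in zip(weights, types))
```

In the paper, such blocks are stacked into an encoder-decoder; the sketch above corresponds to one aggregation step inside a single block.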
DOI: 10.1109/TII.2020.3025592
ISSN: 1551-3203
EISSN: 1941-0050
Source: IEEE Electronic Library (IEL)
Subjects: Coders; Computer architecture; Decoding; Encoding; Fuses; Graph neural network (GNN); Graph neural networks; graph of things; heterogeneous graph; Informatics; Internet of Things; Internet of Things (IoT); Neural networks; Nodes; Semantics