Reverse Graph Learning for Graph Neural Network
Graph neural networks (GNNs) perform feature learning by preserving the local structure of the data to produce discriminative features, but they must address two issues: 1) the initial graph often contains faulty and missing edges, which degrades feature learning, and 2) most GNN methods suffer from the out-of-sample problem, since their training does not directly yield a prediction model for unseen data points. In this work, we propose a reverse GNN model that learns the graph from the intrinsic space of the original data points, and we investigate a new out-of-sample extension method. As a result, the proposed method outputs a high-quality graph that improves feature learning, while the new out-of-sample extension makes our reverse GNN applicable to both supervised and semi-supervised learning. Experimental results on real-world datasets show that our method achieves competitive classification performance compared with state-of-the-art methods on semi-supervised node classification, out-of-sample extension, random edge attack, link prediction, and image retrieval.
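The abstract above is all of the paper that this record reproduces; the authors' actual algorithm is not shown here. As a rough, generic illustration of the underlying idea — building the graph from the data's own feature space rather than trusting a given (and possibly faulty) adjacency matrix, then propagating features over that graph — the NumPy sketch below constructs a k-nearest-neighbour graph and applies one symmetrically normalised graph-convolution step. The function names, the choice of k, and the single-layer setup are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch only (not the authors' code): build a graph from raw
# features with k-nearest neighbours, then run one normalised GCN-style step.
import numpy as np

def knn_graph(X, k=5):
    """Symmetric k-nearest-neighbour adjacency built from the feature space."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                      # exclude self-neighbours
    n = X.shape[0]
    A = np.zeros((n, n))
    nn = np.argsort(d2, axis=1)[:, :k]                # k closest points per row
    A[np.repeat(np.arange(n), k), nn.ravel()] = 1.0
    return np.maximum(A, A.T)                         # symmetrise

def gcn_layer(A, H, W):
    """One graph-convolution step: ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

# Toy usage: 10 points, 4 input features, 2 output features.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 4))
W = rng.normal(size=(4, 2))
A = knn_graph(X, k=3)
print(gcn_layer(A, X, W).shape)                       # (10, 2)
```

In the paper's setting, by contrast, the graph is learned jointly with the model and an explicit prediction model handles unseen points; the sketch fixes the graph once up front, which is the simplification being hedged here.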
Saved in:
Published in: | IEEE Transactions on Neural Networks and Learning Systems, 2024-04, Vol. 35 (4), p. 4530-4541 |
Main authors: | Peng, Liang; Hu, Rongyao; Kong, Fei; Gan, Jiangzhang; Mo, Yujie; Shi, Xiaoshuang; Zhu, Xiaofeng |
Format: | Article |
Language: | English |
Subjects: | Classification; Data models; Data points; Graph learning; graph neural network; Graph neural networks; Graph theory; Image edge detection; Image retrieval; Learning; Machine learning; Neural networks; out-of-sample extension; Prediction models; Predictive models; Representation learning; Retrieval; robust learning; Semi-supervised learning; Task analysis; Training |
Online access: | Order full text |
DOI: | 10.1109/TNNLS.2022.3161030 |
ISSN: | 2162-237X |
EISSN: | 2162-2388 |
PMID: | 35380973 |
Publisher: | IEEE (United States) |
Source: | IEEE Electronic Library (IEL) |