Deep Neural Network for DrawiNg Networks, (DNN)^2

By leveraging recent progress in stochastic gradient descent methods, several works have shown that graphs can be laid out efficiently by optimizing a tailored objective function. Meanwhile, Deep Learning (DL) techniques have achieved strong performance in many applications. We demonstrate that it is possible to use DL techniques to learn a graph-to-layout sequence of operations thanks to a graph-related objective function. In this paper, we present a novel graph drawing framework called (DNN)^2: Deep Neural Network for DrawiNg Networks. Our method uses Graph Convolution Networks to learn a model. Learning is achieved by optimizing a graph-topology-related loss function that evaluates the layouts generated by (DNN)^2 during training. Once trained, the (DNN)^2 model is able to quickly lay out any input graph. We evaluate (DNN)^2 experimentally and compare it statistically to optimization-based and regular graph layout algorithms. The results show that (DNN)^2 performs well and are encouraging, as the Deep Learning approach to Graph Drawing is novel and many leads for future work are identified.
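
The abstract only sketches the approach at a high level. The snippet below is a minimal, hypothetical illustration of the general pattern it describes: a Graph Convolution Network maps node features to 2D coordinates, and a topology-related loss drives the layout. The specific architecture, the stress-style loss over graph-theoretic distances, plain PyTorch/NetworkX, and the names `GCNLayout`, `stress_loss`, and `normalized_adjacency` are all assumptions for illustration, not the authors' actual implementation.

```python
# Hypothetical sketch only: the exact GCN architecture and "graph topology related"
# loss used by (DNN)^2 are not given here, so a stress-style loss is assumed.
import networkx as nx
import numpy as np
import torch
import torch.nn as nn


def normalized_adjacency(G: nx.Graph) -> torch.Tensor:
    """Symmetrically normalized adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A = torch.tensor(nx.to_numpy_array(G), dtype=torch.float32)
    A = A + torch.eye(A.shape[0])
    d_inv_sqrt = torch.diag(A.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ A @ d_inv_sqrt


class GCNLayout(nn.Module):
    """Two graph-convolution layers followed by a linear head emitting (x, y) per node."""

    def __init__(self, in_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden_dim)
        self.lin2 = nn.Linear(hidden_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, 2)

    def forward(self, a_hat: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(a_hat @ self.lin1(x))
        h = torch.relu(a_hat @ self.lin2(h))
        return self.head(h)  # n x 2 node coordinates


def stress_loss(pos: torch.Tensor, dists: torch.Tensor) -> torch.Tensor:
    """Stress: sum over node pairs of d_ij^{-2} * (||p_i - p_j|| - d_ij)^2."""
    diff = pos.unsqueeze(0) - pos.unsqueeze(1)
    # Small epsilon keeps the gradient of the norm finite on the diagonal.
    layout_d = (diff.pow(2).sum(dim=-1) + 1e-9).sqrt()
    off_diag = ~torch.eye(pos.shape[0], dtype=torch.bool)
    w = dists[off_diag].pow(-2)
    return (w * (layout_d[off_diag] - dists[off_diag]).pow(2)).sum()


# Toy run: overfit one small connected graph so the loss mechanics are visible.
G = nx.grid_2d_graph(6, 6)
a_hat = normalized_adjacency(G)
x = torch.eye(G.number_of_nodes())  # one-hot node features as a stand-in
dists = torch.tensor(np.asarray(nx.floyd_warshall_numpy(G)), dtype=torch.float32)

model = GCNLayout(in_dim=x.shape[1])
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    optimizer.zero_grad()
    loss = stress_loss(model(a_hat, x), dists)
    loss.backward()
    optimizer.step()
positions = model(a_hat, x).detach()  # final n x 2 layout
```

In the paper's setting a single trained model is applied to unseen graphs; the loop above fits one toy graph only to show how the loss shapes the coordinates.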

Bibliographic Details
Published in: arXiv.org, 2021-08
Authors: Giovannangeli, Loann; Lalanne, Frederic; Auber, David; Giot, Romain; Bourqui, Romain
Format: Article
Language: English
Subjects: Algorithms; Artificial neural networks; Convolution; Deep learning; Layouts; Machine learning; Neural networks; Topology optimization
EISSN: 2331-8422
Online access: Full text