Interactive Neural Painting

In the last few years, Neural Painting (NP) techniques have become capable of producing extremely realistic artworks. This paper advances the state of the art in this emerging research domain by proposing the first approach for Interactive NP. Considering a setting where a user looks at a scene and tries to reproduce it in a painting, our objective is to develop a computational framework that assists the user's creativity by suggesting the next strokes to paint, which can then be used to complete the artwork. To accomplish this task, we propose I-Paint, a novel method based on a conditional transformer Variational AutoEncoder (VAE) architecture with a two-stage decoder. To evaluate the proposed approach and stimulate research in this area, we also introduce two novel datasets. Our experiments show that our approach provides good stroke suggestions and compares favorably to the state of the art. Additional details, code and examples are available at https://helia95.github.io/inp-website.

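To make the described architecture more concrete, below is a minimal, hypothetical PyTorch sketch of a conditional transformer VAE with a two-stage decoder for stroke suggestion. The module structure, the stroke parameterization, the context length, and all dimensions are illustrative assumptions, not the authors' I-Paint implementation; the official code is linked from the project page above.

```python
# Minimal sketch of a conditional transformer VAE with a two-stage decoder for
# stroke suggestion (illustrative assumptions only, not the authors' I-Paint code).
import torch
import torch.nn as nn

STROKE_DIM = 8      # assumed stroke parameterization: position, size, angle, color
CTX_LEN = 16        # number of past strokes used as conditioning context (assumption)
D_MODEL = 128
LATENT_DIM = 32
N_SUGGEST = 4       # number of future strokes to suggest (assumption)


class ConditionalStrokeVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(STROKE_DIM, D_MODEL)
        enc_layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.to_mu = nn.Linear(D_MODEL, LATENT_DIM)
        self.to_logvar = nn.Linear(D_MODEL, LATENT_DIM)
        # Two-stage decoder (assumed split): stage 1 predicts coarse stroke
        # positions, stage 2 refines them into full stroke parameters.
        self.stage1 = nn.Sequential(
            nn.Linear(LATENT_DIM + D_MODEL, D_MODEL), nn.ReLU(),
            nn.Linear(D_MODEL, N_SUGGEST * 2),                  # coarse (x, y)
        )
        self.stage2 = nn.Sequential(
            nn.Linear(LATENT_DIM + D_MODEL + N_SUGGEST * 2, D_MODEL), nn.ReLU(),
            nn.Linear(D_MODEL, N_SUGGEST * STROKE_DIM),         # full stroke parameters
        )

    def forward(self, context):
        # context: (batch, CTX_LEN, STROKE_DIM) -- the user's previous strokes
        h = self.encoder(self.embed(context)).mean(dim=1)       # pooled context code
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        coarse = self.stage1(torch.cat([z, h], dim=-1))
        strokes = self.stage2(torch.cat([z, h, coarse], dim=-1))
        return strokes.view(-1, N_SUGGEST, STROKE_DIM), mu, logvar


# Usage: suggest the next strokes given the user's last CTX_LEN strokes.
model = ConditionalStrokeVAE()
past_strokes = torch.rand(1, CTX_LEN, STROKE_DIM)
suggested, mu, logvar = model(past_strokes)
print(suggested.shape)  # torch.Size([1, 4, 8])
```

The coarse-then-refine split is one plausible reading of a "two-stage decoder": the first stage commits to where the next strokes go, and the second conditions on those positions to fill in the remaining stroke attributes.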

Bibliographic Details
Published in: arXiv.org, 2023-07
Authors: Peruzzo, Elia; Menapace, Willi; Goel, Vidit; Arrigoni, Federica; Tang, Hao; Xu, Xingqian; Chopikyan, Arman; Orlov, Nikita; Hu, Yuxiao; Shi, Humphrey; Sebe, Nicu; Ricci, Elisa
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online access: Full text
DOI: 10.48550/arXiv.2307.16441
EISSN: 2331-8422