Saliency-dependent adaptive remeshing for cloth simulation

We propose a method for simulating cloth with meshes that are dynamically refined according to visual saliency. The premise is that the regions of an image a viewer actually looks at benefit from more detail than the rest. For a given scene, a low-resolution cloth mesh is first simulated and rendered into images in a preview stage. Pixel saliency values for these images are predicted by a pre-trained saliency prediction model, and these pixel saliencies are then translated into a vertex saliency on the corresponding meshes. Vertex saliency, together with camera positions and a number of geometric surface features, guides the dynamic remeshing used for simulation in the production stage. To build the saliency prediction model, images extracted from various videos of clothing scenes were used as training data: participants watched these videos while their eye motion was tracked, and a saliency map was generated from the eye motion data for each extracted video frame. Image feature vectors and map labels are fed to a Support Vector Machine to train the saliency prediction model. Our method greatly reduces the number of vertices and faces in the clothing model, yielding a speed-up of more than 3× for scenes with a single dressed character and more than 5× for multi-character scenes. The proposed technique can work together with view-dependency for offline simulation.
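To make the pipeline concrete, here is a minimal sketch of the pixel-to-vertex saliency transfer step, assuming a simple pinhole camera and ignoring occlusion. The function names, camera model, and rounding scheme are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

def project(vertices, K, R, t, width, height):
    """Project Nx3 world-space vertices to pixel coordinates.

    K: 3x3 intrinsics, R: 3x3 world-to-camera rotation, t: 3-vector
    translation. Returns Nx2 integer pixels and a mask for points
    inside the image and in front of the camera.
    """
    cam = vertices @ R.T + t               # world -> camera space
    uvw = cam @ K.T                        # camera -> homogeneous pixels
    px = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    inside = ((cam[:, 2] > 0)
              & (px[:, 0] >= 0) & (px[:, 0] < width)
              & (px[:, 1] >= 0) & (px[:, 1] < height))
    return px, inside

def vertex_saliency(vertices, saliency_map, K, R, t):
    """Sample the per-pixel saliency map at each vertex's projection."""
    h, w = saliency_map.shape
    px, inside = project(vertices, K, R, t, w, h)
    s = np.zeros(len(vertices))
    s[inside] = saliency_map[px[inside, 1], px[inside, 0]]  # row=y, col=x
    return s
```

A production version would presumably accumulate saliency over several preview frames and camera positions before remeshing, since the abstract says camera positions also guide the process.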
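The abstract states that vertex saliency, camera positions, and geometric features jointly guide the remeshing. One simple way to picture the saliency term is as a sizing field: higher saliency maps to a shorter target edge length, and edges exceeding their target are split. The interpolation bounds below and the omission of the camera and geometry terms are assumptions of this sketch.

```python
import numpy as np

def target_edge_length(saliency, l_coarse=0.10, l_fine=0.02):
    """Higher saliency -> shorter target edges (finer mesh)."""
    s = np.clip(saliency, 0.0, 1.0)
    return l_coarse + s * (l_fine - l_coarse)

def edges_to_split(vertices, edges, saliency):
    """Return indices of edges longer than the target at their endpoints.

    vertices: Nx3 positions, edges: Mx2 vertex indices,
    saliency: per-vertex saliency in [0, 1].
    """
    v0, v1 = vertices[edges[:, 0]], vertices[edges[:, 1]]
    length = np.linalg.norm(v1 - v0, axis=1)
    target = np.minimum(target_edge_length(saliency[edges[:, 0]]),
                        target_edge_length(saliency[edges[:, 1]]))
    return np.nonzero(length > target)[0]
```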
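For the training stage, the abstract describes feeding image feature vectors and eye-tracking-derived labels to a Support Vector Machine. A minimal scikit-learn version might look as follows; the toy feature extractor and the fixation threshold are stand-ins for whatever features the authors actually used.

```python
import numpy as np
from sklearn.svm import SVC

def extract_features(img):
    """Toy per-pixel features: RGB plus local intensity contrast (HxWx4)."""
    img = img.astype(float) / 255.0
    gray = img.mean(axis=2)
    pad = np.pad(gray, 1, mode="edge")     # crude 3x3 box filter
    local_mean = sum(pad[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
                     for dy in range(3) for dx in range(3)) / 9.0
    contrast = np.abs(gray - local_mean)
    return np.dstack([img, contrast])

def train_saliency_model(frames, fixation_maps, threshold=0.5):
    """frames: HxWx3 uint8 images; fixation_maps: matching HxW maps in [0,1]."""
    X, y = [], []
    for img, fix in zip(frames, fixation_maps):
        feats = extract_features(img)
        X.append(feats.reshape(-1, feats.shape[-1]))
        y.append((fix.reshape(-1) > threshold).astype(int))
    X, y = np.concatenate(X), np.concatenate(y)
    model = SVC(kernel="rbf", probability=True)  # predict_proba -> saliency
    model.fit(X, y)
    return model
```

At prediction time, `model.predict_proba(features)[:, 1]` reshaped back to H×W would give the per-pixel saliency map consumed by the transfer step above. Training an SVM on every pixel is expensive, so subsampling pixels would be sensible; that detail is not in the abstract.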

Bibliographic details
Published in: Textile Research Journal, 2021-03, Vol. 91 (5-6), pp. 480-495
Authors: Shi, Min; Ming, Hou; Liu, Yaning; Mao, Tianlu; Zhu, Dengming; Wang, Zhaoqi; Zhang, Fan
Format: Article
Language: English
Subjects: Apexes; Cloth; Eye; Feature extraction; Finite element method; Pixels; Prediction models; Predictions; Salience; Simulation; Support vector machines; Training; Video
Online access: Full text
Publisher: SAGE Publications, London, England
ISSN: 0040-5175
EISSN: 1746-7748
DOI: 10.1177/0040517520944248