Character navigation in dynamic environments based on optical flow

Steering and navigation are important components of character animation systems to enable them to autonomously move in their environment. In this work, we propose a synthetic vision model that uses visual features to steer agents through dynamic environments. Our agents perceive optical flow resulting from their relative motion with the objects of the environment. The optical flow is then segmented and processed to extract visual features such as the focus of expansion and time‐to‐collision. Then, we establish the relations between these visual features and the agent motion, and use them to design a set of control functions which allow characters to perform object‐dependent tasks, such as following, avoiding and reaching. Control functions are then combined to let characters perform more complex navigation tasks in dynamic environments, such as reaching a goal while avoiding multiple obstacles. Agent's motion is achieved by local minimization of these functions. We demonstrate the efficiency of our approach through a number of scenarios. Our work sets the basis for building a character animation system which imitates human sensorimotor actions. It opens new perspectives to achieve realistic simulation of human characters taking into account perceptual factors, such as the lighting conditions of the environment.
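
The pipeline outlined in the abstract, estimating the focus of expansion (FOE) and time‐to‐collision (TTC) from the perceived optical flow and then steering by locally minimizing a combination of per-task control functions, can be illustrated with textbook formulations. The sketch below is not the authors' implementation: the function names, the least-squares FOE fit, the radial-flow TTC approximation, the toy cost functions and the candidate-grid minimization are all illustrative assumptions.

import numpy as np

def estimate_foe(points, flows):
    # Least-squares focus of expansion: each flow vector should be parallel to the
    # ray from the FOE to its sample point, i.e. the 2D cross product f x (p - foe) ~ 0.
    # fx*(py - foe_y) - fy*(px - foe_x) = 0  =>  fy*foe_x - fx*foe_y = fy*px - fx*py
    A = np.stack([flows[:, 1], -flows[:, 0]], axis=1)
    b = flows[:, 1] * points[:, 0] - flows[:, 0] * points[:, 1]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

def time_to_collision(points, flows, foe):
    # Classic approximation: image distance from the FOE divided by the radial
    # (expansion) component of the flow at each tracked point.
    radial = points - foe
    dist = np.linalg.norm(radial, axis=1)
    expansion = np.sum(flows * radial, axis=1) / np.maximum(dist, 1e-9)
    return dist / np.maximum(expansion, 1e-9)

def steer(control_costs, weights, candidates):
    # Combine per-task control functions as a weighted sum and pick the candidate
    # command (speed, turn rate) with the lowest combined cost.
    def total_cost(u):
        return sum(w * c(u) for w, c in zip(weights, control_costs))
    return min(candidates, key=total_cost)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(-1.0, 1.0, size=(50, 2))
    flows = 0.1 * (pts - np.array([0.2, 0.0]))       # synthetic pure-expansion flow field
    foe = estimate_foe(pts, flows)
    ttc = time_to_collision(pts, flows, foe)
    print("estimated FOE:", foe, "min TTC:", ttc.min())

    reach = lambda u: (u[0] - 1.2) ** 2 + u[1] ** 2  # prefer full speed, straight to the goal
    avoid = lambda u: (u[1] - 0.4) ** 2              # illustrative: prefer turning away from the obstacle
    candidates = [(s, w) for s in (0.6, 1.0, 1.4) for w in (-0.4, 0.0, 0.4)]
    print("chosen command:", steer([reach, avoid], [1.0, 0.5], candidates))

In the paper the control functions are derived from the visual features themselves and the agent's motion is obtained by local minimization of their combination; the coarse candidate grid above only stands in for that minimization step.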

Bibliographic details
Published in: Computer graphics forum, 2019-05, Vol.38 (2), p.181-192
Main authors: López, Axel; Chaumette, François; Marchand, Eric; Pettré, Julien
Format: Article
Language: English
Subjects:
Online access: Full text
container_end_page 192
container_issue 2
container_start_page 181
container_title Computer graphics forum
container_volume 38
creator López, Axel; Chaumette, François; Marchand, Eric; Pettré, Julien
description Steering and navigation are important components of character animation systems to enable them to autonomously move in their environment. In this work, we propose a synthetic vision model that uses visual features to steer agents through dynamic environments. Our agents perceive optical flow resulting from their relative motion with the objects of the environment. The optical flow is then segmented and processed to extract visual features such as the focus of expansion and time‐to‐collision. Then, we establish the relations between these visual features and the agent motion, and use them to design a set of control functions which allow characters to perform object‐dependent tasks, such as following, avoiding and reaching. Control functions are then combined to let characters perform more complex navigation tasks in dynamic environments, such as reaching a goal while avoiding multiple obstacles. Agent's motion is achieved by local minimization of these functions. We demonstrate the efficiency of our approach through a number of scenarios. Our work sets the basis for building a character animation system which imitates human sensorimotor actions. It opens new perspectives to achieve realistic simulation of human characters taking into account perceptual factors, such as the lighting conditions of the environment.
doi_str_mv 10.1111/cgf.13629
format Article
fulltext fulltext
identifier ISSN: 0167-7055
ispartof Computer graphics forum, 2019-05, Vol.38 (2), p.181-192
issn 0167-7055; 1467-8659 (eISSN)
language eng
recordid cdi_hal_primary_oai_HAL_hal_02052554v1
source Wiley Online Library Journals Frontfile Complete; Business Source Complete
subjects Animation
CCS Concepts
Computer Science
Computer simulation
Computing methodologies → Modeling and simulation
Enhanced vision
Feature extraction
Graphics
Model development and analysis
Model verification and validation
Navigation
Object motion
Optical flow (image analysis)
Robotics
Software reviews
Steering
Task complexity
Visual perception
title Character navigation in dynamic environments based on optical flow