Augmented reality navigation method based on indoor natural scene image deep learning

The invention discloses an augmented reality navigation method based on deep learning on indoor natural scene images. The method comprises the following steps: first, scanning the indoor natural scene with a three-dimensional laser scanner to extract three-dimensional scene feature recognition points; calculating the intrinsic (internal reference) matrix of a smartphone camera; collecting indoor natural scene images with the smartphone to extract two-dimensional image feature recognition points; and establishing a topological network graph of the indoor scene from an indoor floor plan. The two-dimensional image feature points, the three-dimensional scene feature points, and the topological-network path nodes are then bound and mapped to one another through specific descriptors. Deep-learning-based image classification is applied to the indoor scene images acquired by the smartphone, segmenting the indoor natural scene into a number of sub-scenes; and then tracking and recovering the thre…
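The "internal reference matrix" in the abstract is the camera intrinsic matrix of the standard pinhole model. The patent record does not disclose the calibration procedure, so the sketch below only illustrates what such a matrix looks like and how it maps a 3D scene point (in camera coordinates) to 2D pixel coordinates; all numeric values are made-up examples, not from the patent.

```python
# Sketch of pinhole projection with a camera intrinsic ("internal
# reference") matrix K. Illustrative values only, not from the patent.

def make_intrinsics(fx, fy, cx, cy):
    """3x3 intrinsic matrix from focal lengths (fx, fy) and principal point (cx, cy)."""
    return [[fx, 0.0, cx],
            [0.0, fy, cy],
            [0.0, 0.0, 1.0]]

def project(K, point_3d):
    """Project a 3D point in camera coordinates (Z > 0) to pixel coordinates."""
    x, y, z = point_3d
    u = K[0][0] * x / z + K[0][2]  # u = fx * X/Z + cx
    v = K[1][1] * y / z + K[1][2]  # v = fy * Y/Z + cy
    return u, v

K = make_intrinsics(fx=1000.0, fy=1000.0, cx=640.0, cy=360.0)
print(project(K, (0.5, 0.2, 2.0)))  # a scene point 2 m in front of the camera
```

Binding 2D image feature points to 3D scene points, as the abstract describes, presupposes exactly this kind of projection model linking the two coordinate systems.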

Detailed description
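The abstract's "topological network structure chart" built from an indoor floor plan is, in essence, a graph whose vertices are path nodes (rooms, corridor waypoints) and whose edges are walkable connections, over which a navigation route can be searched. A minimal sketch, assuming a hand-built adjacency list with hypothetical node names (the patent record does not specify the graph construction):

```python
# Minimal indoor topological network: path nodes as vertices, walkable
# connections as edges. Node names are hypothetical examples.
from collections import deque

corridor_graph = {
    "entrance":   ["corridor_a"],
    "corridor_a": ["entrance", "room_101", "corridor_b"],
    "corridor_b": ["corridor_a", "room_102", "stairs"],
    "room_101":   ["corridor_a"],
    "room_102":   ["corridor_b"],
    "stairs":     ["corridor_b"],
}

def shortest_route(graph, start, goal):
    """Breadth-first search over path nodes; returns a node list or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_route(corridor_graph, "entrance", "room_102"))
# ['entrance', 'corridor_a', 'corridor_b', 'room_102']
```

In the patented method, recognizing which sub-scene the smartphone image belongs to would localize the user at one of these path nodes, from which a route like the one above can be rendered as AR guidance.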

Bibliographic details
Main authors: SUN JIAXIN; TANG HAOCHEN; BO YINGJIE; LIU WEIWEI; CAO XINGWEN; LIAO ZONGYU; WU MENGQUAN; ZHANG CONGYING; ZHANG WENLIANG; TUO MINGYI; ZHAO ZIQI; NING XIANGYU; ZHOU HUILIN
Format: Patent
Language: Chinese; English
Patent number: CN111126304A (published 2020-05-08)
Online access: order full text
Record ID: cdi_epo_espacenet_CN111126304A
Source: esp@cenet
Subjects: CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
GYROSCOPIC INSTRUMENTS
HANDLING RECORD CARRIERS
IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
MEASURING
MEASURING ANGLES
MEASURING AREAS
MEASURING DISTANCES, LEVELS OR BEARINGS
MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS
NAVIGATION
PHOTOGRAMMETRY OR VIDEOGRAMMETRY
PHYSICS
PRESENTATION OF DATA
RECOGNITION OF DATA
RECORD CARRIERS
SURVEYING
TESTING