Development of a Virtual Environment for Rapid Generation of Synthetic Training Images for Artificial Intelligence Object Recognition

In the field of machine learning and computer vision, the lack of annotated datasets is a major challenge for model development and accuracy improvement. Synthetic data generation addresses this issue by providing large, diverse, and accurately annotated datasets, thereby enhancing model training and validation. This study presents a Unity-based virtual environment that utilises the Unity Perception package to generate high-quality datasets. First, high-precision 3D (Three-Dimensional) models are created using a 3D structured light scanner, with textures processed to remove specular reflections. These models are then imported into Unity to generate diverse and accurately annotated synthetic datasets. The experimental results indicate that object recognition models trained with synthetic data achieve a high rate of performance on real images, validating the effectiveness of synthetic data in improving model generalisation and application performance. Monocular distance measurement verification shows that the synthetic data closely matches real-world physical scales, confirming its visual realism and physical accuracy.
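The abstract notes that scanned textures are processed to remove specular reflections before import into Unity, but does not describe the method. As a rough illustration of the general idea only (not the authors' pipeline), the sketch below treats a grayscale texture as a list of pixel rows and replaces near-saturated highlight pixels with the median of their non-saturated neighbours; real pipelines operate on colour textures with more robust highlight detection.

```python
def suppress_highlights(texture, threshold=240, radius=2):
    """Replace near-saturated (likely specular) pixels with the median of
    non-saturated neighbours in a (2*radius+1)^2 window.

    Illustrative sketch only: the paper does not detail its
    texture-processing method. `texture` is a list of rows of 0-255
    intensity values; a new image is returned, the input is untouched."""
    h, w = len(texture), len(texture[0])
    out = [row[:] for row in texture]
    for y in range(h):
        for x in range(w):
            if texture[y][x] < threshold:
                continue  # not a highlight candidate
            # gather diffuse (non-saturated) neighbours around (y, x)
            neighbours = [
                texture[j][i]
                for j in range(max(0, y - radius), min(h, y + radius + 1))
                for i in range(max(0, x - radius), min(w, x + radius + 1))
                if texture[j][i] < threshold
            ]
            if neighbours:  # fill from the surrounding diffuse surface
                neighbours.sort()
                out[y][x] = neighbours[len(neighbours) // 2]
    return out
```

The median fill is a deliberately simple stand-in for whatever inpainting or cross-polarisation approach the scanning pipeline actually uses.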

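The abstract also reports a monocular distance measurement check confirming that synthetic scenes match real-world physical scale. The textbook relation behind such a check is the pinhole model, Z = f * H / h: distance equals focal length (in pixels) times known object height divided by the object's height in the image. The function below is a generic illustration of that relation; its name and parameters are not taken from the paper, whose exact verification procedure may differ.

```python
def monocular_distance(focal_length_px, real_height_m, image_height_px):
    """Pinhole-camera distance estimate Z = f * H / h.

    focal_length_px: focal length expressed in pixels
    real_height_m:   known physical height of the object (metres)
    image_height_px: height of the object in the image (pixels)

    Generic textbook relation used to illustrate the kind of scale
    check the abstract describes; not the authors' implementation."""
    if image_height_px <= 0:
        raise ValueError("image_height_px must be positive")
    return focal_length_px * real_height_m / image_height_px
```

For example, an object 0.10 m tall that spans 100 px under a 1000 px focal length is estimated to lie 1.0 m from the camera; comparing such estimates against the known camera-to-object distances in the virtual scene is one way to confirm physical scale.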
Detailed Description

Saved in:
Bibliographic Details
Published in: Electronics (Basel), 2024-12, Vol. 13 (23), p. 4740
Main Authors: Wang, Chenyu; Tinsley, Lawrence; Honarvar Shakibaei Asli, Barmak
Format: Article
Language: English
Online Access: Full text
DOI: 10.3390/electronics13234740
Publisher: MDPI AG, Basel
Rights: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. Open access article distributed under the terms of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
ISSN: 2079-9292
EISSN: 2079-9292
Source: MDPI - Multidisciplinary Digital Publishing Institute; EZB-FREE-00999 freely available EZB journals
Subjects:
Accuracy
Algorithms
Annotations
Artificial intelligence
Computer vision
Costs
Datasets
Distance measurement
Flexibility
Games
Labeling
Machine learning
Machine vision
Neural networks
Object recognition
Realism
Researchers
Robotics
Scanning devices
Semantics
Synthetic data
Texture recognition
Virtual environments