Advanced Camera Image Cropping Approach for CNN-Based End-to-End Controls on Sustainable Computing


Detailed description

Saved in:
Bibliographic details
Published in: Sustainability 2018-03, Vol.10 (3), p.816
Main authors: Sung, Yunsick; Jin, Yong; Kwak, Jeonghoon; Lee, Sang-Geol; Cho, Kyungeun
Format: Article
Language: English
Subjects:
Online access: Full text
Description: Recent research on deep learning has been applied to a diversity of fields. In particular, numerous studies have been conducted on self-driving vehicles using end-to-end approaches based on images captured by a single camera. End-to-end controls learn the output vectors of output devices directly from the input vectors of available input devices. In other words, an end-to-end approach learns not by analyzing the meaning of input vectors, but by extracting optimal output vectors based on input vectors. Generally, when end-to-end control is applied to self-driving vehicles, the steering wheel and pedals are controlled autonomously by learning from the images captured by a camera. However, high-resolution images captured from a car cannot be directly used as inputs to Convolutional Neural Networks (CNNs) owing to memory limitations; the image size needs to be efficiently reduced. Therefore, it is necessary to extract features from captured images automatically and to generate input images by merging the parts of the images that contain the extracted features. This paper proposes a learning method for end-to-end control that generates input images for CNNs by extracting road parts from input images, identifying the edges of the extracted road parts, and merging the parts of the images that contain the detected edges. In addition, a CNN model for end-to-end control is introduced. Experiments involving the Open Racing Car Simulator (TORCS), a sustainable computing environment for cars, confirmed the effectiveness of the proposed method for self-driving by comparing the accumulated difference in the angle of the steering wheel in the images generated by it with those of resized images containing the entire captured area and cropped images containing only a part of the captured area. The results showed that the proposed method reduced the accumulated difference by 0.839% and 0.850% compared to those yielded by the resized images and cropped images, respectively.
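The method the description summarizes is an image-reduction pipeline: detect the edges of the road region, keep only the image parts containing those edges, and merge them into a smaller fixed-size CNN input, then compare models by the accumulated steering-angle difference. A minimal NumPy sketch of that idea follows; the gradient threshold, the fixed output height, and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def detect_edge_rows(gray, thresh=40):
    """Return indices of rows containing strong horizontal gradients,
    a crude stand-in for the paper's road-edge detection step."""
    gx = np.abs(np.diff(gray.astype(np.int32), axis=1))  # horizontal finite differences
    return np.where((gx > thresh).any(axis=1))[0]

def crop_and_merge(gray, target_rows=64):
    """Keep only the rows that contain detected edges, then trim or
    zero-pad to a fixed height so the result fits a CNN input layer."""
    rows = detect_edge_rows(gray)
    if rows.size == 0:  # no edges found: fall back to the full frame
        rows = np.arange(gray.shape[0])
    merged = gray[rows]
    if merged.shape[0] >= target_rows:
        return merged[:target_rows]
    pad = np.zeros((target_rows - merged.shape[0], gray.shape[1]), dtype=gray.dtype)
    return np.vstack([merged, pad])

def accumulated_angle_difference(predicted, reference):
    """Sum of absolute steering-angle differences, the comparison
    metric described in the abstract."""
    return float(np.abs(np.asarray(predicted) - np.asarray(reference)).sum())

# Toy grayscale frame: a dark "road" stripe on a bright background.
frame = np.full((120, 160), 200, dtype=np.uint8)
frame[70:110, 40:120] = 30
print(crop_and_merge(frame).shape)                            # (64, 160)
print(accumulated_angle_difference([1.0, 2.0], [0.5, 1.5]))   # 1.0
```

In the paper the evaluation is run inside TORCS; the sketch above only illustrates the shape of the computation, not the simulator integration.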
DOI: 10.3390/su10030816
Publisher: MDPI AG (Basel)
Published: 2018-03-15
ORCID: https://orcid.org/0000-0003-2219-0848
Rights: Licensed under https://creativecommons.org/licenses/by/4.0/ (CC BY 4.0)
ISSN: 2071-1050
eISSN: 2071-1050
Source: MDPI - Multidisciplinary Digital Publishing Institute; EZB-FREE-00999 freely available EZB journals
Subjects:
Automobiles
Automotive parts
Autonomous cars
Cameras
Computation
Computer simulation
Driving
Image detection
Input devices
Neural networks
Pedals
Race cars
Sustainability
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-01T15%3A12%3A21IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Advanced%20Camera%20Image%20Cropping%20Approach%20for%20CNN-Based%20End-to-End%20Controls%20on%20Sustainable%20Computing&rft.jtitle=Sustainability&rft.au=Sung,%20Yunsick&rft.date=2018-03-15&rft.volume=10&rft.issue=3&rft.spage=816&rft.pages=816-&rft.issn=2071-1050&rft.eissn=2071-1050&rft_id=info:doi/10.3390/su10030816&rft_dat=%3Cproquest_cross%3E2110080259%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2110080259&rft_id=info:pmid/&rfr_iscdi=true