Toward Extremely Lightweight Distracted Driver Recognition With Distillation-Based Neural Architecture Search and Knowledge Transfer
The number of traffic accidents has been continuously increasing in recent years worldwide. Many accidents are caused by distracted drivers, who take their attention away from driving. Motivated by the success of Convolutional Neural Networks (CNNs) in computer vision, many researchers developed CNN...
Saved in:
Published in: | arXiv.org 2023-02 |
---|---|
Main authors: | Liu, Dichao; Yamasaki, Toshihiko; Wang, Yu; Mase, Kenji; Kato, Jien |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Liu, Dichao; Yamasaki, Toshihiko; Wang, Yu; Mase, Kenji; Kato, Jien |
description | The number of traffic accidents has been continuously increasing in recent years worldwide. Many accidents are caused by distracted drivers, who take their attention away from driving. Motivated by the success of Convolutional Neural Networks (CNNs) in computer vision, many researchers developed CNN-based algorithms to recognize distracted driving from a dashcam and warn the driver against unsafe behaviors. However, current models have too many parameters, which is unfeasible for vehicle-mounted computing. This work proposes a novel knowledge-distillation-based framework to solve this problem. The proposed framework first constructs a high-performance teacher network by progressively strengthening the robustness to illumination changes from shallow to deep layers of a CNN. Then, the teacher network is used to guide the architecture searching process of a student network through knowledge distillation. After that, we use the teacher network again to transfer knowledge to the student network by knowledge distillation. Experimental results on the Statefarm Distracted Driver Detection Dataset and AUC Distracted Driver Dataset show that the proposed approach is highly effective for recognizing distracted driving behaviors from photos: (1) the teacher network's accuracy surpasses the previous best accuracy; (2) the student network achieves very high accuracy with only 0.42M parameters (around 55% of the previous most lightweight model). Furthermore, the student network architecture can be extended to a spatial-temporal 3D CNN for recognizing distracted driving from video clips. The 3D student network largely surpasses the previous best accuracy with only 2.03M parameters on the Drive&Act Dataset. The source code is available at https://github.com/Dichao-Liu/Lightweight_Distracted_Driver_Recognition_with_Distillation-Based_NAS_and_Knowledge_Transfer. |
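The abstract's framework leans on the standard knowledge-distillation objective: the student is trained against the teacher's temperature-softened output distribution blended with the ordinary hard-label loss. The sketch below illustrates that objective in plain Python; the function names and the temperature/alpha values are illustrative assumptions, not taken from the paper.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T softens the distribution,
    # exposing the teacher's "dark knowledge" about non-target classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label,
                      temperature=4.0, alpha=0.7):
    """Blend of soft-target cross-entropy (teacher -> student) and
    hard-label cross-entropy, as in standard knowledge distillation.
    temperature and alpha are hypothetical hyperparameters."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    # Soft loss, scaled by T^2 so its gradient magnitude stays
    # comparable to the hard loss as T grows.
    soft = -sum(ti * math.log(si) for ti, si in zip(t, s)) * temperature ** 2
    hard = -math.log(softmax(student_logits)[true_label])
    return alpha * soft + (1 - alpha) * hard
```

The same loss serves double duty in the described pipeline: during architecture search it scores candidate student networks by how well they can mimic the teacher, and afterwards it trains the selected student.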
doi_str_mv | 10.48550/arxiv.2302.04527 |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-02 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_2302_04527 |
source | Free eJournals; arXiv.org |
subjects | Accuracy; Algorithms; Artificial neural networks; Computer architecture; Computer Science - Computer Vision and Pattern Recognition; Computer vision; Datasets; Distillation; Distracted driving; Knowledge; Knowledge management; Lightweight; Mathematical models; Neural networks; Parameters; Recognition; Source code; Teachers; Traffic accidents |
title | Toward Extremely Lightweight Distracted Driver Recognition With Distillation-Based Neural Architecture Search and Knowledge Transfer |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-21T07%3A41%3A19IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Toward%20Extremely%20Lightweight%20Distracted%20Driver%20Recognition%20With%20Distillation-Based%20Neural%20Architecture%20Search%20and%20Knowledge%20Transfer&rft.jtitle=arXiv.org&rft.au=Liu,%20Dichao&rft.date=2023-02-09&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2302.04527&rft_dat=%3Cproquest_arxiv%3E2775127940%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2775127940&rft_id=info:pmid/&rfr_iscdi=true |