Driver distraction detection method

The invention discloses a driver distraction detection method. Each frame of the driver image is converted into a grayscale image. The method comprises the following steps: first, the grayscale images corresponding to the training samples are extracted and undergo normalization and preprocessing in sequence, and a training sample is input into an initialized convolutional neural network; the HOG features extracted from the grayscale images corresponding to the training samples are batch-normalized and then passed through a fully connected layer to obtain HOG feature vectors; finally, global mean pooling is applied to the output of each convolution layer, the pooled feature vectors and the HOG feature vector are combined into a total feature vector, and this vector is passed through a fully connected layer and Softmax classification in the convolutional neural network in sequence. The actual action category of the driver is then obtained...
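
The abstract above describes a two-branch fusion: hand-crafted HOG features are batch-normalized and projected through a fully connected layer, the output of every convolution layer is reduced by global mean pooling, and the concatenated total vector is classified with a fully connected layer and Softmax. The Python sketch below illustrates that structure only; OpenCV, scikit-image, and PyTorch, the 128x128 input size, the three-block backbone, the layer widths, and the ten output classes are assumptions made for illustration and are not specified by the patent record.

import cv2
import numpy as np
import torch
import torch.nn as nn
from skimage.feature import hog

def preprocess(frame_bgr: np.ndarray):
    """Grayscale conversion, normalization, and HOG extraction for one frame (illustrative sizes)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (128, 128)).astype(np.float32) / 255.0
    feat = hog(gray, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return gray, feat

class DistractionNet(nn.Module):
    """CNN branch plus HOG branch, fused by concatenation and classified with Softmax."""
    def __init__(self, hog_dim: int, num_classes: int = 10):
        super().__init__()
        # Illustrative three-block convolutional backbone over the grayscale frame.
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU()),
            nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU()),
            nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU()),
        ])
        self.gap = nn.AdaptiveAvgPool2d(1)  # global mean pooling applied to each convolution layer's output
        # Batch-normalize the HOG descriptor, then map it to an HOG feature vector with a fully connected layer.
        self.hog_branch = nn.Sequential(nn.BatchNorm1d(hog_dim), nn.Linear(hog_dim, 64), nn.ReLU())
        # Fully connected classifier over the total (pooled CNN + HOG) feature vector.
        self.classifier = nn.Linear(16 + 32 + 64 + 64, num_classes)

    def forward(self, gray_image: torch.Tensor, hog_feat: torch.Tensor) -> torch.Tensor:
        pooled = []
        x = gray_image
        for block in self.blocks:
            x = block(x)
            pooled.append(self.gap(x).flatten(1))          # one pooled vector per convolution layer
        total = torch.cat(pooled + [self.hog_branch(hog_feat)], dim=1)
        return torch.softmax(self.classifier(total), dim=1)  # predicted action-category probabilities

# Illustrative use on a single frame (eval mode so BatchNorm accepts a batch of one):
#   gray, feat = preprocess(cv2.imread("driver.jpg"))
#   model = DistractionNet(hog_dim=feat.size).eval()
#   probs = model(torch.from_numpy(gray)[None, None], torch.from_numpy(feat)[None].float())

The "driver.jpg" path and all layer dimensions above are hypothetical; the patent trains the network on the preprocessed training samples, whereas this sketch only shows the forward pass of the fused feature vector.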

Bibliographic Details
Main Authors: YAN DIQUN, QIAN JIANGBO, CHEN YEFANG, QIN BINBIN, DONG YIHONG
Format: Patent
Language: Chinese; English
Subjects: CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; HANDLING RECORD CARRIERS; PHYSICS; PRESENTATION OF DATA; RECOGNITION OF DATA; RECORD CARRIERS
Online Access: Order full text
Record ID: CN111626186A
Publication Date: 2020-09-04
Source: esp@cenet