Domain-Adaptive Online Active Learning for Real-Time Intelligent Video Analytics on Edge Devices

Deep learning (DL) for intelligent video analytics is increasingly pervasive in various application domains, ranging from healthcare to Industry 5.0. A significant trend involves deploying DL models on edge devices with limited resources. Techniques such as pruning, quantization, and early exit have demonstrated the feasibility of real-time inference at the edge by compressing and optimizing deep neural networks (DNNs). However, adapting pretrained models to new and dynamic scenarios remains a significant challenge. While solutions like domain adaptation, active learning (AL), and teacher-student knowledge distillation (KD) contribute to addressing this challenge, they often rely on cloud or well-equipped computing platforms for fine-tuning. In this study, we propose a framework for domain-adaptive online AL of DNN models tailored for intelligent video analytics on resource-constrained devices. Our framework employs a KD approach where both teacher and student models are deployed on the edge device. To determine when to retrain the student DNN model without ground truth or cloud-based teacher inference, our model utilizes singular value decomposition of input data. It implements the identification of key data frames and efficient retraining of the student through the teacher execution at the edge, aiming to prevent model overfitting. We evaluate the framework through two case studies: 1) human pose estimation and 2) car object detection, both implemented on an NVIDIA Jetson NX device.
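
The abstract describes two mechanisms that can be illustrated in code: an SVD-based check on incoming frames to decide when the input distribution has drifted enough to warrant retraining, and an on-device teacher-student distillation step that fine-tunes the student on the selected key frames. The sketches below are illustrative only: the use of raw-frame singular values, the drift threshold, the exponential reference update, and the MSE distillation loss are assumptions made for the example, not the exact method published in the paper.

```python
import numpy as np

def singular_value_signature(frame: np.ndarray, k: int = 16) -> np.ndarray:
    """Top-k singular values of a (grayscale) frame, normalised so the
    signature is insensitive to overall brightness and frame scale."""
    gray = frame.mean(axis=2) if frame.ndim == 3 else frame
    s = np.linalg.svd(gray.astype(np.float32), compute_uv=False)[:k]
    return s / (np.linalg.norm(s) + 1e-8)

class KeyFrameSelector:
    """Flag frames whose singular-value signature drifts away from a
    slowly updated reference, i.e. the scene (domain) has changed."""

    def __init__(self, threshold: float = 0.15, momentum: float = 0.9):
        self.threshold = threshold   # assumed drift threshold
        self.momentum = momentum     # smoothing factor for the reference
        self.reference = None

    def is_key_frame(self, frame: np.ndarray) -> bool:
        sig = singular_value_signature(frame)
        if self.reference is None:
            self.reference = sig
            return True              # first frame always triggers the teacher
        drift = float(np.linalg.norm(sig - self.reference))
        # Track gradual changes so only abrupt domain shifts are flagged.
        self.reference = self.momentum * self.reference + (1 - self.momentum) * sig
        return drift > self.threshold
```

When a key frame is flagged, the edge-resident teacher would be executed once to produce pseudo-labels, and the student would be updated for a small number of steps. Again, this is an assumed, minimal PyTorch sketch rather than the authors' implementation:

```python
import torch
import torch.nn.functional as F

def distill_on_key_frames(student, teacher, frames, optimizer, steps: int = 2):
    """Hypothetical online-distillation update: the edge-resident teacher
    produces pseudo-labels for the buffered key frames and the student is
    fine-tuned on them directly on the device (no ground truth, no cloud)."""
    teacher.eval()
    student.train()
    with torch.no_grad():
        targets = teacher(frames)        # pseudo-labels from the teacher
    loss = torch.zeros(())
    for _ in range(steps):               # few steps: bounded latency and
        optimizer.zero_grad()            # less risk of overfitting
        loss = F.mse_loss(student(frames), targets)
        loss.backward()
        optimizer.step()
    return float(loss)
```

Capping the number of update steps and the size of the key-frame buffer is one way to keep retraining latency bounded and to reduce the risk of overfitting the student to a handful of frames, which is the failure mode the abstract explicitly aims to prevent.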

Bibliographic Details
Published in: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2024-11, Vol. 43 (11), pp. 4105-4116
Authors: Boldo, Michele; De Marchi, Mirco; Martini, Enrico; Aldegheri, Stefano; Bombieri, Nicola
Format: Article
Language: English
Subjects: Active learning (AL); Adaptation models; Artificial neural networks; Computational modeling; Data models; edge AI; edge training; human pose estimation (HPE); Integrated circuit modeling; online distillation; Pose estimation; Real-time systems; real-time training; Streaming media; Training; Visual analytics
DOI: 10.1109/TCAD.2024.3453188
ISSN: 0278-0070
EISSN: 1937-4151
Online access: Full text