Hybrid Cross Deep Network for Domain Adaptation and Energy Saving in Visual Internet of Things
Recently, Visual Internet of Things (VIoT) has become a fast-growing field based on various applications. In this paper, we focus on two critical challenges for applications in VIoT, i.e., domain adaptation and energy saving. The images captured by various visual sensors in VIoT appear quite differe...
Published in: | IEEE internet of things journal 2019-08, Vol.6 (4), p.6026-6033 |
---|---|
Main authors: | Zhang, Zhong; Li, Donghong |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
container_end_page | 6033 |
---|---|
container_issue | 4 |
container_start_page | 6026 |
container_title | IEEE internet of things journal |
container_volume | 6 |
creator | Zhang, Zhong; Li, Donghong |
description | Recently, Visual Internet of Things (VIoT) has become a fast-growing field with a wide range of applications. In this paper, we focus on two critical challenges for applications in VIoT, i.e., domain adaptation and energy saving. The images captured by the various visual sensors in VIoT appear quite different due to changes in visual sensor locations, visual sensor settings, image resolutions, and illumination. Meanwhile, VIoT generates a large number of images, and transmitting the original images would consume considerable bandwidth. In order to effectively classify such images and save energy, we propose a novel deep model named the hybrid cross deep network (HCDN), which can learn domain-invariant and discriminative features for images in VIoT. The proposed HCDN is designed around two terms, the cross regularization loss and the classification loss, and is trained with images from different visual sensors. Specifically, the cross regularization loss selects triplet samples from the source domain and the target domain, and adopts a calibration parameter to align the difference between the two domains. We employ the vector extracted from the proposed HCDN to represent each image, which requires less storage than the original image. Energy consumption is reduced when we transmit such vectors, rather than full images, to the intelligent visual label system for image classification in VIoT. The proposed HCDN is verified on two domain adaptation datasets, and the experimental results prove its effectiveness. |
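The record describes the cross regularization loss only at a high level (triplet samples drawn from the source and target domains, plus a calibration parameter that aligns the two domains). The sketch below is a minimal, hypothetical illustration of such a triplet-style cross-domain term in plain Python; the names (`alpha`, `margin`, the anchor/positive/negative roles) and the exact form are illustrative assumptions, not the authors' formulation from the paper.

```python
# Hypothetical sketch of a cross-domain, triplet-style regularization term.
# NOT the HCDN loss from the paper; the record does not give its formula.

def squared_euclidean(u, v):
    """Squared Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def cross_regularization_loss(anchor_src, positive_tgt, negative_tgt,
                              alpha=1.0, margin=1.0):
    """Triplet margin loss over one source-domain anchor and two
    target-domain samples. `alpha` stands in for the calibration
    parameter: here it simply rescales the cross-domain distances
    before the margin comparison (an assumption for illustration)."""
    d_pos = alpha * squared_euclidean(anchor_src, positive_tgt)
    d_neg = alpha * squared_euclidean(anchor_src, negative_tgt)
    return max(0.0, d_pos - d_neg + margin)

# Example: an anchor close to its positive and far from its negative
# incurs zero loss; a badly ordered triplet incurs a positive penalty.
good = cross_regularization_loss([0.0, 0.0], [0.1, 0.0], [2.0, 2.0])
bad = cross_regularization_loss([0.0, 0.0], [2.0, 2.0], [0.1, 0.0])
```

The same structure also suggests why transmitting feature vectors saves energy: a short vector of floats is far smaller than the raw image it summarizes, so only the compact representation needs to leave the sensor.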
doi_str_mv | 10.1109/JIOT.2018.2867083 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2327-4662 |
ispartof | IEEE internet of things journal, 2019-08, Vol.6 (4), p.6026-6033 |
issn | 2327-4662 2327-4662 |
language | eng |
recordid | cdi_proquest_journals_2268432105 |
source | IEEE Electronic Library (IEL) |
subjects | Adaptation; Deep convolutional neural networks (CNNs); domain adaptation; Domains; Energy conservation; Energy consumption; energy saving; Energy storage; Energy transmission; Feature extraction; Image classification; Image sensors; Image transmission; Intelligent sensors; Internet of Things; Regularization; Sensors; Storage capacity; Task analysis; Visual Internet of Things (VIoT); Visualization |
title | Hybrid Cross Deep Network for Domain Adaptation and Energy Saving in Visual Internet of Things |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-11T08%3A34%3A31IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Hybrid%20Cross%20Deep%20Network%20for%20Domain%20Adaptation%20and%20Energy%20Saving%20in%20Visual%20Internet%20of%20Things&rft.jtitle=IEEE%20internet%20of%20things%20journal&rft.au=Zhang,%20Zhong&rft.date=2019-08-01&rft.volume=6&rft.issue=4&rft.spage=6026&rft.epage=6033&rft.pages=6026-6033&rft.issn=2327-4662&rft.eissn=2327-4662&rft.coden=IITJAU&rft_id=info:doi/10.1109/JIOT.2018.2867083&rft_dat=%3Cproquest_RIE%3E2268432105%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2268432105&rft_id=info:pmid/&rft_ieee_id=8445552&rfr_iscdi=true |