Adversarial Binary Mutual Learning for Semi-Supervised Deep Hashing

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2022-08, Vol. 33 (8), p. 4110-4124
Main authors: Wang, Guan'An; Hu, Qinghao; Yang, Yang; Cheng, Jian; Hou, Zeng-Guang
Format: Article
Language: English
description Hashing is a popular search technique thanks to its compact binary representation and efficient Hamming distance calculation. Benefiting from advances in deep learning, deep hashing methods have achieved promising performance. However, these methods usually learn from expensive labeled data and fail to utilize unlabeled data. Furthermore, the traditional pairwise loss used by these methods cannot explicitly force similar/dissimilar pairs to small/large distances. Both weaknesses limit existing methods' performance. To address the first problem, we propose a novel semi-supervised deep hashing model named adversarial binary mutual learning (ABML). Specifically, our ABML consists of a generative model G_{H} and a discriminative model D_{H}, where D_{H} learns from labeled data in a supervised way and G_{H} learns from unlabeled data by synthesizing real images. We adopt an adversarial learning (AL) strategy to transfer the knowledge of unlabeled data to D_{H} by making G_{H} and D_{H} mutually learn from each other. To address the second problem, we propose a novel Weibull cross-entropy (WCE) loss based on the Weibull distribution, which can distinguish tiny differences in distance and explicitly force similar/dissimilar distances to be as small/large as possible. The learned features are thus more discriminative. Finally, by combining ABML with the WCE loss, our model acquires more semantic and discriminative features. Extensive experiments on four common data sets (CIFAR-10, the MNIST database of handwritten digits, ImageNet-10, and NUS-WIDE) and the large-scale ImageNet data set demonstrate that our approach successfully overcomes the two difficulties above and significantly outperforms state-of-the-art hashing methods.
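The abstract's efficiency claim — binary hash codes compared by Hamming distance — can be sketched in a few lines of Python. This is a generic illustration, not the paper's method; the 8-bit codes below are made-up toy values:

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two hash codes stored as integers:
    XOR marks the differing bits, and the popcount counts them."""
    return bin(a ^ b).count("1")

# Toy database of 8-bit hash codes (hypothetical values).
database = [0b10110100, 0b10110111, 0b01001011]
query = 0b10110110

# Retrieval is a cheap nearest-neighbor scan in Hamming space.
ranked = sorted(database, key=lambda code: hamming(query, code))
```

In practice codes are typically 16-64 bits, and the XOR-plus-popcount pair maps to single machine instructions on modern CPUs, which is why Hamming-space retrieval scales to large image databases.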
doi_str_mv 10.1109/TNNLS.2021.3055834
identifier ISSN: 2162-237X
ispartof IEEE Transactions on Neural Networks and Learning Systems, 2022-08, Vol.33 (8), p.4110-4124
issn 2162-237X
eissn 2162-2388
language eng
recordid cdi_ieee_primary_9372892
source IEEE Electronic Library (IEL)
subjects Adversarial learning (AL)
Binary codes
Computational modeling
Data models
Datasets
Deep learning
Entropy (Information theory)
Force
Handwriting
Hash functions
hashing
Knowledge management
Learning
Machine learning
Search algorithms
Semantics
Training data
Transfer learning
Weibull distribution
title Adversarial Binary Mutual Learning for Semi-Supervised Deep Hashing
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-14T22%3A07%3A11IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Adversarial%20Binary%20Mutual%20Learning%20for%20Semi-Supervised%20Deep%20Hashing&rft.jtitle=IEEE%20transaction%20on%20neural%20networks%20and%20learning%20systems&rft.au=Wang,%20Guan'An&rft.date=2022-08-01&rft.volume=33&rft.issue=8&rft.spage=4110&rft.epage=4124&rft.pages=4110-4124&rft.issn=2162-237X&rft.eissn=2162-2388&rft.coden=ITNNAL&rft_id=info:doi/10.1109/TNNLS.2021.3055834&rft_dat=%3Cproquest_RIE%3E2697569598%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2697569598&rft_id=info:pmid/33684043&rft_ieee_id=9372892&rfr_iscdi=true