Cross-modal Zero-shot Hashing
Hashing has been widely studied for big data retrieval due to its low storage cost and fast query speed. Zero-shot hashing (ZSH) aims to learn a hashing model that is trained using only samples from seen categories, but generalizes well to samples of unseen categories. ZSH generally uses category attributes to seek a semantic embedding space that transfers knowledge from seen categories to unseen ones; as a result, it may perform poorly when labeled data are insufficient. ZSH methods are also mainly designed for single-modality data, which prevents their application to widespread multi-modal data. On the other hand, existing cross-modal hashing solutions assume that all modalities share the same category labels, while in practice the labels of different data modalities may differ. To address these issues, we propose a general Cross-modal Zero-shot Hashing (CZHash) solution that effectively leverages unlabeled and labeled multi-modality data with different label spaces. CZHash first quantifies the composite similarity between instances using label and feature information. It then defines an objective function that couples deep feature learning with composite similarity preserving, category attribute space learning, and hash coding function learning. CZHash further introduces an alternating optimization procedure to jointly optimize these learning objectives. Experiments on benchmark multi-modal datasets show that CZHash significantly outperforms related representative hashing approaches in both effectiveness and adaptability.
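As a rough illustration of the ingredients named in the abstract, the sketch below combines a composite similarity (label overlap when labels exist, feature similarity otherwise) with sign-based hash codes ranked by Hamming distance. This is a minimal toy, not the CZHash objective: the function names, the Jaccard/cosine choices, and the random embeddings standing in for learned deep features are all assumptions made for illustration.

```python
import numpy as np

def composite_similarity(labels_i, labels_j, feat_i, feat_j):
    # Hypothetical stand-in for CZHash's composite similarity: Jaccard overlap
    # of label sets when both instances are labeled, cosine similarity of
    # (learned) features otherwise. The paper's actual formulation differs.
    if labels_i and labels_j:
        return len(labels_i & labels_j) / len(labels_i | labels_j)
    return float(feat_i @ feat_j /
                 (np.linalg.norm(feat_i) * np.linalg.norm(feat_j)))

def hash_codes(embeddings):
    # Binarize continuous embeddings into +/-1 codes, the usual final step
    # of a learned hash coding function.
    return np.where(embeddings >= 0, 1, -1)

def hamming_rank(query_code, database_codes):
    # Rank database items by Hamming distance to the query; with +/-1 codes
    # a simple inequality count does the job.
    return np.argsort((query_code != database_codes).sum(axis=1))

# Toy usage: random matrices stand in for deep features of two modalities
# projected into a shared 8-bit code space.
rng = np.random.default_rng(0)
image_emb = rng.standard_normal((5, 8))   # "image" modality, 5 database items
text_emb = rng.standard_normal(8)         # one "text" query
print(composite_similarity({"cat"}, {"cat", "dog"}, image_emb[0], image_emb[1]))
print(hamming_rank(hash_codes(text_emb), hash_codes(image_emb)))
```

In the actual method, the continuous embeddings would come from the jointly trained deep networks rather than random draws, and the binarization would be learned alongside the similarity-preserving objective rather than applied as a fixed sign step.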
Saved in:

| Main Authors: | Liu, Xuanwu; Li, Zhao; Wang, Jun; Yu, Guoxian; Domeniconi, Carlotta; Zhang, Xiangliang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | 2019-08-19 |
| Subjects: | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning |
| Source: | arXiv.org |
| DOI: | 10.48550/arxiv.1908.07388 |
| Online Access: | https://arxiv.org/abs/1908.07388 |