Deep Coupled Metric Learning for Cross-Modal Matching
Saved in:
Published in: | IEEE transactions on multimedia 2017-06, Vol.19 (6), p.1234-1244 |
---|---|
Main authors: | Liong, Venice Erin ; Jiwen Lu ; Yap-Peng Tan ; Jie Zhou |
Format: | Article |
Language: | eng |
Subjects: | Artificial neural networks ; Correlation ; Coupled learning ; cross-modal matching ; deep model ; Infrared imagery ; Kernel ; Learning systems ; Machine learning ; Matching ; Measurement ; metric learning ; multimedia retrieval ; Neural networks ; Semantics ; Transformations (mathematics) |
Online access: | Order full text |
container_end_page | 1244 |
---|---|
container_issue | 6 |
container_start_page | 1234 |
container_title | IEEE transactions on multimedia |
container_volume | 19 |
creator | Liong, Venice Erin ; Jiwen Lu ; Yap-Peng Tan ; Jie Zhou |
description | In this paper, we propose a new deep coupled metric learning (DCML) method for cross-modal matching, which aims to match samples captured from two different modalities (e.g., texts versus images, visible versus near-infrared images). Unlike existing cross-modal matching methods, which learn a linear common space to reduce the modality gap, our DCML designs two feedforward neural networks that learn two sets of hierarchical nonlinear transformations (one set per modality) to map samples from the different modalities into a shared latent feature subspace in which the intraclass variation is minimized, the interclass variation is maximized, and the difference of each same-class data pair captured from the two modalities is minimized. Experimental results on four different cross-modal matching datasets validate the efficacy of the proposed approach. (A toy sketch of this objective appears after the record fields below.) |
doi_str_mv | 10.1109/TMM.2016.2646180 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1520-9210 |
ispartof | IEEE transactions on multimedia, 2017-06, Vol.19 (6), p.1234-1244 |
issn | 1520-9210 ; 1941-0077 |
language | eng |
recordid | cdi_ieee_primary_7801952 |
source | IEEE Electronic Library (IEL) |
subjects | Artificial neural networks ; Correlation ; Coupled learning ; cross-modal matching ; deep model ; Infrared imagery ; Kernel ; Learning systems ; Machine learning ; Matching ; Measurement ; metric learning ; multimedia retrieval ; Neural networks ; Semantics ; Transformations (mathematics) |
title | Deep Coupled Metric Learning for Cross-Modal Matching |
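The abstract describes the DCML objective in words; the following minimal Python/NumPy sketch illustrates its structure under stated assumptions. It is not the published implementation: the functions init_net, forward, scatter_terms, and dcml_objective, the layer sizes, the tanh nonlinearity, and the trade-off weights alpha and beta are hypothetical choices used only to show how two per-modality networks and the three loss terms (same-class cross-modal pair distances, intraclass scatter, interclass scatter) fit together in a shared latent subspace.

```python
# Hypothetical sketch of a DCML-style objective (not the authors' code).
# Two modality-specific feedforward networks map samples into a shared
# latent subspace; the objective rewards small same-class cross-modal pair
# distances, small intraclass scatter, and large interclass scatter.
# Layer sizes, tanh activations, and the weights alpha/beta are assumptions.
import numpy as np

rng = np.random.default_rng(0)


def init_net(dims):
    """One feedforward net as a list of (W, b) layers."""
    return [(rng.standard_normal((d_out, d_in)) * 0.1, np.zeros(d_out))
            for d_in, d_out in zip(dims[:-1], dims[1:])]


def forward(net, X):
    """Apply the hierarchical nonlinear transformations to samples (n, d)."""
    H = X
    for W, b in net:
        H = np.tanh(H @ W.T + b)
    return H


def scatter_terms(Z, y):
    """Intraclass scatter (to minimize) and interclass scatter (to maximize)."""
    mu = Z.mean(axis=0)
    intra = inter = 0.0
    for c in np.unique(y):
        Zc = Z[y == c]
        mu_c = Zc.mean(axis=0)
        intra += np.sum((Zc - mu_c) ** 2)
        inter += len(Zc) * np.sum((mu_c - mu) ** 2)
    return intra / len(Z), inter / len(Z)


def dcml_objective(net_x, net_y, X, Y, labels, alpha=1.0, beta=1.0):
    """Cross-modal pair term plus discriminative scatter terms in the shared space."""
    Zx, Zy = forward(net_x, X), forward(net_y, Y)
    pair = np.mean(np.sum((Zx - Zy) ** 2, axis=1))  # row i of X and Y share a class
    Z = np.vstack([Zx, Zy])
    yy = np.concatenate([labels, labels])
    intra, inter = scatter_terms(Z, yy)
    return pair + alpha * intra - beta * inter


# Toy usage: 60 paired samples, image-like (128-d) and text-like (64-d) features.
labels = rng.integers(0, 3, size=60)
X = rng.standard_normal((60, 128))
Y = rng.standard_normal((60, 64))
net_x = init_net([128, 96, 32])   # both networks end in the same 32-d subspace
net_y = init_net([64, 96, 32])
print("DCML-style objective on toy data:", dcml_objective(net_x, net_y, X, Y, labels))
```

In practice the two networks would be trained jointly (e.g., by gradient descent on such an objective) so that samples from both modalities land in a common discriminative subspace; the loss weighting and optimization details above are placeholders rather than the published method.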