Information-Theoretic Semi-Supervised Metric Learning via Entropy Regularization
We propose a general information-theoretic approach to semi-supervised metric learning called SERAPH (SEmi-supervised metRic leArning Paradigm with Hypersparsity) that does not rely on the manifold assumption. Given the probability parameterized by a Mahalanobis distance, we maximize its entropy on labeled data and minimize its entropy on unlabeled data following entropy regularization. For metric learning, entropy regularization improves manifold regularization by considering the dissimilarity information of unlabeled data in the unsupervised part, and hence it allows the supervised and unsupervised parts to be integrated in a natural and meaningful way. Moreover, we regularize SERAPH by trace-norm regularization to encourage low-dimensional projections associated with the distance metric. The nonconvex optimization problem of SERAPH could be solved efficiently and stably by either a gradient projection algorithm or an EM-like iterative algorithm whose M-step is convex. Experiments demonstrate that SERAPH compares favorably with many well-known metric learning methods, and the learned Mahalanobis distance possesses high discriminability even under noisy environments.
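The abstract describes the objective only in words. The display below is a hedged reconstruction of the kind of objective it describes, not the paper's exact formulation: $M \succeq 0$ is the Mahalanobis matrix, $\mathcal{L}$ and $\mathcal{U}$ are the labeled and unlabeled pair sets, $H$ is the Shannon entropy of the pairwise label probability $p_M(y \mid x_i, x_j)$, and $\gamma, \lambda \ge 0$ are hypothetical trade-off weights not taken from the source.

```latex
% Hedged sketch of an entropy-regularized metric learning objective,
% following the abstract's description (not the paper's exact equations):
% maximize entropy on labeled pairs, minimize it on unlabeled pairs,
% and penalize the trace norm (= trace, for M >= 0) of the metric.
\[
\max_{M \succeq 0}\;
\sum_{(i,j)\in\mathcal{L}} H\!\big(p_M(y \mid x_i, x_j)\big)
\;-\; \gamma \sum_{(i,j)\in\mathcal{U}} H\!\big(p_M(y \mid x_i, x_j)\big)
\;-\; \lambda\,\operatorname{tr}(M),
\qquad
d_M^2(x_i, x_j) = (x_i - x_j)^\top M\,(x_i - x_j).
\]
```

A minimal numerical sketch of the positive-semidefinite projection step that a gradient projection solver for such a problem needs appears after the record fields at the end of this page.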
Saved in:
Published in: | Neural computation 2014-08, Vol.26 (8), p.1717-1762 |
---|---|
Main authors: | Niu, Gang; Dai, Bo; Yamada, Makoto; Sugiyama, Masashi |
Format: | Article |
Language: | eng |
Subjects: | Algorithms; Artificial Intelligence; Brain; Comparative analysis; Computation; Entropy; Entropy (Information theory); Information Theory; Learning; Letters; Manifolds; Optimization; Optimization algorithms; Projection; Regularization |
Online access: | Full text |
container_end_page | 1762 |
---|---|
container_issue | 8 |
container_start_page | 1717 |
container_title | Neural computation |
container_volume | 26 |
creator | Niu, Gang; Dai, Bo; Yamada, Makoto; Sugiyama, Masashi |
description | We propose a general information-theoretic approach to semi-supervised metric learning called SERAPH (SEmi-supervised metRic leArning Paradigm with Hypersparsity) that does not rely on the manifold assumption. Given the probability parameterized by a Mahalanobis distance, we maximize its entropy on labeled data and minimize its entropy on unlabeled data following entropy regularization. For metric learning, entropy regularization improves manifold regularization by considering the dissimilarity information of unlabeled data in the unsupervised part, and hence it allows the supervised and unsupervised parts to be integrated in a natural and meaningful way. Moreover, we regularize SERAPH by trace-norm regularization to encourage low-dimensional projections associated with the distance metric. The nonconvex optimization problem of SERAPH could be solved efficiently and stably by either a gradient projection algorithm or an EM-like iterative algorithm whose M-step is convex. Experiments demonstrate that SERAPH compares favorably with many well-known metric learning methods, and the learned Mahalanobis distance possesses high discriminability even under noisy environments. |
doi_str_mv | 10.1162/NECO_a_00614 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0899-7667; EISSN: 1530-888X; DOI: 10.1162/NECO_a_00614; PMID: 24877733 |
ispartof | Neural computation, 2014-08, Vol.26 (8), p.1717-1762 |
issn | 0899-7667; 1530-888X |
language | eng |
recordid | cdi_pubmed_primary_24877733 |
source | MEDLINE; MIT Press Journals |
subjects | Algorithms; Artificial Intelligence; Brain; Comparative analysis; Computation; Entropy; Entropy (Information theory); Information Theory; Learning; Letters; Manifolds; Optimization; Optimization algorithms; Projection; Regularization |
title | Information-Theoretic Semi-Supervised Metric Learning via Entropy Regularization |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-07T23%3A56%3A58IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_pubme&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Information-Theoretic%20Semi-Supervised%20Metric%20Learning%20via%20Entropy%20Regularization&rft.jtitle=Neural%20computation&rft.au=Niu,%20Gang&rft.date=2014-08-01&rft.volume=26&rft.issue=8&rft.spage=1717&rft.epage=1762&rft.pages=1717-1762&rft.issn=0899-7667&rft.eissn=1530-888X&rft.coden=NEUCEB&rft_id=info:doi/10.1162/NECO_a_00614&rft_dat=%3Cproquest_pubme%3E1620035278%3C/proquest_pubme%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=1545317494&rft_id=info:pmid/24877733&rfr_iscdi=true |
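The abstract states that SERAPH's nonconvex problem can be solved by a gradient projection algorithm. The paper's own update rules are not reproduced in this record, so the sketch below only illustrates the generic projection step such a solver needs: after each ascent step, the candidate metric is projected back onto the positive-semidefinite cone by clipping negative eigenvalues. The step size and the gradient are placeholders, standing in for whatever the actual method computes.

```python
import numpy as np

def project_psd(M: np.ndarray) -> np.ndarray:
    """Project a square matrix onto the PSD cone: symmetrize, then clip
    negative eigenvalues to zero and reassemble."""
    S = (M + M.T) / 2.0                       # enforce symmetry first
    w, V = np.linalg.eigh(S)                  # eigendecomposition of symmetric S
    return (V * np.clip(w, 0.0, None)) @ V.T  # V diag(max(w, 0)) V^T

def gradient_projection_step(M: np.ndarray, grad: np.ndarray,
                             step_size: float = 0.1) -> np.ndarray:
    """One generic gradient-projection update under the constraint M >= 0:
    ascend along `grad` (the objective's gradient, not derived here),
    then re-project onto the feasible set."""
    return project_psd(M + step_size * grad)

# Toy usage: start from the identity metric and take one step along a
# made-up gradient; the result is guaranteed to stay PSD.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 4
    M = np.eye(d)
    fake_grad = rng.standard_normal((d, d))   # placeholder for a real gradient
    M = gradient_projection_step(M, fake_grad)
    assert np.all(np.linalg.eigvalsh((M + M.T) / 2) >= -1e-10)
```

The eigenvalue-clipping projection is the standard Euclidean projection onto the PSD cone, so any ascent direction can be combined with it; the abstract's alternative EM-like algorithm with a convex M-step is not sketched here.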