Multi-Agent Semi-Siamese Training for Long-tail and Shallow Face Learning
With the recent development of deep convolutional neural networks and large-scale datasets, deep face recognition has made remarkable progress and been widely used in various applications. However, unlike the existing public face datasets, in many real-world scenarios of face recognition, the depth...
Saved in:
Published in: | ACM transactions on multimedia computing communications and applications 2023-11, Vol.19 (6), p.1-20 |
---|---|
Main authors: | Tai, Yichun; Shi, Hailin; Zeng, Dan; Du, Hang; Hu, Yibo; Zhang, Zicheng; Zhang, Zhijiang; Mei, Tao |
Format: | Article |
Language: | eng |
Subjects: | Biometrics; Computing methodologies |
Online access: | Full text |
container_end_page | 20 |
---|---|
container_issue | 6 |
container_start_page | 1 |
container_title | ACM transactions on multimedia computing communications and applications |
container_volume | 19 |
creator | Tai, Yichun; Shi, Hailin; Zeng, Dan; Du, Hang; Hu, Yibo; Zhang, Zicheng; Zhang, Zhijiang; Mei, Tao |
description | With the recent development of deep convolutional neural networks and large-scale datasets, deep face recognition has made remarkable progress and has been widely used in various applications. However, unlike the existing public face datasets, in many real-world scenarios of face recognition the depth of the training dataset is shallow, meaning only two face images are available for each ID. With the non-uniform increase of samples, this issue becomes a more general case, a.k.a. long-tail face learning, which suffers from data imbalance and a dearth of intra-class diversity simultaneously. These adverse conditions harm training and degrade model performance. Based on Semi-Siamese Training (SST), we introduce an advanced solution, named Multi-Agent Semi-Siamese Training (MASST), to address these problems. MASST includes a probe network and multiple gallery agents: the former encodes the probe features, while the latter constitutes a stack of networks that encode the prototypes (gallery features). For each training iteration, the gallery network, which is sequentially rotated from the stack, and the probe network form a pair of semi-siamese networks. We give theoretical and empirical analysis showing that, given long-tail (or shallow) data and a training loss, MASST smooths the loss landscape and satisfies Lipschitz continuity with the help of multiple agents and the updating gallery queue. The proposed method is free of extra dependencies and thus can be easily integrated with existing loss functions and network architectures. It is worth noting that, although multiple gallery agents are employed for training, only the probe network is needed for inference, without increasing the inference cost. Extensive experiments and comparisons demonstrate the advantages of MASST for long-tail and shallow face learning. |
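The training scheme sketched in the abstract — a probe network paired each iteration with one gallery agent rotated sequentially from a stack, with prototypes pushed into an updating gallery queue — can be illustrated with a minimal toy loop. This is a sketch based only on the abstract's description; the `Encoder` class, `masst_step` function, and all parameters are hypothetical stand-ins, not the paper's actual implementation.

```python
# Toy sketch of the MASST loop from the abstract: one probe network,
# a stack of gallery agents rotated per iteration, and a bounded
# prototype (gallery-feature) queue. All names are hypothetical.
import collections
import copy
import random

class Encoder:
    """Stand-in for a face-embedding network (e.g., a CNN backbone)."""
    def __init__(self, seed):
        rng = random.Random(seed)
        self.weights = [rng.random() for _ in range(4)]

    def encode(self, image):
        # Toy "embedding": weighted sum of pixel values.
        return sum(w * x for w, x in zip(self.weights, image))

def masst_step(probe_net, gallery_stack, step, batch, proto_queue):
    """One iteration: rotate a gallery agent from the stack, pair it
    with the probe network (semi-siamese), and refresh the queue."""
    gallery_net = gallery_stack[step % len(gallery_stack)]  # sequential rotation
    probe_feats = [probe_net.encode(img) for img, _ in batch]
    prototypes = [gallery_net.encode(img) for img, _ in batch]
    proto_queue.extend(prototypes)  # updating gallery queue (FIFO, bounded)
    return probe_feats, prototypes

# Usage: 3 gallery agents initialized from the probe, a queue of size 8,
# and toy 2-pixel "images" for two identities.
probe = Encoder(seed=0)
agents = [copy.deepcopy(probe) for _ in range(3)]
queue = collections.deque(maxlen=8)
batch = [([0.1, 0.9], "id_a"), ([0.4, 0.6], "id_b")]
for t in range(5):
    feats, protos = masst_step(probe, agents, t, batch, queue)
print(len(queue))  # bounded at 8 even though 10 prototypes were pushed
```

Only the probe network would be kept for inference, matching the abstract's note that the gallery agents add no inference cost.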
doi_str_mv | 10.1145/3594669 |
format | Article |
publisher | New York, NY: ACM |
rights | Copyright held by the owner/author(s). Publication rights licensed to ACM. |
peer_reviewed | true |
open_access | free_for_read |
orcidid | 0000-0003-3656-2593; 0000-0003-1066-4399; 0000-0002-7699-0747; 0000-0002-3603-2683; 0000-0002-9553-7358; 0000-0002-5990-7307; 0009-0006-7762-6986; 0000-0003-1300-1769 |
fulltext | fulltext |
identifier | ISSN: 1551-6857 |
ispartof | ACM transactions on multimedia computing communications and applications, 2023-11, Vol.19 (6), p.1-20 |
issn | 1551-6857; 1551-6865 |
language | eng |
recordid | cdi_crossref_primary_10_1145_3594669 |
source | ACM Digital Library |
subjects | Biometrics; Computing methodologies |
title | Multi-Agent Semi-Siamese Training for Long-tail and Shallow Face Learning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-05T04%3A59%3A39IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-acm_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Multi-Agent%20Semi-Siamese%20Training%20for%20Long-tail%20and%20Shallow%20Face%20Learning&rft.jtitle=ACM%20transactions%20on%20multimedia%20computing%20communications%20and%20applications&rft.au=Tai,%20Yichun&rft.date=2023-11-30&rft.volume=19&rft.issue=6&rft.spage=1&rft.epage=20&rft.pages=1-20&rft.issn=1551-6857&rft.eissn=1551-6865&rft_id=info:doi/10.1145/3594669&rft_dat=%3Cacm_cross%3E3594669%3C/acm_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |