PLIP: Language-Image Pre-training for Person Representation Learning
Main authors: | Zuo, Jialong; Hong, Jiahao; Zhang, Feng; Yu, Changqian; Zhou, Hanyu; Gao, Changxin; Sang, Nong; Wang, Jingdong |
---|---|
Format: | Article |
Language: | English |
Subjects: | Computer Science - Computer Vision and Pattern Recognition |
Online access: | Order full text |
creator | Zuo, Jialong; Hong, Jiahao; Zhang, Feng; Yu, Changqian; Zhou, Hanyu; Gao, Changxin; Sang, Nong; Wang, Jingdong |
description | Language-image pre-training is an effective technique for learning powerful representations in general domains. However, when directly applied to person representation learning, these general pre-training methods suffer from unsatisfactory performance, because they neglect critical person-related characteristics, i.e., fine-grained attributes and identities. To address this issue, we propose a novel language-image pre-training framework for person representation learning, termed PLIP. Specifically, we carefully design three pretext tasks: 1) Text-guided Image Colorization, which aims to establish the correspondence between person-related image regions and fine-grained color-part textual phrases; 2) Image-guided Attributes Prediction, which aims to mine fine-grained attribute information of the person body in the image; and 3) Identity-based Vision-Language Contrast, which aims to correlate the cross-modal representations at the identity level rather than the instance level. Moreover, to implement our pre-training framework, we construct a large-scale person dataset with image-text pairs, named SYNTH-PEDES, by automatically generating textual annotations. We pre-train PLIP on SYNTH-PEDES and evaluate our models on a broad range of downstream person-centric tasks. PLIP not only significantly improves existing methods on all these tasks, but also shows great ability in zero-shot and domain generalization settings. The code, dataset, and weights will be released at https://github.com/Zplusdragon/PLIP |
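To make the third pretext task concrete, here is a minimal, hypothetical PyTorch sketch of an identity-level contrastive loss: it treats every image-text pair that shares a person identity as a positive, rather than only the matched instance as in standard CLIP-style contrast. The function name, signature, and temperature value are illustrative assumptions, not PLIP's actual implementation.

```python
import torch
import torch.nn.functional as F

def identity_contrastive_loss(img_emb, txt_emb, identity_ids, temperature=0.07):
    """Hypothetical sketch of identity-level vision-language contrast.

    img_emb, txt_emb: (B, D) image and text embeddings.
    identity_ids:     (B,) person identity label of each pair.
    """
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature  # (B, B) cross-modal similarities

    # Identity-level positives: any image-text pair with the same person
    # identity counts as a positive, not just the diagonal (instance) match.
    pos = (identity_ids.unsqueeze(0) == identity_ids.unsqueeze(1)).float()
    targets = pos / pos.sum(dim=1, keepdim=True)  # soft multi-positive targets

    loss_i2t = -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    loss_t2i = -(targets * F.log_softmax(logits.t(), dim=1)).sum(dim=1).mean()
    return 0.5 * (loss_i2t + loss_t2i)
```

With instance-level contrast, two photos of the same person with different captions would be pushed apart; the identity mask above keeps them together, which is the distinction the abstract draws between identity-level and instance-level correlation.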
doi_str_mv | 10.48550/arxiv.2305.08386 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2305.08386 |
language | eng |
recordid | cdi_arxiv_primary_2305_08386 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition |
title | PLIP: Language-Image Pre-training for Person Representation Learning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-11T22%3A18%3A48IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=PLIP:%20Language-Image%20Pre-training%20for%20Person%20Representation%20Learning&rft.au=Zuo,%20Jialong&rft.date=2023-05-15&rft_id=info:doi/10.48550/arxiv.2305.08386&rft_dat=%3Carxiv_GOX%3E2305_08386%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |