Extreme Multi-Label Skill Extraction Training using Large Language Models

Online job ads serve as a valuable source of information for skill requirements, playing a crucial role in labor market analysis and e-recruitment processes. Since such ads are typically formatted in free text, natural language processing (NLP) technologies are required to automatically process them. We specifically focus on the task of detecting skills (mentioned literally, or implicitly described) and linking them to a large skill ontology, making it a challenging case of extreme multi-label classification (XMLC). Given that no sizable labeled (training) dataset is available for this specific XMLC task, we propose techniques to leverage general Large Language Models (LLMs). We describe a cost-effective approach to generate an accurate, fully synthetic labeled dataset for skill extraction, and present a contrastive learning strategy that proves effective in the task. Our results across three skill extraction benchmarks show a consistent increase of 15 to 25 percentage points in R-Precision@5 compared to previously published results that relied solely on distant supervision through literal matches.
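The evaluation metric named in the abstract, R-Precision@5, can be sketched as follows under one common definition (precision of the top-min(|gold|, k) ranked predictions against the gold label set); this is an illustrative sketch, not the authors' evaluation code:

```python
def r_precision_at_k(ranked_predictions, gold_labels, k=5):
    """R-Precision@k: precision of the top-min(|gold|, k) ranked
    predictions, measured against the set of gold labels."""
    r = min(len(gold_labels), k)
    if r == 0:
        return 0.0
    # Count how many of the top-r ranked predictions are correct.
    hits = sum(1 for label in ranked_predictions[:r] if label in gold_labels)
    return hits / r

# Hypothetical example: two gold skills, ranked model predictions.
print(r_precision_at_k(["python", "sql", "excel"], {"python", "excel"}, k=5))  # → 0.5
```

Because the cutoff adapts to the number of gold labels per ad, the metric stays meaningful even when an ad mentions fewer than five skills.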

Detailed Description

Bibliographic Details
Main Authors: Decorte, Jens-Joris; Verlinden, Severine; Van Hautte, Jeroen; Deleu, Johannes; Develder, Chris; Demeester, Thomas
Format: Article
Language: English
Subjects: Computer Science - Computation and Language
Online Access: Order full text
description Online job ads serve as a valuable source of information for skill requirements, playing a crucial role in labor market analysis and e-recruitment processes. Since such ads are typically formatted in free text, natural language processing (NLP) technologies are required to automatically process them. We specifically focus on the task of detecting skills (mentioned literally, or implicitly described) and linking them to a large skill ontology, making it a challenging case of extreme multi-label classification (XMLC). Given that no sizable labeled (training) dataset is available for this specific XMLC task, we propose techniques to leverage general Large Language Models (LLMs). We describe a cost-effective approach to generate an accurate, fully synthetic labeled dataset for skill extraction, and present a contrastive learning strategy that proves effective in the task. Our results across three skill extraction benchmarks show a consistent increase of 15 to 25 percentage points in R-Precision@5 compared to previously published results that relied solely on distant supervision through literal matches.
doi_str_mv 10.48550/arxiv.2307.10778
format Article
identifier DOI: 10.48550/arxiv.2307.10778
language eng
recordid cdi_arxiv_primary_2307_10778
source arXiv.org
subjects Computer Science - Computation and Language
title Extreme Multi-Label Skill Extraction Training using Large Language Models
url https://arxiv.org/abs/2307.10778