KeyGen2Vec: Learning Document Embedding via Multi-label Keyword Generation in Question-Answering

Representing documents in a high-dimensional embedding space while preserving the structural similarity between document sources has been an ultimate goal for many works on text representation learning. Current embedding models, however, mainly rely on the availability of label supervision to increase the expressiveness of the resulting embeddings. In contrast, unsupervised embeddings are cheap, but they often cannot capture the implicit structure of the target corpus, particularly for samples that come from a distribution different from that of the pretraining source. Our study aims to loosen the dependency on label supervision by learning document embeddings via a Sequence-to-Sequence (Seq2Seq) text generator. Specifically, we reformulate the keyphrase generation task as multi-label keyword generation in community-based Question Answering (cQA). Our empirical results show that KeyGen2Vec is in general superior to a multi-label keyword classifier by up to 14.7% on the Purity, Normalized Mutual Information (NMI), and F1-Score metrics. Interestingly, although the absolute advantage of learning embeddings through label supervision is in general highly positive across evaluation datasets, KeyGen2Vec is competitive with a classifier that exploits topic label supervision on Yahoo! cQA with a larger number of latent topic labels.
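The abstract evaluates embeddings with clustering-based metrics (Purity, NMI, F1-Score). As an illustration of how the first two are typically computed, here is a minimal sketch using scikit-learn; the `purity_score` helper is our own illustrative addition, since scikit-learn does not ship a purity metric, and the toy labels are invented for the example:

```python
from sklearn.metrics import normalized_mutual_info_score
from sklearn.metrics.cluster import contingency_matrix

def purity_score(labels_true, labels_pred):
    """Purity: map each predicted cluster to its majority true class,
    then return the fraction of samples that end up correctly assigned."""
    cm = contingency_matrix(labels_true, labels_pred)
    return cm.max(axis=0).sum() / cm.sum()

# Toy example: 6 documents, 2 true topics, 2 predicted clusters.
labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 1, 1, 1]

print(round(purity_score(labels_true, labels_pred), 3))  # 0.833 (5 of 6 correct)
print(normalized_mutual_info_score(labels_true, labels_pred))
```

Both metrics are external cluster-quality measures: they compare predicted clusters against gold topic labels, which is how the paper can score embeddings without training a classifier on them.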

Full description

Bibliographic Details
Main Authors: Ni'mah, Iftitahu, Khoshrou, Samaneh, Menkovski, Vlado, Pechenizkiy, Mykola
Format: Article
Language: English
Subjects:
Online Access: Order full text
DOI: 10.48550/arxiv.2310.19650
Source: arXiv.org
Subjects: Computer Science - Computation and Language
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-12T19%3A16%3A58IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=KeyGen2Vec:%20Learning%20Document%20Embedding%20via%20Multi-label%20Keyword%20Generation%20in%20Question-Answering&rft.au=Ni'mah,%20Iftitahu&rft.date=2023-10-30&rft_id=info:doi/10.48550/arxiv.2310.19650&rft_dat=%3Carxiv_GOX%3E2310_19650%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true