Keyphrase Generation Based on Self-Attention Mechanism
Keyphrases provide concise, valuable summaries of a text. They help us not only understand text semantics but also organize and retrieve text content effectively. The task of generating them automatically has received considerable attention in recent decades. Previous studies offer several workable approaches to obtaining keyphrases. One method divides the content to be summarized into multiple blocks of text, then ranks them and selects the most important content. Its disadvantage is that it cannot identify keyphrases that do not appear in the text, let alone capture the real semantic meaning hidden in the text. Another approach uses recurrent neural networks to generate keyphrases from the semantic aspects of the text, but their inherently sequential nature precludes parallelization within training examples, and distance places limitations on context dependencies. Previous work has demonstrated the benefits of the self-attention mechanism, which can learn global text-dependency features and can be parallelized. Inspired by these observations, we propose a keyphrase generation model based entirely on the self-attention mechanism. It is an encoder-decoder model that effectively makes up for the above disadvantages. In addition, we consider the semantic similarity between keyphrases and add a semantic-similarity processing module to the model. Empirical analysis on five datasets shows that the proposed model achieves competitive performance compared to baseline methods.
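The abstract's core argument is that self-attention lets every token attend to every other token in one parallel step, avoiding the sequential bottleneck of recurrent networks. A minimal sketch of scaled dot-product self-attention with NumPy is shown below; this is illustrative only, not the authors' implementation, and the dimensions and weight names are assumptions.

```python
# Minimal scaled dot-product self-attention: each position attends to all
# positions at once, so long-range dependencies need no recurrence.
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token representations.
    Returns (seq_len, d_k): a weighted mix of value vectors, where the
    weights compare every query against every key in parallel."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (seq_len, seq_len)
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 8, 4
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 4)
```

Note that the score matrix is computed for all position pairs in a single matrix product, which is what makes the mechanism parallelizable across the sequence, unlike an RNN's step-by-step state updates.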
Saved in:
Published in: | Computers, materials & continua, 2019-01, Vol.61 (2), p.569-581 |
---|---|
Main authors: | Yang, Kehua; Wang, Yaodong; Zhang, Wei; Yao, Jiqing; Le, Yuquan |
Format: | Article |
Language: | English |
Subjects: | Coders; Dependence; Empirical analysis; Encoders-Decoders; Recurrent neural networks; Semantics; Similarity |
Online access: | Full text |
creator | Yang, Kehua; Wang, Yaodong; Zhang, Wei; Yao, Jiqing; Le, Yuquan |
description | Keyphrases provide concise, valuable summaries of a text. They help us not only understand text semantics but also organize and retrieve text content effectively. The task of generating them automatically has received considerable attention in recent decades. Previous studies offer several workable approaches to obtaining keyphrases. One method divides the content to be summarized into multiple blocks of text, then ranks them and selects the most important content. Its disadvantage is that it cannot identify keyphrases that do not appear in the text, let alone capture the real semantic meaning hidden in the text. Another approach uses recurrent neural networks to generate keyphrases from the semantic aspects of the text, but their inherently sequential nature precludes parallelization within training examples, and distance places limitations on context dependencies. Previous work has demonstrated the benefits of the self-attention mechanism, which can learn global text-dependency features and can be parallelized. Inspired by these observations, we propose a keyphrase generation model based entirely on the self-attention mechanism. It is an encoder-decoder model that effectively makes up for the above disadvantages. In addition, we consider the semantic similarity between keyphrases and add a semantic-similarity processing module to the model. Empirical analysis on five datasets shows that the proposed model achieves competitive performance compared to baseline methods. |
doi_str_mv | 10.32604/cmc.2019.05952 |
format | Article |
publisher | Tech Science Press, Henderson |
rights | Copyright Tech Science Press 2019; licensed under http://creativecommons.org/licenses/by/4.0/ |
identifier | ISSN: 1546-2218; EISSN: 1546-2226 |
language | eng |
source | EZB-FREE-00999 freely available EZB journals |
subjects | Coders; Dependence; Empirical analysis; Encoders-Decoders; Recurrent neural networks; Semantics; Similarity |
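The abstract also mentions a semantic-similarity processing module over the generated keyphrases. The record does not say how that module works; one plausible use, sketched here under that assumption, is a greedy cosine-similarity filter that drops candidate keyphrases whose embeddings are too close to ones already kept. The phrases, embeddings, and threshold below are hypothetical stand-ins.

```python
# Greedy near-duplicate filter over keyphrase embeddings: keep a phrase
# only if it is sufficiently dissimilar from every phrase kept so far.
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def filter_similar(phrases, embeddings, threshold=0.8):
    """Keep phrases whose cosine similarity to every already-kept
    phrase is below `threshold`; earlier phrases take priority."""
    kept, kept_vecs = [], []
    for phrase, vec in zip(phrases, embeddings):
        if all(cosine(vec, kv) < threshold for kv in kept_vecs):
            kept.append(phrase)
            kept_vecs.append(vec)
    return kept

phrases = ["self-attention", "attention mechanism", "keyphrase generation"]
embs = [np.array([1.0, 0.0]), np.array([0.95, 0.3]), np.array([0.0, 1.0])]
print(filter_similar(phrases, embs))
# → ['self-attention', 'keyphrase generation']
```

Here "attention mechanism" is dropped because its stand-in embedding has cosine similarity above 0.8 with "self-attention"; the orthogonal "keyphrase generation" survives.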