Learning Context Using Segment-Level LSTM for Neural Sequence Labeling
This article introduces an approach that learns segment-level context for sequence labeling in natural language processing (NLP). Previous approaches limit their basic unit to the word for feature extraction because sequence labeling is a token-level task in which labels are annotated word by word. However, the text segment is the ultimate unit for labeling, and segment information is easily obtained from labels annotated in an IOB/IOBES format. Most neural sequence labeling models expand their learning capacity by employing additional layers, such as a character-level layer, or by jointly training NLP tasks with common knowledge. The architecture of our model is based on the charLSTM-BiLSTM-CRF model, which we extend with an additional segment-level layer called segLSTM. We therefore propose a sequence labeling algorithm called charLSTM-BiLSTM-CRF-segLSTM(sLM), which employs an additional segment-level long short-term memory (LSTM) that trains features by learning the adjacent context within a segment. We demonstrate the performance of our model on four sequence labeling datasets, namely, Penn Treebank, CoNLL 2000, CoNLL 2003, and OntoNotes 5.0. Experimental results show that our model performs better than state-of-the-art variants of BiLSTM-CRF. In particular, the proposed model improves performance on tasks that require finding the appropriate labels for multi-token segments.
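The abstract notes that segment spans can be read directly off IOB/IOBES-annotated labels. As a rough illustration of that idea only (this code is not from the paper, and the function name is hypothetical), the following Python sketch recovers (start, end, type) spans from a tag sequence:

```python
# Minimal sketch: recover segment spans from IOB/IOBES tags, which is what
# makes a segment-level layer trainable from ordinary sequence-labeling data.
from typing import List, Tuple

def extract_segments(tags: List[str]) -> List[Tuple[int, int, str]]:
    """Return (start, end, type) spans from IOB/IOBES tags; end is exclusive."""
    segments = []
    start, seg_type = None, None
    for i, tag in enumerate(tags):
        prefix, _, label = tag.partition("-")
        if prefix in ("B", "S") or (prefix == "I" and start is None):
            if start is not None:                       # close a still-open segment
                segments.append((start, i, seg_type))
            start, seg_type = i, label
        elif prefix == "O" and start is not None:       # an O tag ends the segment
            segments.append((start, i, seg_type))
            start, seg_type = None, None
        if prefix in ("E", "S") and start is not None:  # IOBES explicit segment end
            segments.append((start, i + 1, seg_type))
            start, seg_type = None, None
    if start is not None:                               # flush a segment open at EOS
        segments.append((start, len(tags), seg_type))
    return segments

# e.g. a CoNLL 2003-style tagging of "John Smith visited New York":
print(extract_segments(["B-PER", "I-PER", "O", "B-LOC", "I-LOC"]))
# -> [(0, 2, 'PER'), (3, 5, 'LOC')]
```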
Saved in:
Published in: | IEEE/ACM transactions on audio, speech, and language processing, 2020, Vol.28, p.105-115 |
---|---|
Main authors: | Shin, Youhyun; Lee, Sang-goo |
Format: | Article |
Language: | English |
Subjects: | Algorithms; BiLSTM-CRF; Crystals; Feature extraction; Hidden Markov models; joint learning; Labeling; Labelling; Labels; language modeling; Learning; Natural language processing; Sequence labeling; Tagging; Task analysis; Words (language) |
Online access: | Order full text |
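The abstract above describes extending a charLSTM-BiLSTM-CRF backbone with a segment-level LSTM (segLSTM) that learns the adjacent context inside each segment. The following is a minimal, hypothetical PyTorch sketch of that idea, not the paper's implementation: the dimensions, the plain linear output in place of the paper's CRF layer, and the omission of the character-level LSTM are all simplifying assumptions made here. At training time the segment spans could come from gold IOB/IOBES labels, e.g. via `extract_segments` above.

```python
# Sketch (assumed, not from the paper): a word-level BiLSTM whose features are
# augmented by a segment-level LSTM run over the words inside each segment.
import torch
import torch.nn as nn

class SegAugmentedTagger(nn.Module):
    def __init__(self, vocab_size=10_000, emb_dim=100, hidden=128, n_tags=9):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.word_lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        # segment-level LSTM reads word features of one segment at a time
        self.seg_lstm = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.out = nn.Linear(2 * hidden + hidden, n_tags)  # word + segment features

    def forward(self, token_ids, segments):
        # token_ids: (1, seq_len); segments: list of (start, end) spans
        word_feats, _ = self.word_lstm(self.emb(token_ids))  # (1, seq_len, 2*hidden)
        seg_feats = torch.zeros_like(word_feats[..., : self.seg_lstm.hidden_size])
        for start, end in segments:
            span = word_feats[:, start:end, :]
            seg_out, _ = self.seg_lstm(span)                 # context within the segment
            seg_feats[:, start:end, :] = seg_out
        # emission scores per tag; the paper decodes these with a CRF instead
        return self.out(torch.cat([word_feats, seg_feats], dim=-1))

tagger = SegAugmentedTagger()
ids = torch.randint(0, 10_000, (1, 5))
scores = tagger(ids, segments=[(0, 2), (2, 3), (3, 5)])
print(scores.shape)  # torch.Size([1, 5, 9])
```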
container_end_page | 115 |
---|---|
container_issue | |
container_start_page | 105 |
container_title | IEEE/ACM transactions on audio, speech, and language processing |
container_volume | 28 |
creator | Shin, Youhyun; Lee, Sang-goo |
description | This article introduces an approach that learns segment-level context for sequence labeling in natural language processing (NLP). Previous approaches limit their basic unit to the word for feature extraction because sequence labeling is a token-level task in which labels are annotated word by word. However, the text segment is the ultimate unit for labeling, and segment information is easily obtained from labels annotated in an IOB/IOBES format. Most neural sequence labeling models expand their learning capacity by employing additional layers, such as a character-level layer, or by jointly training NLP tasks with common knowledge. The architecture of our model is based on the charLSTM-BiLSTM-CRF model, which we extend with an additional segment-level layer called segLSTM. We therefore propose a sequence labeling algorithm called charLSTM-BiLSTM-CRF-segLSTM(sLM), which employs an additional segment-level long short-term memory (LSTM) that trains features by learning the adjacent context within a segment. We demonstrate the performance of our model on four sequence labeling datasets, namely, Penn Treebank, CoNLL 2000, CoNLL 2003, and OntoNotes 5.0. Experimental results show that our model performs better than state-of-the-art variants of BiLSTM-CRF. In particular, the proposed model improves performance on tasks that require finding the appropriate labels for multi-token segments. |
doi_str_mv | 10.1109/TASLP.2019.2948773 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2329-9290 |
ispartof | IEEE/ACM transactions on audio, speech, and language processing, 2020, Vol.28, p.105-115 |
issn | 2329-9290; 2329-9304 |
language | eng |
recordid | cdi_proquest_journals_2330021453 |
source | IEEE Electronic Library (IEL) |
subjects | Algorithms; BiLSTM-CRF; Crystals; Feature extraction; Hidden Markov models; joint learning; Labeling; Labelling; Labels; language modeling; Learning; Natural language processing; Sequence labeling; Tagging; Task analysis; Words (language) |
title | Learning Context Using Segment-Level LSTM for Neural Sequence Labeling |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-02T14%3A23%3A39IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Learning%20Context%20Using%20Segment-Level%20LSTM%20for%20Neural%20Sequence%20Labeling&rft.jtitle=IEEE/ACM%20transactions%20on%20audio,%20speech,%20and%20language%20processing&rft.au=Shin,%20Youhyun&rft.date=2020&rft.volume=28&rft.spage=105&rft.epage=115&rft.pages=105-115&rft.issn=2329-9290&rft.eissn=2329-9304&rft.coden=ITASD8&rft_id=info:doi/10.1109/TASLP.2019.2948773&rft_dat=%3Cproquest_RIE%3E2330021453%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2330021453&rft_id=info:pmid/&rft_ieee_id=8878020&rfr_iscdi=true |