Multi-task learning using a hybrid representation for text classification
Text classification is an important task in machine learning. Deep neural networks in particular have shown a strong ability to improve performance in many fields, for example speech recognition, object recognition, and natural language processing. However, in most previous work the extracted feature models do not serve related text tasks well. To address this issue, we introduce a novel multi-task learning approach, a hybrid representation-learning network, for text classification tasks. Our method consists of two network components: a bidirectional gated recurrent unit with an attention module and a convolutional neural network module. In particular, the attention module lets each task learn a private feature representation that captures local dependencies in its training texts, while the convolutional neural network module learns a global representation shared across tasks. Experiments on 16 subsets of Amazon review data show that our method outperforms several baselines and demonstrate the effectiveness of jointly learning multiple related tasks.
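The abstract describes a shared-private architecture: each task gets a private bidirectional GRU with attention for locally dependent features, and all tasks share a CNN for global features. Below is a minimal PyTorch sketch of that scheme; the module sizes, the additive attention form, the max-pooling, and all names (`SharedCNN`, `PrivateBiGRU`, `MultiTaskTextClassifier`) are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedCNN(nn.Module):
    """Shared global extractor: 1-D convolutions over word embeddings, max-pooled over time."""
    def __init__(self, emb_dim, num_filters=100, kernel_sizes=(3, 4, 5)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, num_filters, k, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, emb):                        # emb: (batch, seq_len, emb_dim)
        x = emb.transpose(1, 2)                    # Conv1d expects (batch, channels, seq_len)
        feats = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return torch.cat(feats, dim=1)             # (batch, num_filters * len(kernel_sizes))

class PrivateBiGRU(nn.Module):
    """Task-private extractor: bidirectional GRU with a simple additive attention pooling."""
    def __init__(self, emb_dim, hidden=128):
        super().__init__()
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)

    def forward(self, emb):                        # emb: (batch, seq_len, emb_dim)
        h, _ = self.gru(emb)                       # (batch, seq_len, 2 * hidden)
        weights = torch.softmax(self.attn(h), dim=1)   # attention weights over time steps
        return (weights * h).sum(dim=1)            # (batch, 2 * hidden)

class MultiTaskTextClassifier(nn.Module):
    """One shared CNN plus one private BiGRU-attention branch and classifier head per task."""
    def __init__(self, vocab_size, num_tasks, emb_dim=300, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.shared = SharedCNN(emb_dim)
        self.private = nn.ModuleList(PrivateBiGRU(emb_dim) for _ in range(num_tasks))
        feat_dim = 3 * 100 + 2 * 128               # shared features + private features
        self.heads = nn.ModuleList(nn.Linear(feat_dim, num_classes) for _ in range(num_tasks))

    def forward(self, tokens, task_id):            # tokens: (batch, seq_len) int token ids
        emb = self.embed(tokens)
        feats = torch.cat([self.shared(emb), self.private[task_id](emb)], dim=1)
        return self.heads[task_id](feats)          # (batch, num_classes) logits

# Example: a batch of 8 reviews, 120 tokens each, routed to task 3 of 16.
model = MultiTaskTextClassifier(vocab_size=30000, num_tasks=16)
logits = model(torch.randint(0, 30000, (8, 120)), task_id=3)
```

In this sketch, joint training would alternate mini-batches across the 16 review tasks, routing each batch through the shared CNN and that task's private branch and summing per-task cross-entropy losses; the paper's actual training procedure may differ.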
Saved in:
Published in: | Neural computing & applications 2020-06, Vol.32 (11), p.6467-6480 |
---|---|
Main authors: | Lu, Guangquan; Gan, Jiangzhang; Yin, Jian; Luo, Zhiping; Li, Bo; Zhao, Xishun |
Format: | Article |
Language: | eng |
Subjects: | Artificial Intelligence; Artificial neural networks; Classification; Machine learning; Natural language processing; Neural networks; Speech recognition |
Online access: | Full text |
container_end_page | 6480 |
---|---|
container_issue | 11 |
container_start_page | 6467 |
container_title | Neural computing & applications |
container_volume | 32 |
creator | Lu, Guangquan Gan, Jiangzhang Yin, Jian Luo, Zhiping Li, Bo Zhao, Xishun |
description | Text classification is an important task in machine learning. Deep neural networks in particular have shown a strong ability to improve performance in many fields, for example speech recognition, object recognition, and natural language processing. However, in most previous work the extracted feature models do not serve related text tasks well. To address this issue, we introduce a novel multi-task learning approach, a hybrid representation-learning network, for text classification tasks. Our method consists of two network components: a bidirectional gated recurrent unit with an attention module and a convolutional neural network module. In particular, the attention module lets each task learn a private feature representation that captures local dependencies in its training texts, while the convolutional neural network module learns a global representation shared across tasks. Experiments on 16 subsets of Amazon review data show that our method outperforms several baselines and demonstrate the effectiveness of jointly learning multiple related tasks. |
doi_str_mv | 10.1007/s00521-018-3934-y |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0941-0643 |
ispartof | Neural computing & applications, 2020-06, Vol.32 (11), p.6467-6480 |
issn | 0941-0643; 1433-3058 |
language | eng |
recordid | cdi_proquest_journals_2407709563 |
source | SpringerNature Journals |
subjects | Artificial Intelligence; Artificial neural networks; Classification; Computational Biology/Bioinformatics; Computational Science and Engineering; Computer Science; Data Mining and Knowledge Discovery; Feature extraction; Image Processing and Computer Vision; Machine learning; Modules; Multi-Source Data Understanding (MSDU); Natural language processing; Neural networks; Object recognition; Performance enhancement; Probability and Statistics in Computer Science; Representations; Speech recognition |
title | Multi-task learning using a hybrid representation for text classification |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-23T19%3A50%3A09IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Multi-task%20learning%20using%20a%20hybrid%20representation%20for%20text%20classification&rft.jtitle=Neural%20computing%20&%20applications&rft.au=Lu,%20Guangquan&rft.date=2020-06-01&rft.volume=32&rft.issue=11&rft.spage=6467&rft.epage=6480&rft.pages=6467-6480&rft.issn=0941-0643&rft.eissn=1433-3058&rft_id=info:doi/10.1007/s00521-018-3934-y&rft_dat=%3Cproquest_cross%3E2407709563%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2407709563&rft_id=info:pmid/&rfr_iscdi=true |