On Extending Neural Networks with Loss Ensembles for Text Classification
Ensemble techniques are powerful approaches that combine several weak learners to build a stronger one. As a meta learning framework, ensemble techniques can easily be applied to many machine learning techniques. In this paper we propose a neural network extended with an ensemble loss function for text classification. The weight of each weak loss function is tuned within the training phase through the gradient propagation optimization method of the neural network. The approach is evaluated on several text classification datasets. We also evaluate its performance in various environments with several degrees of label noise. Experimental results indicate an improvement of the results and strong resilience against label noise in comparison with other methods.
Saved in:
Main Authors: | Hajiabadi, Hamideh ; Molla-Aliod, Diego ; Monsefi, Reza |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computation and Language ; Computer Science - Learning ; Statistics - Machine Learning |
Online Access: | Order full text |
creator | Hajiabadi, Hamideh ; Molla-Aliod, Diego ; Monsefi, Reza |
description | Ensemble techniques are powerful approaches that combine several weak
learners to build a stronger one. As a meta learning framework, ensemble
techniques can easily be applied to many machine learning techniques. In this
paper we propose a neural network extended with an ensemble loss function for
text classification. The weight of each weak loss function is tuned within the
training phase through the gradient propagation optimization method of the
neural network. The approach is evaluated on several text classification
datasets. We also evaluate its performance in various environments with several
degrees of label noise. Experimental results indicate an improvement of the
results and strong resilience against label noise in comparison with other
methods. |
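The abstract above describes an ensemble loss whose mixing weights are tuned by the network's own gradient-based training. The following NumPy sketch illustrates that mechanism on a toy scale; the softmax parameterization of the weights, the choice of cross-entropy and squared error as the two weak losses, and all function names are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Two hypothetical weak losses on a predicted probability p and a label y in {0, 1}.
def cross_entropy(p, y):
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def squared_error(p, y):
    return (p - y) ** 2

def ensemble_loss(alpha, p, y):
    """Weighted sum of the weak losses. Weights = softmax(alpha), so they
    stay positive and sum to one (one plausible parameterization)."""
    w = softmax(alpha)
    losses = np.array([cross_entropy(p, y), squared_error(p, y)])
    return float(w @ losses)

def grad_alpha(alpha, p, y):
    """Analytic gradient of the ensemble loss w.r.t. alpha, so the weights
    can be updated by the same backpropagation pass that trains the network.
    With w = softmax(alpha) and L = w @ losses:  dL/d alpha_k = w_k (l_k - L)."""
    w = softmax(alpha)
    losses = np.array([cross_entropy(p, y), squared_error(p, y)])
    return w * (losses - w @ losses)

alpha = np.zeros(2)          # start from equal weights over the weak losses
for _ in range(100):         # toy loop: only the loss weights are updated here
    alpha -= 0.5 * grad_alpha(alpha, p=0.7, y=1)
```

Run on its own, this toy update shifts weight toward whichever weak loss is currently smaller; in the setting the abstract describes, the weights would instead be trained jointly with the network parameters, so the ensemble adapts while the predictions themselves improve.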
doi_str_mv | 10.48550/arxiv.1711.05170 |
format | Article |
fullrecord | (raw Primo/arXiv XML record omitted; it duplicates the title, creators, abstract, subjects, and DOI already listed in this record) |
creationdate | 2017-11-14 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 (free to read) |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.1711.05170 |
language | eng |
recordid | cdi_arxiv_primary_1711_05170 |
source | arXiv.org |
subjects | Computer Science - Computation and Language Computer Science - Learning Statistics - Machine Learning |
title | On Extending Neural Networks with Loss Ensembles for Text Classification |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-30T12%3A52%3A22IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=On%20Extending%20Neural%20Networks%20with%20Loss%20Ensembles%20for%20Text%20Classification&rft.au=Hajiabadi,%20Hamideh&rft.date=2017-11-14&rft_id=info:doi/10.48550/arxiv.1711.05170&rft_dat=%3Carxiv_GOX%3E1711_05170%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |