A Multi-Level Embedding Framework for Decoding Sarcasm Using Context, Emotion, and Sentiment Feature
Sarcasm detection in text poses significant challenges for traditional sentiment analysis, as it often requires an understanding of context, word meanings, and emotional undertones. For example, in the sentence “I totally love working on Christmas holiday”, detecting sarcasm depends on capturing the contrast between affective words and their context.
Published in: | Electronics (Basel) 2024-11, Vol.13 (22), p.4429 |
Main authors: | Najafabadi, Maryam Khanian; Ko, Thoon Zar Chi; Chaeikar, Saman Shojae; Shabani, Nasrin |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text
container_issue | 22 |
container_start_page | 4429 |
container_title | Electronics (Basel) |
container_volume | 13 |
creator | Najafabadi, Maryam Khanian; Ko, Thoon Zar Chi; Chaeikar, Saman Shojae; Shabani, Nasrin |
description | Sarcasm detection in text poses significant challenges for traditional sentiment analysis, as it often requires an understanding of context, word meanings, and emotional undertones. For example, in the sentence “I totally love working on Christmas holiday”, detecting sarcasm depends on capturing the contrast between affective words and their context. Existing methods often focus on single-embedding levels, such as word-level or affective-level, neglecting the importance of multi-level context. In this paper, we propose SAWE (Sentence, Affect, and Word Embeddings), a framework that combines sentence-level, affect-level, and context-dependent word embeddings to improve sarcasm detection. We use pre-trained transformer models SBERT and RoBERTa, enhanced with a bidirectional GRU and self-attention, alongside SenticNet to extract affective words. The combined embeddings are processed through a CNN and classified using a multilayer perceptron (MLP). SAWE is evaluated on two benchmark datasets, Sarcasm Corpus V2 (SV2) and Self-Annotated Reddit Corpus 2.0 (SARC 2.0), outperforming previous methods, particularly on long texts, with a 4.2% improvement on F1-Score for SV2. Our results emphasize the importance of multi-level embeddings and contextual information in detecting sarcasm, demonstrating a new direction for future research. |
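The description above names the building blocks of SAWE but not how they fit together. As a rough, hedged illustration only, the PyTorch sketch below wires token-level contextual embeddings (standing in for RoBERTa outputs), affect-level features (standing in for SenticNet-derived vectors), and a sentence embedding (standing in for SBERT) through a bidirectional GRU, self-attention, a 1-D CNN, and an MLP classifier. All dimensions, the pooling step, and the fusion order are assumptions made for this sketch and are not taken from the paper.

```python
# Minimal sketch of a SAWE-style multi-level embedding pipeline.
# Illustrative only: layer sizes and fusion strategy are assumptions,
# not the authors' published configuration.
import torch
import torch.nn as nn

class SAWESketch(nn.Module):
    """Fuses word-level, affect-level, and sentence-level embeddings."""
    def __init__(self, word_dim=768, sent_dim=384, affect_dim=768, hidden=256, n_classes=2):
        super().__init__()
        # Bidirectional GRU over context-dependent word embeddings (e.g. RoBERTa token outputs)
        self.gru = nn.GRU(word_dim, hidden, batch_first=True, bidirectional=True)
        # Self-attention over the GRU hidden states
        self.attn = nn.MultiheadAttention(embed_dim=2 * hidden, num_heads=4, batch_first=True)
        # 1-D convolution over the token-level fusion of attention output and affect features
        self.conv = nn.Conv1d(2 * hidden + affect_dim, hidden, kernel_size=3, padding=1)
        # MLP classifier over pooled CNN features concatenated with the sentence embedding
        self.mlp = nn.Sequential(
            nn.Linear(hidden + sent_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_classes)
        )

    def forward(self, word_emb, sent_emb, affect_emb):
        # word_emb:   (batch, seq_len, word_dim)   -- contextual token embeddings
        # sent_emb:   (batch, sent_dim)            -- sentence-level embedding
        # affect_emb: (batch, seq_len, affect_dim) -- per-token affect features
        h, _ = self.gru(word_emb)                         # (batch, seq_len, 2*hidden)
        h, _ = self.attn(h, h, h)                         # self-attention, same shape
        fused = torch.cat([h, affect_emb], dim=-1)        # token-level fusion
        c = torch.relu(self.conv(fused.transpose(1, 2)))  # (batch, hidden, seq_len)
        pooled = c.max(dim=-1).values                     # global max pooling over tokens
        return self.mlp(torch.cat([pooled, sent_emb], dim=-1))

# Dummy tensors stand in for RoBERTa, SBERT, and SenticNet outputs (shapes are assumptions).
model = SAWESketch()
logits = model(torch.randn(2, 20, 768), torch.randn(2, 384), torch.randn(2, 20, 768))
print(logits.shape)  # torch.Size([2, 2])
```

In a real pipeline the dummy tensors would be replaced by outputs of the pre-trained encoders and the SenticNet lookup; only the input shapes shown in the comments matter for the sketch.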
doi_str_mv | 10.3390/electronics13224429 |
format | Article |
fullrecord | <record><control><sourceid>gale_proqu</sourceid><recordid>TN_cdi_proquest_journals_3133009571</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><galeid>A818202617</galeid><sourcerecordid>A818202617</sourcerecordid><originalsourceid>FETCH-LOGICAL-c196t-3f4be40ac6008131fb6422d7a4e79089d70913978ca4b6f777d4369fb644d23f3</originalsourceid><addsrcrecordid>eNptUUtLAzEQXkRB0f4CLwGv3ZqXm82x1FaFiofqeUmTiUR3k5qkPv69qfXgwRmYF983D6aqzgmeMCbxJfSgcwze6UQYpZxTeVCdUCxkLamkh3_i42qU0gsuIglrGT6pzBTdb_vs6iW8Q4_mwxqMcf4ZLaIa4CPEV2RDRNegw095paJWaUBPaZfNgs_wmceFF7ILfoyUN2gFPruhGLQAlbcRzqojq_oEo19_Wj0t5o-z23r5cHM3my5rTWSTa2b5GjhWusG4JYzYdcMpNUJxEBK30ojd2lK0WvF1Y4UQhrNG7mDcUGbZaXWx77uJ4W0LKXcvYRt9Gdkxwli5-kqQgprsUc-qh855G3JUuqiBwengwbpSn7akpZg2RBQC2xN0DClFsN0mukHFr47gbveC7p8XsG98EntY</addsrcrecordid><sourcetype>Aggregation Database</sourcetype><iscdi>true</iscdi><recordtype>article</recordtype><pqid>3133009571</pqid></control><display><type>article</type><title>A Multi-Level Embedding Framework for Decoding Sarcasm Using Context, Emotion, and Sentiment Feature</title><source>MDPI - Multidisciplinary Digital Publishing Institute</source><source>Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals</source><creator>Najafabadi, Maryam Khanian ; Ko, Thoon Zar Chi ; Chaeikar, Saman Shojae ; Shabani, Nasrin</creator><creatorcontrib>Najafabadi, Maryam Khanian ; Ko, Thoon Zar Chi ; Chaeikar, Saman Shojae ; Shabani, Nasrin</creatorcontrib><description>Sarcasm detection in text poses significant challenges for traditional sentiment analysis, as it often requires an understanding of context, word meanings, and emotional undertones. For example, in the sentence “I totally love working on Christmas holiday”, detecting sarcasm depends on capturing the contrast between affective words and their context. Existing methods often focus on single-embedding levels, such as word-level or affective-level, neglecting the importance of multi-level context. In this paper, we propose SAWE (Sentence, Affect, and Word Embeddings), a framework that combines sentence-level, affect-level, and context-dependent word embeddings to improve sarcasm detection. We use pre-trained transformer models SBERT and RoBERTa, enhanced with a bidirectional GRU and self-attention, alongside SenticNet to extract affective words. The combined embeddings are processed through a CNN and classified using a multilayer perceptron (MLP). SAWE is evaluated on two benchmark datasets, Sarcasm Corpus V2 (SV2) and Self-Annotated Reddit Corpus 2.0 (SARC 2.0), outperforming previous methods, particularly on long texts, with a 4.2% improvement on F1-Score for SV2. Our results emphasize the importance of multi-level embeddings and contextual information in detecting sarcasm, demonstrating a new direction for future research.</description><identifier>ISSN: 2079-9292</identifier><identifier>EISSN: 2079-9292</identifier><identifier>DOI: 10.3390/electronics13224429</identifier><language>eng</language><publisher>Basel: MDPI AG</publisher><subject>Analysis ; Classification ; Computational linguistics ; Context ; Embedding ; Language processing ; Multilayer perceptrons ; Natural language interfaces ; Popularity ; Research methodology ; Semantics ; Sentences ; Sentiment analysis ; Social networks ; Words (language)</subject><ispartof>Electronics (Basel), 2024-11, Vol.13 (22), p.4429</ispartof><rights>COPYRIGHT 2024 MDPI AG</rights><rights>2024 by the authors. Licensee MDPI, Basel, Switzerland. 
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.</rights><lds50>peer_reviewed</lds50><oa>free_for_read</oa><woscitedreferencessubscribed>false</woscitedreferencessubscribed><cites>FETCH-LOGICAL-c196t-3f4be40ac6008131fb6422d7a4e79089d70913978ca4b6f777d4369fb644d23f3</cites><orcidid>0000-0002-2958-6901 ; 0000-0002-5071-7515 ; 0000-0001-7283-5101</orcidid></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><link.rule.ids>314,776,780,27901,27902</link.rule.ids></links><search><creatorcontrib>Najafabadi, Maryam Khanian</creatorcontrib><creatorcontrib>Ko, Thoon Zar Chi</creatorcontrib><creatorcontrib>Chaeikar, Saman Shojae</creatorcontrib><creatorcontrib>Shabani, Nasrin</creatorcontrib><title>A Multi-Level Embedding Framework for Decoding Sarcasm Using Context, Emotion, and Sentiment Feature</title><title>Electronics (Basel)</title><description>Sarcasm detection in text poses significant challenges for traditional sentiment analysis, as it often requires an understanding of context, word meanings, and emotional undertones. For example, in the sentence “I totally love working on Christmas holiday”, detecting sarcasm depends on capturing the contrast between affective words and their context. Existing methods often focus on single-embedding levels, such as word-level or affective-level, neglecting the importance of multi-level context. In this paper, we propose SAWE (Sentence, Affect, and Word Embeddings), a framework that combines sentence-level, affect-level, and context-dependent word embeddings to improve sarcasm detection. We use pre-trained transformer models SBERT and RoBERTa, enhanced with a bidirectional GRU and self-attention, alongside SenticNet to extract affective words. The combined embeddings are processed through a CNN and classified using a multilayer perceptron (MLP). SAWE is evaluated on two benchmark datasets, Sarcasm Corpus V2 (SV2) and Self-Annotated Reddit Corpus 2.0 (SARC 2.0), outperforming previous methods, particularly on long texts, with a 4.2% improvement on F1-Score for SV2. 
Our results emphasize the importance of multi-level embeddings and contextual information in detecting sarcasm, demonstrating a new direction for future research.</description><subject>Analysis</subject><subject>Classification</subject><subject>Computational linguistics</subject><subject>Context</subject><subject>Embedding</subject><subject>Language processing</subject><subject>Multilayer perceptrons</subject><subject>Natural language interfaces</subject><subject>Popularity</subject><subject>Research methodology</subject><subject>Semantics</subject><subject>Sentences</subject><subject>Sentiment analysis</subject><subject>Social networks</subject><subject>Words (language)</subject><issn>2079-9292</issn><issn>2079-9292</issn><fulltext>true</fulltext><rsrctype>article</rsrctype><creationdate>2024</creationdate><recordtype>article</recordtype><sourceid>BENPR</sourceid><recordid>eNptUUtLAzEQXkRB0f4CLwGv3ZqXm82x1FaFiofqeUmTiUR3k5qkPv69qfXgwRmYF983D6aqzgmeMCbxJfSgcwze6UQYpZxTeVCdUCxkLamkh3_i42qU0gsuIglrGT6pzBTdb_vs6iW8Q4_mwxqMcf4ZLaIa4CPEV2RDRNegw095paJWaUBPaZfNgs_wmceFF7ILfoyUN2gFPruhGLQAlbcRzqojq_oEo19_Wj0t5o-z23r5cHM3my5rTWSTa2b5GjhWusG4JYzYdcMpNUJxEBK30ojd2lK0WvF1Y4UQhrNG7mDcUGbZaXWx77uJ4W0LKXcvYRt9Gdkxwli5-kqQgprsUc-qh855G3JUuqiBwengwbpSn7akpZg2RBQC2xN0DClFsN0mukHFr47gbveC7p8XsG98EntY</recordid><startdate>20241101</startdate><enddate>20241101</enddate><creator>Najafabadi, Maryam Khanian</creator><creator>Ko, Thoon Zar Chi</creator><creator>Chaeikar, Saman Shojae</creator><creator>Shabani, Nasrin</creator><general>MDPI AG</general><scope>AAYXX</scope><scope>CITATION</scope><scope>7SP</scope><scope>8FD</scope><scope>8FE</scope><scope>8FG</scope><scope>ABUWG</scope><scope>AFKRA</scope><scope>ARAPS</scope><scope>AZQEC</scope><scope>BENPR</scope><scope>BGLVJ</scope><scope>CCPQU</scope><scope>DWQXO</scope><scope>HCIFZ</scope><scope>L7M</scope><scope>P5Z</scope><scope>P62</scope><scope>PIMPY</scope><scope>PQEST</scope><scope>PQQKQ</scope><scope>PQUKI</scope><scope>PRINS</scope><orcidid>https://orcid.org/0000-0002-2958-6901</orcidid><orcidid>https://orcid.org/0000-0002-5071-7515</orcidid><orcidid>https://orcid.org/0000-0001-7283-5101</orcidid></search><sort><creationdate>20241101</creationdate><title>A Multi-Level Embedding Framework for Decoding Sarcasm Using Context, Emotion, and Sentiment Feature</title><author>Najafabadi, Maryam Khanian ; Ko, Thoon Zar Chi ; Chaeikar, Saman Shojae ; Shabani, Nasrin</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-LOGICAL-c196t-3f4be40ac6008131fb6422d7a4e79089d70913978ca4b6f777d4369fb644d23f3</frbrgroupid><rsrctype>articles</rsrctype><prefilter>articles</prefilter><language>eng</language><creationdate>2024</creationdate><topic>Analysis</topic><topic>Classification</topic><topic>Computational linguistics</topic><topic>Context</topic><topic>Embedding</topic><topic>Language processing</topic><topic>Multilayer perceptrons</topic><topic>Natural language interfaces</topic><topic>Popularity</topic><topic>Research methodology</topic><topic>Semantics</topic><topic>Sentences</topic><topic>Sentiment analysis</topic><topic>Social networks</topic><topic>Words (language)</topic><toplevel>peer_reviewed</toplevel><toplevel>online_resources</toplevel><creatorcontrib>Najafabadi, Maryam Khanian</creatorcontrib><creatorcontrib>Ko, Thoon Zar Chi</creatorcontrib><creatorcontrib>Chaeikar, Saman Shojae</creatorcontrib><creatorcontrib>Shabani, Nasrin</creatorcontrib><collection>CrossRef</collection><collection>Electronics & Communications 
Abstracts</collection><collection>Technology Research Database</collection><collection>ProQuest SciTech Collection</collection><collection>ProQuest Technology Collection</collection><collection>ProQuest Central (Alumni Edition)</collection><collection>ProQuest Central UK/Ireland</collection><collection>Advanced Technologies & Aerospace Collection</collection><collection>ProQuest Central Essentials</collection><collection>ProQuest Central</collection><collection>Technology Collection</collection><collection>ProQuest One Community College</collection><collection>ProQuest Central Korea</collection><collection>SciTech Premium Collection</collection><collection>Advanced Technologies Database with Aerospace</collection><collection>Advanced Technologies & Aerospace Database</collection><collection>ProQuest Advanced Technologies & Aerospace Collection</collection><collection>Publicly Available Content Database</collection><collection>ProQuest One Academic Eastern Edition (DO NOT USE)</collection><collection>ProQuest One Academic</collection><collection>ProQuest One Academic UKI Edition</collection><collection>ProQuest Central China</collection><jtitle>Electronics (Basel)</jtitle></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext</fulltext></delivery><addata><au>Najafabadi, Maryam Khanian</au><au>Ko, Thoon Zar Chi</au><au>Chaeikar, Saman Shojae</au><au>Shabani, Nasrin</au><format>journal</format><genre>article</genre><ristype>JOUR</ristype><atitle>A Multi-Level Embedding Framework for Decoding Sarcasm Using Context, Emotion, and Sentiment Feature</atitle><jtitle>Electronics (Basel)</jtitle><date>2024-11-01</date><risdate>2024</risdate><volume>13</volume><issue>22</issue><spage>4429</spage><pages>4429-</pages><issn>2079-9292</issn><eissn>2079-9292</eissn><abstract>Sarcasm detection in text poses significant challenges for traditional sentiment analysis, as it often requires an understanding of context, word meanings, and emotional undertones. For example, in the sentence “I totally love working on Christmas holiday”, detecting sarcasm depends on capturing the contrast between affective words and their context. Existing methods often focus on single-embedding levels, such as word-level or affective-level, neglecting the importance of multi-level context. In this paper, we propose SAWE (Sentence, Affect, and Word Embeddings), a framework that combines sentence-level, affect-level, and context-dependent word embeddings to improve sarcasm detection. We use pre-trained transformer models SBERT and RoBERTa, enhanced with a bidirectional GRU and self-attention, alongside SenticNet to extract affective words. The combined embeddings are processed through a CNN and classified using a multilayer perceptron (MLP). SAWE is evaluated on two benchmark datasets, Sarcasm Corpus V2 (SV2) and Self-Annotated Reddit Corpus 2.0 (SARC 2.0), outperforming previous methods, particularly on long texts, with a 4.2% improvement on F1-Score for SV2. Our results emphasize the importance of multi-level embeddings and contextual information in detecting sarcasm, demonstrating a new direction for future research.</abstract><cop>Basel</cop><pub>MDPI AG</pub><doi>10.3390/electronics13224429</doi><orcidid>https://orcid.org/0000-0002-2958-6901</orcidid><orcidid>https://orcid.org/0000-0002-5071-7515</orcidid><orcidid>https://orcid.org/0000-0001-7283-5101</orcidid><oa>free_for_read</oa></addata></record> |
fulltext | fulltext |
identifier | ISSN: 2079-9292 |
ispartof | Electronics (Basel), 2024-11, Vol.13 (22), p.4429 |
issn | 2079-9292 2079-9292 |
language | eng |
recordid | cdi_proquest_journals_3133009571 |
source | MDPI - Multidisciplinary Digital Publishing Institute; Elektronische Zeitschriftenbibliothek (Electronic Journals Library) - freely accessible e-journals |
subjects | Analysis; Classification; Computational linguistics; Context; Embedding; Language processing; Multilayer perceptrons; Natural language interfaces; Popularity; Research methodology; Semantics; Sentences; Sentiment analysis; Social networks; Words (language) |
title | A Multi-Level Embedding Framework for Decoding Sarcasm Using Context, Emotion, and Sentiment Feature |