Multimodal Relational Tensor Network for Sentiment and Emotion Classification

Understanding affect from video segments has brought researchers from the language, audio, and video domains together. Most of the current multimodal research in this area deals with various techniques to fuse the modalities and mostly treats the segments of a video independently. Motivated by the work of (Zadeh et al., 2017) and (Poria et al., 2017), we present our architecture, Relational Tensor Network, where we use the inter-modal interactions within a segment (intra-segment) and also consider the sequence of segments in a video to model the inter-segment inter-modal interactions. We also generate rich representations of the text and audio modalities by leveraging richer audio and linguistic context along with fusing fine-grained knowledge-based polarity scores from text. We present the results of our model on the CMU-MOSEI dataset and show that our model outperforms many baselines and state-of-the-art methods for sentiment classification and emotion recognition.
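The intra-segment step described in the abstract combines the text, audio, and video features of a single segment through their tensor (outer-product) interactions, in the spirit of the cited Tensor Fusion work. The sketch below is a minimal, illustrative version of that fusion step; the feature dimensions and the tensor_fuse helper are assumptions for demonstration, not the authors' implementation.

```python
import torch

# Illustrative intra-segment tensor fusion (in the spirit of Zadeh et al., 2017):
# each modality vector is padded with a constant 1 so the outer product keeps
# unimodal and bimodal terms alongside the trimodal interactions.
def tensor_fuse(text, audio, video):
    """text, audio, video: 1-D feature tensors for one segment."""
    one = torch.ones(1)
    t = torch.cat([text, one])          # (dt+1,)
    a = torch.cat([audio, one])         # (da+1,)
    v = torch.cat([video, one])         # (dv+1,)
    # Outer product over the three modalities -> (dt+1, da+1, dv+1) tensor,
    # flattened into one intra-segment representation.
    fused = torch.einsum('i,j,k->ijk', t, a, v)
    return fused.flatten()

# Toy per-segment features (dimensions are made up for illustration).
text_feat = torch.randn(300)   # e.g., sentence embedding
audio_feat = torch.randn(74)   # e.g., acoustic descriptors
video_feat = torch.randn(35)   # e.g., facial-expression features
segment_repr = tensor_fuse(text_feat, audio_feat, video_feat)
print(segment_repr.shape)      # torch.Size([812700]) = 301 * 75 * 36
```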

Detailed Description

Bibliographic Details
Main Authors: Sahay, Saurav; Kumar, Shachi H; Xia, Rui; Huang, Jonathan; Nachman, Lama
Format: Article
Language: English
Keywords: Computer Science - Computation and Language
Online Access: Order full text
creator Sahay, Saurav; Kumar, Shachi H; Xia, Rui; Huang, Jonathan; Nachman, Lama
description Understanding affect from video segments has brought researchers from the language, audio, and video domains together. Most of the current multimodal research in this area deals with various techniques to fuse the modalities and mostly treats the segments of a video independently. Motivated by the work of (Zadeh et al., 2017) and (Poria et al., 2017), we present our architecture, Relational Tensor Network, where we use the inter-modal interactions within a segment (intra-segment) and also consider the sequence of segments in a video to model the inter-segment inter-modal interactions. We also generate rich representations of the text and audio modalities by leveraging richer audio and linguistic context along with fusing fine-grained knowledge-based polarity scores from text. We present the results of our model on the CMU-MOSEI dataset and show that our model outperforms many baselines and state-of-the-art methods for sentiment classification and emotion recognition.
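The inter-segment part of the description can be pictured as a recurrent layer running over the per-segment fused representations of one video, so that each sentiment or emotion prediction sees its neighboring segments. The sketch below is a hypothetical illustration of that idea; the layer sizes, the choice of an LSTM, and the SegmentSequenceClassifier class are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

# Illustrative inter-segment model: an LSTM over the sequence of fused segment
# representations of a video, producing per-segment sentiment/emotion logits.
class SegmentSequenceClassifier(nn.Module):
    def __init__(self, seg_dim, hidden_dim=128, num_classes=6):
        super().__init__()
        self.project = nn.Linear(seg_dim, hidden_dim)      # compress fused tensor
        self.context = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)     # per-segment labels

    def forward(self, segments):                           # (batch, n_segments, seg_dim)
        h = torch.relu(self.project(segments))
        h, _ = self.context(h)                             # inter-segment interactions
        return self.head(h)                                # (batch, n_segments, num_classes)

# Toy usage: two videos with 10 segments each, every segment a 512-d fused vector.
# Knowledge-based polarity scores would simply be concatenated into the
# per-segment features before the projection step.
model = SegmentSequenceClassifier(seg_dim=512)
logits = model(torch.randn(2, 10, 512))
print(logits.shape)  # torch.Size([2, 10, 6])
```

The recurrent layer here stands in for any sequence model over segments; the point is only that segment-level predictions are conditioned on surrounding segments rather than made independently.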
doi_str_mv 10.48550/arxiv.1806.02923
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.1806.02923
language eng
recordid cdi_arxiv_primary_1806_02923
source arXiv.org
subjects Computer Science - Computation and Language
title Multimodal Relational Tensor Network for Sentiment and Emotion Classification
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-01T17%3A09%3A53IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Multimodal%20Relational%20Tensor%20Network%20for%20Sentiment%20and%20Emotion%20Classification&rft.au=Sahay,%20Saurav&rft.date=2018-06-07&rft_id=info:doi/10.48550/arxiv.1806.02923&rft_dat=%3Carxiv_GOX%3E1806_02923%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true