Multimodal Joint Emotion and Game Context Recognition in League of Legends Livestreams

Video game streaming provides the viewer with a rich set of audio-visual data, conveying information both with regards to the game itself, through game footage and audio, as well as the streamer's emotional state and behaviour via webcam footage and audio. Analysing player behaviour and discovering correlations with game context is crucial for modelling and understanding important aspects of livestreams, but comes with a significant set of challenges - such as fusing multimodal data captured by different sensors in uncontrolled ('in-the-wild') conditions. Firstly, we present, to our knowledge, the first data set of League of Legends livestreams, annotated for both streamer affect and game context. Secondly, we propose a method that exploits tensor decompositions for high-order fusion of multimodal representations. The proposed method is evaluated on the problem of jointly predicting game context and player affect, compared with a set of baseline fusion approaches such as late and early fusion.
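The abstract contrasts high-order tensor fusion with early-fusion baselines. A minimal sketch of that distinction, assuming two illustrative per-frame feature vectors (the dimensions and modality names are hypothetical, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame features for two modalities of a livestream
# (sizes are illustrative): streamer webcam and game/streamer audio.
face = rng.random(16)
audio = rng.random(8)

# Early-fusion baseline: simply concatenate the modality features.
early = np.concatenate([face, audio])    # shape (24,)

# High-order (tensor) fusion: outer product of the feature vectors,
# each augmented with a constant 1 so unimodal terms are retained
# (in the spirit of tensor-fusion-style approaches).
f = np.append(face, 1.0)                 # shape (17,)
a = np.append(audio, 1.0)                # shape (9,)
fused = np.outer(f, a).ravel()           # shape (153,)
```

The outer product captures multiplicative interactions between every pair of features across modalities, which concatenation cannot; the quadratic growth in dimensionality is what motivates the tensor decompositions the paper proposes.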

Detailed description

Bibliographic details
Main authors: Ringer, Charles; Walker, James Alfred; Nicolaou, Mihalis A
Format: Article
Language: English
creator Ringer, Charles; Walker, James Alfred; Nicolaou, Mihalis A
description Video game streaming provides the viewer with a rich set of audio-visual data, conveying information both with regards to the game itself, through game footage and audio, as well as the streamer's emotional state and behaviour via webcam footage and audio. Analysing player behaviour and discovering correlations with game context is crucial for modelling and understanding important aspects of livestreams, but comes with a significant set of challenges - such as fusing multimodal data captured by different sensors in uncontrolled ('in-the-wild') conditions. Firstly, we present, to our knowledge, the first data set of League of Legends livestreams, annotated for both streamer affect and game context. Secondly, we propose a method that exploits tensor decompositions for high-order fusion of multimodal representations. The proposed method is evaluated on the problem of jointly predicting game context and player affect, compared with a set of baseline fusion approaches such as late and early fusion.
doi_str_mv 10.48550/arxiv.1905.13694
format Article
identifier DOI: 10.48550/arxiv.1905.13694
language eng
recordid cdi_arxiv_primary_1905_13694
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title Multimodal Joint Emotion and Game Context Recognition in League of Legends Livestreams