EmotionFusion: A unified ensemble R-CNN approach for advanced facial emotion analysis
To assess non-verbal reactions to commodities, services, or products, sentiment analysis identifies exhibited human emotions using artificial-intelligence-based technology. The facial muscles flex and contract differently for each facial expression a person makes, which helps deep learning algorithms identify an emotion. ...
Saved in:
Published in: | Journal of intelligent & fuzzy systems 2023-12, Vol.45 (6), p.10141-10155 |
Main authors: | Umamageswari, A.; Deepa, S.; Bhagyalakshmi, A.; Sangari, A.; Raja, K. |
Format: | Article |
Language: | eng |
Keywords: | Algorithms; Artificial intelligence; Artificial neural networks; Autism; Computer & video games; Data mining; Deep learning; Emotions; Feature extraction; Health care; Machine learning; Object recognition; Real time; Sentiment analysis |
Online access: | Full text |
container_end_page | 10155 |
container_issue | 6 |
container_start_page | 10141 |
container_title | Journal of intelligent & fuzzy systems |
container_volume | 45 |
creator | Umamageswari, A.; Deepa, S.; Bhagyalakshmi, A.; Sangari, A.; Raja, K. |
description | To assess non-verbal reactions to commodities, services, or products, sentiment analysis identifies exhibited human emotions using artificial-intelligence-based technology. The facial muscles flex and contract differently for each facial expression a person makes, which helps deep learning algorithms identify an emotion. Facial emotion analysis has numerous applications across industries and domains that rely on understanding the emotions conveyed through facial expressions, including healthcare, security and surveillance, forensics, autism research, and cultural studies. In this study, facially expressed sentiments in real-time photographs, as well as in an existing dataset, are classified using deep-learning-based object detection. Fast Region-based Convolutional Neural Network (Fast R-CNN) is an object detection system that uses proposed regions to categorize facial expressions of emotion in real time. The study uses a high-quality video collection of 24 actors photographed while facially expressing eight distinct emotions (Happy, Sad, Disgust, Anger, Surprise, Fear, Contempt and Neutral). Fast R-CNN is used for classification, while mouth-region-based feature extraction with the Maximally Stable Extremal Regions (MSER) method is used for feature extraction. To assess the deep network's performance, the proposed work builds a confusion matrix. The network generalizes well to new images, as shown by the average recognition rate of 97.6% across the eight emotions. The proposed deep network approach can deliver superior recognition performance compared to CNN and SVM methods, and it can be applied to a variety of applications including online classrooms, video game testing, healthcare, and automated industry. (A minimal code sketch of the MSER and confusion-matrix steps appears after the record fields below.) |
doi_str_mv | 10.3233/JIFS-233842 |
format | Article |
publisher | Amsterdam: IOS Press BV |
rights | Copyright IOS Press BV 2023 |
fulltext | fulltext |
identifier | ISSN: 1064-1246 |
ispartof | Journal of intelligent & fuzzy systems, 2023-12, Vol.45 (6), p.10141-10155 |
issn | 1064-1246 1875-8967 |
language | eng |
recordid | cdi_proquest_journals_2897587608 |
source | Business Source Complete |
subjects | Algorithms; Artificial intelligence; Artificial neural networks; Autism; Computer & video games; Data mining; Deep learning; Emotions; Feature extraction; Health care; Machine learning; Object recognition; Real time; Sentiment analysis |
title | EmotionFusion: A unified ensemble R-CNN approach for advanced facial emotion analysis |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-24T00%3A49%3A07IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=EmotionFusion:%20A%20unified%20ensemble%20R-CNN%20approach%20for%20advanced%20facial%20emotion%20analysis&rft.jtitle=Journal%20of%20intelligent%20&%20fuzzy%20systems&rft.au=Umamageswari,%20A.&rft.date=2023-12-02&rft.volume=45&rft.issue=6&rft.spage=10141&rft.epage=10155&rft.pages=10141-10155&rft.issn=1064-1246&rft.eissn=1875-8967&rft_id=info:doi/10.3233/JIFS-233842&rft_dat=%3Cproquest_cross%3E2897587608%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2897587608&rft_id=info:pmid/&rfr_iscdi=true |
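The description above mentions mouth-region feature extraction with the MSER method and an evaluation based on a confusion matrix and an average recognition rate over eight emotions. The following is a minimal sketch of those two steps, not the authors' implementation: it assumes OpenCV and NumPy, stands in a Haar-cascade face detector plus a fixed lower-third crop for the paper's mouth-region step, and feeds synthetic labels into the confusion-matrix computation in place of Fast R-CNN predictions; the image path is a placeholder.

```python
# Minimal sketch (assumptions noted): MSER regions from a rough mouth crop,
# plus a confusion matrix and average recognition rate over eight emotions.
import cv2
import numpy as np

EMOTIONS = ["Happy", "Sad", "Disgust", "Anger",
            "Surprise", "Fear", "Contempt", "Neutral"]

def mouth_mser_regions(image_path: str):
    """Detect the largest face, crop its lower third as a rough mouth region
    (a placeholder for the paper's mouth-region step), and return the MSER
    regions found inside that crop."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return []
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest face
    mouth = gray[y + 2 * h // 3: y + h, x: x + w]         # lower third of the face
    regions, _bboxes = cv2.MSER_create().detectRegions(mouth)
    return regions

def confusion_matrix(y_true, y_pred, n_classes=len(EMOTIONS)):
    """Count matrix with rows = true class, columns = predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def average_recognition_rate(cm):
    """Mean per-class recall (diagonal over row sums), as a percentage."""
    per_class = np.diag(cm) / cm.sum(axis=1)
    return 100.0 * per_class.mean()

if __name__ == "__main__":
    # Toy labels only, to show the evaluation step; real labels would come
    # from the classifier's predictions on a held-out test split.
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 8, size=200)
    y_pred = np.where(rng.random(200) < 0.9, y_true,
                      rng.integers(0, 8, size=200))
    cm = confusion_matrix(y_true, y_pred)
    print(cm)
    print(f"average recognition rate: {average_recognition_rate(cm):.1f}%")
```

Per-class recall sits on the matrix diagonal divided by the row sums; its mean is the kind of "average recognition rate" figure quoted in the abstract.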