Computer-Aided Recognition Based on Decision-Level Multimodal Fusion for Depression
Journal | IEEE Journal of Biomedical and Health Informatics |
Published | 2022-07 |
Volume | 26 |
Issue | 7 |
Pages | 3466-3477 |
Authors | Zhang, Bingtao; Cai, Hanshu; Song, Yubo; Tao, Lei; Li, Yanlin |
Abstract | To address the problem of depression recognition, this paper proposes a computer-aided recognition framework based on decision-level multimodal fusion. The idea of multimodal fusion is already present in a Song Dynasty Chinese poem: "one gets different impressions of a mountain when viewing it from the front or sideways, at close range or from afar." An objective and comprehensive analysis captures the nature of depression more faithfully, and multimodal data carry more information about depression than any single modality. Linear electroencephalography (EEG) features based on an adaptive autoregressive (AR) model and typical nonlinear EEG features are extracted. EEG features related to depression, together with graph-metric features of depression-related brain regions, are selected as the data basis for multimodal fusion to ensure data diversity. A decision-level computer-aided depression recognition model is then built on the theory of multi-agent cooperation. The experimental data come from 24 depressed patients and 29 healthy controls (HC). Multi-group controlled trials show that, compared with single-modal or independent classifiers, the decision-level multimodal fusion method recognizes depression more reliably, reaching a highest accuracy of 92.13%. In addition, the results suggest that improving brain regions associated with information processing may help alleviate and treat depression, and they make clear that no classifier is universally suitable for every condition. (Minimal code sketches of the feature-extraction and fusion steps follow the record fields below.) |
DOI | 10.1109/JBHI.2022.3165640 |
PMID | 35389872 |
Format | Article |
Publisher | IEEE (United States) |
ISSN | 2168-2194 |
EISSN | 2168-2208 |
Language | English |
Source | IEEE Electronic Library (IEL) |
Subjects | Bioinformatics; Brain; Classifiers; Clinical trials; Computer aided decision processes; Computer-aided recognition; Data processing; Decision theory; Depression; Diseases; EEG; Electrodes; Electroencephalography; Face recognition; Feature extraction; Information processing; Mountains; Multiagent systems; multimodal fusion; Recognition; Regression models |
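The abstract names three technical ingredients: linear EEG features from an adaptive AR model, graph-metric features of depression-related brain regions, and decision-level fusion of per-modality decisions. The Python sketches below illustrate what each step could look like. They are minimal illustrations reconstructed from the abstract alone, not the authors' implementation: every function name, parameter value (AR order, connectivity threshold, classifier choice, voting rule), and data layout is an assumption.

First, AR coefficients as linear features for one EEG channel, estimated by least squares over lagged samples (the model order of 6 is an arbitrary illustrative choice):

```python
import numpy as np

def ar_coefficients(signal: np.ndarray, order: int = 6) -> np.ndarray:
    """Estimate AR coefficients for one EEG channel by least squares (illustrative order)."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                                    # remove the DC offset
    # Design matrix: row i holds the `order` samples preceding x[i], most recent first.
    rows = [x[i - order:i][::-1] for i in range(order, len(x))]
    X = np.vstack(rows)
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)      # AR coefficients a_1 .. a_p
    return coeffs

def linear_eeg_features(epoch: np.ndarray, order: int = 6) -> np.ndarray:
    """Concatenate per-channel AR coefficients for a (channels x samples) EEG epoch."""
    return np.concatenate([ar_coefficients(channel, order) for channel in epoch])
```

Second, graph metrics computed from an EEG connectivity matrix, standing in for the paper's graph-metric features (the threshold and the two chosen metrics are illustrative):

```python
import networkx as nx
import numpy as np

def graph_metric_features(connectivity: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    """Reduce a channel-by-channel connectivity matrix to two simple graph metrics."""
    adjacency = (np.abs(connectivity) > threshold).astype(int)
    np.fill_diagonal(adjacency, 0)                      # no self-loops
    graph = nx.from_numpy_array(adjacency)
    return np.array([
        nx.average_clustering(graph),                   # segregation of the network
        nx.global_efficiency(graph),                    # integration (inverse path length)
    ])
```

Third, decision-level fusion: one classifier per modality, with each modality's vote weighted by its cross-validated accuracy. This accuracy-weighted vote is a generic stand-in for the paper's multi-agent cooperation scheme, not a reproduction of it:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def decision_level_fusion(train_views, y_train, test_views):
    """Fuse per-modality decisions by accuracy-weighted voting (labels: 0 = HC, 1 = depressed)."""
    votes = np.zeros(len(test_views[0]))
    for X_train, X_test in zip(train_views, test_views):
        clf = SVC(kernel="rbf")
        weight = cross_val_score(clf, X_train, y_train, cv=5).mean()  # modality reliability
        clf.fit(X_train, y_train)
        votes += weight * (2 * clf.predict(X_test) - 1)               # map {0,1} -> {-1,+1}
    return (votes > 0).astype(int)                                    # fused decision per subject
```

Each `*_views` argument is a list with one feature matrix per modality (for example, linear EEG features, nonlinear EEG features, and graph metrics), with rows aligned to the same subjects across modalities.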