Interpretable Seizure Classification Using Unprocessed EEG With Multi-Channel Attentive Feature Fusion
Published in | IEEE Sensors Journal, 2021-09, Vol. 21 (17), pp. 19186-19197 |
Main authors | Priyasad, Darshana; Fernando, Tharindu; Denman, Simon; Sridharan, Sridha; Fookes, Clinton |
Format | Article |
Language | English |
container_end_page | 19197 |
container_issue | 17 |
container_start_page | 19186 |
container_title | IEEE sensors journal |
container_volume | 21 |
creator | Priyasad, Darshana; Fernando, Tharindu; Denman, Simon; Sridharan, Sridha; Fookes, Clinton |
description | Identification of seizure type plays a vital role during clinical diagnosis and treatment of epilepsy. However, the clinical evaluation of seizure type is highly dependent on the observed medical symptoms and the experience of the epileptologists who perform the evaluation. A key diagnostic tool is the electroencephalogram (EEG), which captures brain activity and can be used to determine the type of seizure occurring. EEG channels show non-stationary and dynamic behavior following the onset of a seizure event, and each EEG channel can display unique characteristics based on the seizure type and the epileptic foci. This paper proposes a novel deep learning architecture with attention-driven data fusion using raw scalp EEG data from a 10-20 layout, where independent shallow deep networks are trained on each channel. Unlike most state-of-the-art methods that first employ a data engineering step, we directly pass the EEG signal from each channel through a deep convolutional network consisting of SincNet and Conv1D layers, which learn robust features directly from the input signals, increasing model interpretability. However, the importance of each channel and the temporal information varies based on conditions particular to the recording, and this can adversely affect the overall recognition. We propose an approach based on the attentive fusion of channels to ensure only salient features from individual channel encoders are captured, passing the fused information to a Deep Neural Network for classification. Our proposed method has obtained an average F1-score of 0.967 on the Temple University Hospital Seizure Corpus, the largest publicly available seizure dataset. |
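The description above outlines the architecture at a high level: one shallow encoder per EEG channel built from SincNet and Conv1D layers, an attention mechanism that weights the per-channel embeddings before fusion, and a dense network that maps the fused representation to a seizure type. The PyTorch sketch below is a rough illustration of how such a pipeline could be wired together; the layer sizes, sampling rate, sinc-filter parameterisation, channel count and class count are all assumptions for the example, not the authors' published implementation.

```python
# Illustrative sketch only -- layer sizes, sampling rate and counts are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SincConv1d(nn.Module):
    """Simplified SincNet-style layer: each kernel is a learnable band-pass sinc filter."""

    def __init__(self, out_channels: int, kernel_size: int, sample_rate: int = 250):
        super().__init__()
        assert kernel_size % 2 == 1, "odd kernel keeps the filter symmetric"
        self.kernel_size, self.sample_rate = kernel_size, sample_rate
        # Learnable low cut-offs and bandwidths in Hz.
        self.low_hz = nn.Parameter(torch.linspace(1.0, sample_rate / 2 - 2.0, out_channels).unsqueeze(1))
        self.band_hz = nn.Parameter(torch.full((out_channels, 1), 2.0))
        self.register_buffer("t", torch.arange(-(kernel_size // 2), kernel_size // 2 + 1) / sample_rate)
        self.register_buffer("window", torch.hamming_window(kernel_size, periodic=False))

    def forward(self, x):  # x: (batch, 1, time)
        low = torch.abs(self.low_hz)
        high = torch.clamp(low + torch.abs(self.band_hz), max=self.sample_rate / 2)
        low_pass = lambda fc: 2 * fc * torch.sinc(2 * fc * self.t)  # low-pass sinc filter bank
        filters = (low_pass(high) - low_pass(low)) * self.window    # band-pass = difference of low-passes
        filters = filters / (filters.norm(dim=1, keepdim=True) + 1e-8)
        return F.conv1d(x, filters.unsqueeze(1), padding=self.kernel_size // 2)


class ChannelEncoder(nn.Module):
    """Shallow per-channel encoder: sinc filters followed by ordinary Conv1d blocks."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            SincConv1d(16, kernel_size=65), nn.BatchNorm1d(16), nn.LeakyReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, 7, padding=3), nn.BatchNorm1d(32), nn.LeakyReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, feat_dim, 5, padding=2), nn.BatchNorm1d(feat_dim), nn.LeakyReLU(),
            nn.AdaptiveAvgPool1d(1),                # collapse time to one embedding per channel
        )

    def forward(self, x):                           # x: (batch, 1, time)
        return self.net(x).squeeze(-1)              # (batch, feat_dim)


class AttentiveFusionClassifier(nn.Module):
    """One encoder per EEG channel; attention weights decide each channel's contribution."""

    def __init__(self, n_channels: int = 20, feat_dim: int = 64, n_classes: int = 8):
        super().__init__()                          # channel/class counts depend on montage and dataset
        self.encoders = nn.ModuleList(ChannelEncoder(feat_dim) for _ in range(n_channels))
        self.attn = nn.Linear(feat_dim, 1)          # scores each channel embedding
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Dropout(0.5), nn.Linear(128, n_classes),
        )

    def forward(self, x):                           # x: (batch, n_channels, time), raw EEG
        feats = torch.stack([enc(x[:, i:i + 1]) for i, enc in enumerate(self.encoders)], dim=1)
        weights = torch.softmax(self.attn(feats), dim=1)  # (batch, n_channels, 1)
        fused = (weights * feats).sum(dim=1)        # attention-weighted channel fusion
        return self.classifier(fused)               # seizure-type logits


logits = AttentiveFusionClassifier()(torch.randn(2, 20, 2500))  # e.g. 10 s of 20-channel EEG at 250 Hz
```

The per-channel attention weights in `weights` are what would make such a model inspectable: they expose how much each electrode contributed to a given prediction, in line with the interpretability claim in the description.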
doi_str_mv | 10.1109/JSEN.2021.3090062 |
format | Article |
identifier | ISSN: 1530-437X |
ispartof | IEEE sensors journal, 2021-09, Vol.21 (17), p.19186-19197 |
issn | 1530-437X; 1558-1748 (EISSN) |
language | eng |
source | IEEE Electronic Library (IEL) |
subjects | Artificial neural networks; Attention; Brain modeling; Channels; Classification; Coders; Convolution; Convulsions & seizures; Cutoff frequency; Data integration; Deep learning; Electroencephalography; Epilepsy; Evaluation; Feature extraction; Machine learning; Mathematical model; multi-channel fusion; raw waveform; seizure classification; Seizures; SincNet |
title | Interpretable Seizure Classification Using Unprocessed EEG With Multi-Channel Attentive Feature Fusion |