Towards Ultrasound Tongue Image prediction from EEG during speech production
Previous initial research has already been carried out to propose speech-based BCI using brain signals (e.g. non-invasive EEG and invasive sEEG / ECoG), but there is a lack of combined methods that investigate non-invasive brain, articulation, and speech signals together and analyze the cognitive processes in the brain, the kinematics of the articulatory movement and the resulting speech signal. In this paper, we describe our multimodal (electroencephalography, ultrasound tongue imaging, and speech) analysis and synthesis experiments, as a feasibility study. We extend the analysis of brain signals recorded during speech production with ultrasound-based articulation data. From the brain signal measured with EEG, we predict ultrasound images of the tongue with a fully connected deep neural network. The results show that there is a weak but noticeable relationship between EEG and ultrasound tongue images, i.e. the network can differentiate articulated speech and neutral tongue position.
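The abstract states only that a fully connected deep neural network maps the measured EEG signal to ultrasound tongue images. The following minimal PyTorch sketch illustrates what such a mapping could look like; the EEG window size, image resolution, layer widths, and output activation are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

# Illustrative dimensions -- the record does not specify the EEG feature
# window or the ultrasound image resolution used by the authors.
EEG_DIM = 64 * 25            # assumed: 64 EEG channels x 25 time-lagged samples, flattened
UTI_SHAPE = (64, 128)        # assumed: downsampled ultrasound tongue image (height x width)
UTI_DIM = UTI_SHAPE[0] * UTI_SHAPE[1]

class EEGToUTI(nn.Module):
    """Fully connected regression network: EEG window -> ultrasound tongue image."""
    def __init__(self, hidden: int = 1000):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EEG_DIM, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, UTI_DIM), nn.Sigmoid(),  # pixel intensities scaled to [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).view(-1, *UTI_SHAPE)

# Usage with synthetic data: one EEG window in, one predicted tongue image out.
model = EEGToUTI()
eeg_window = torch.randn(1, EEG_DIM)
predicted_image = model(eeg_window)   # shape: (1, 64, 128)
loss = nn.functional.mse_loss(predicted_image, torch.rand(1, *UTI_SHAPE))
```

Training such a regression model against mean-squared pixel error is one plausible setup consistent with the abstract's description; the actual loss and preprocessing are not specified in this record.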
Published in: | arXiv.org, 2023-10 |
---|---|
Main authors: | Tamás Gábor Csapó; Frigyes Viktor Arthur; Nagy, Péter; Boncz, Ádám |
Format: | Article |
Language: | eng |
Subjects: | Artificial neural networks; Brain; Computer Science - Sound; Electroencephalography; Feasibility studies; Kinematics; Physics - Medical Physics; Speech; Tongue |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Tamás Gábor Csapó; Frigyes Viktor Arthur; Nagy, Péter; Boncz, Ádám |
description | Previous initial research has already been carried out to propose speech-based BCI using brain signals (e.g. non-invasive EEG and invasive sEEG / ECoG), but there is a lack of combined methods that investigate non-invasive brain, articulation, and speech signals together and analyze the cognitive processes in the brain, the kinematics of the articulatory movement and the resulting speech signal. In this paper, we describe our multimodal (electroencephalography, ultrasound tongue imaging, and speech) analysis and synthesis experiments, as a feasibility study. We extend the analysis of brain signals recorded during speech production with ultrasound-based articulation data. From the brain signal measured with EEG, we predict ultrasound images of the tongue with a fully connected deep neural network. The results show that there is a weak but noticeable relationship between EEG and ultrasound tongue images, i.e. the network can differentiate articulated speech and neutral tongue position. |
doi_str_mv | 10.48550/arxiv.2306.05374 |
format | Article |
fullrecord | Publisher: Ithaca: Cornell University Library, arXiv.org. Rights: CC BY 4.0 (http://creativecommons.org/licenses/by/4.0), open access. Part of: arXiv.org, 2023-10. Preprint DOI: 10.48550/arXiv.2306.05374. Published paper: https://doi.org/10.21437/Interspeech.2023-40 (access to full text may be restricted). |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-10 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_2306_05374 |
source | arXiv.org; Free E-Journals |
subjects | Artificial neural networks; Brain; Computer Science - Sound; Electroencephalography; Feasibility studies; Kinematics; Physics - Medical Physics; Speech; Tongue |
title | Towards Ultrasound Tongue Image prediction from EEG during speech production |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-29T06%3A32%3A04IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Towards%20Ultrasound%20Tongue%20Image%20prediction%20from%20EEG%20during%20speech%20production&rft.jtitle=arXiv.org&rft.au=Tam%C3%A1s%20G%C3%A1bor%20Csap%C3%B3&rft.date=2023-10-18&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2306.05374&rft_dat=%3Cproquest_arxiv%3E2824146859%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2824146859&rft_id=info:pmid/&rfr_iscdi=true |