Focused Audification and the optimization of its parameters

We present a sonification method, which we call Focused Audification (FA; previously: Augmented Audification), that expands pure audification in a flexible way. It is based on the combination of single-sideband modulation with a pitch modulation of the original data stream. Through two free parameters, the sonification's frequency range can be adjusted to the human hearing range, and the listener can interactively zoom into the data set at any scale. The parameters were adjusted by laypeople in a multimodal experiment on cardiac data. Following these results, we suggest a procedure for parameter optimization that achieves an optimal listening range for any data set, matched to the range of human speech.
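
The abstract compresses the method into two operations: a pitch modulation (a spectral zoom) and a single-sideband (SSB) shift into the audible range, governed by two free parameters. The Python sketch below illustrates the idea under stated assumptions: the function names, the resampling-plus-Hilbert-transform realization, the 300-3400 Hz speech-like target band, and the 0.5-40 Hz cardiac band are all illustrative choices, not the paper's actual formulation.

    import numpy as np
    from scipy.signal import hilbert, resample

    def fa_parameters(data_band, target_band=(300.0, 3400.0)):
        """Choose the two free parameters so the data band [f_lo, f_hi]
        (Hz) lands on the target band (here a telephone-speech range,
        an assumption). Resampling by `rate` scales the spectrum by
        `rate`; the SSB stage then shifts it by `f_shift`, so the
        mapped band is [rate*f_lo + f_shift, rate*f_hi + f_shift]."""
        f_lo, f_hi = data_band
        t_lo, t_hi = target_band
        rate = (t_hi - t_lo) / (f_hi - f_lo)  # spectral zoom factor
        f_shift = t_lo - rate * f_lo          # carrier shift in Hz
        return rate, f_shift

    def focused_audification(x, fs, rate, f_shift):
        """Pitch-scale the stream by `rate`, then shift it into the
        listening range with single-sideband modulation."""
        x = np.asarray(x, dtype=float)
        # Pitch modulation: resampling to len/rate samples speeds
        # playback up by `rate`, scaling all frequencies by `rate`.
        y = resample(x, max(1, int(round(len(x) / rate))))
        # SSB modulation: the analytic signal has no negative
        # frequencies, so multiplying by a complex carrier shifts
        # the spectrum by exactly f_shift without a mirror image.
        t = np.arange(len(y)) / fs
        return np.real(hilbert(y) * np.exp(2j * np.pi * f_shift * t))

    # Example: a synthetic stream occupying roughly 0.5-40 Hz,
    # standing in for the cardiac data used in the experiment.
    fs = 8000.0
    t = np.arange(int(10 * fs)) / fs
    x = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 25.0 * t)
    rate, f_shift = fa_parameters((0.5, 40.0))
    audio = focused_audification(x, fs, rate, f_shift)

For these example bands, the zoom factor works out to about 78 and the carrier shift to about 261 Hz; the paper's optimization procedure derives such values from listening experiments rather than from a fixed target band.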

Bibliographic Details

Published in: Journal on multimodal user interfaces, 2020-06, Vol. 14 (2), p. 187-198
Main authors: Groß-Vogt, Katharina; Frank, Matthias; Höldrich, Robert
Format: Article
Language: English
Publisher: Cham: Springer International Publishing
DOI: 10.1007/s12193-019-00317-8
ISSN: 1783-7677
EISSN: 1783-8738
ORCID: https://orcid.org/0000-0002-0924-5579
Subjects: Computer Science; Data transmission; Datasets; Frequency ranges; Image Processing and Computer Vision; Modulation; Optimization; Original Paper; Parameters; Signal, Image and Speech Processing; User Interfaces and Human Computer Interaction
Online access: Full text (open access)