A model of auditory streaming

Bibliographic details
Published in: The Journal of the Acoustical Society of America, 1997-03, Vol. 101 (3), p. 1611-1621
Authors: McCabe, Susan L.; Denham, Michael J.
Format: Article
Language: English
Online access: Full text
Abstract: An essential feature of intelligent sensory processing is the ability to focus on the part of the signal of interest against a background of distracting signals, and to be able to direct this focus at will. In this paper the problem of auditory streaming is considered and a model of the early stages of the process is proposed. The behavior of the model is shown to be in agreement with a number of well-known psychophysical results, including the relationship between presentation rate, frequency separation and streaming, the temporal development of streaming, and the effect of background organization on streaming. The principal contribution of this model is that it demonstrates how streaming might result from interactions between the tonotopic patterns of activity of incoming signals and traces of previous activity which feed back and influence the way in which subsequent signals are processed. The significance of these results for auditory scene analysis is considered and a framework for the integration of simultaneous and sequential grouping cues in the perception of auditory objects is proposed.
DOI: 10.1121/1.418176
ISSN: 0001-4966
EISSN: 1520-8524
Source: AIP Acoustical Society of America
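
The abstract describes streaming as arising from interactions between the tonotopic activity patterns of incoming signals and decaying traces of previous activity that feed back and influence how subsequent signals are processed. The code below is a minimal toy sketch of that general idea only, not the published model: the channel count, tuning width, trace time constant, suppression gain, and the overlap-based streaming index are all illustrative assumptions introduced here.

```python
import numpy as np

# Toy parameters; all values are illustrative assumptions, not taken from the paper.
N_CHANNELS = 100     # tonotopic resolution
TUNING_BW = 4.0      # Gaussian tuning width, in channels
TRACE_TAU = 200.0    # decay time constant of the activity trace, in ms
SUPPRESSION = 0.9    # strength with which the trace suppresses later input

def excitation(channel):
    """Gaussian pattern of activity a pure tone evokes along the tonotopic axis."""
    x = np.arange(N_CHANNELS)
    return np.exp(-0.5 * ((x - channel) / TUNING_BW) ** 2)

def segregation_index(chan_a, chan_b, ioi_ms, n_pairs=8):
    """Present an alternating A-B-A-B... sequence and return a crude streaming index.

    Each tone's response is its excitation pattern minus a decaying trace of
    previous activity (the feedback idea described in the abstract).  The index
    measures how little tone B still drives tone A's channels once the trace
    has built up: values near 0 mean the tones share channels (one coherent
    stream), values near 1 mean they drive disjoint populations (two streams).
    """
    decay = np.exp(-ioi_ms / TRACE_TAU)      # trace decay over one inter-onset interval
    a_channels = excitation(chan_a) > 0.5    # channels "belonging" to tone A
    trace = np.zeros(N_CHANNELS)
    resp_a = resp_b_in_a = 0.0
    for _ in range(n_pairs):
        for chan in (chan_a, chan_b):
            trace *= decay                                       # trace decays between onsets
            response = np.clip(excitation(chan) - SUPPRESSION * trace, 0.0, None)
            if chan == chan_a:
                resp_a = response[a_channels].sum()
            else:
                resp_b_in_a = response[a_channels].sum()
            trace += response                                    # trace integrates new activity
    return 1.0 - resp_b_in_a / max(resp_a, 1e-9)

# Larger A-B separations and faster presentation rates push the index toward
# segregation, the qualitative trend the abstract refers to.
for sep in (4, 8, 20):            # A-B separation in channels
    for ioi in (75, 300):         # inter-onset interval in ms (fast vs. slow)
        print(f"sep={sep:2d} ch, IOI={ioi:3d} ms -> index={segregation_index(50, 50 + sep, ioi):.2f}")
```

With these toy choices, larger frequency separations and faster presentation rates yield higher index values; the model's actual architecture, equations, and parameter values should be taken from the paper itself (DOI above).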