An explicit time-frequency characterization of synchrony in an auditory model


Bibliographic details
Published in: Computer Speech & Language, 1992-04, Vol. 6 (2), pp. 153-173
Author: Cooke, M.P.
Format: Article
Language: English
Online access: Full text
Description
Abstract: Computational models of the auditory periphery exist, but progress beyond the periphery requires appropriate representations of auditory data. This paper describes a novel time-frequency representation of speech derived from a characterization of synchronous activity in an auditory model. Both spatial and temporal groupings of activity are made explicit. The representation, which we call synchrony strands, is the result of tackling an auditory temporal correspondence problem. The adequacy of the representation has been informally tested by resynthesis from a wide variety of speech and non-speech material. The application of synchrony strands in tasks such as computational auditory scene analysis is discussed.
ISSN: 0885-2308, 1095-8363
DOI: 10.1016/0885-2308(92)90024-X
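The abstract frames strand formation as a temporal correspondence problem: activity in nearby frequency channels at successive times must be linked into continuous tracks. The toy sketch below illustrates only that linking idea, not Cooke's actual algorithm; the function name `link_peaks`, the input format, and the `MAX_JUMP_HZ` continuity threshold are all assumptions introduced for illustration.

```python
# Illustrative sketch only: link per-frame spectral peaks into "strands"
# by frequency proximity across frames. This is NOT the synchrony-strand
# algorithm from the paper; names and thresholds are hypothetical.

MAX_JUMP_HZ = 50.0  # assumed maximum frame-to-frame frequency jump


def link_peaks(frames):
    """frames: list of per-frame peak-frequency lists (Hz).
    Returns strands, each a list of (frame_index, frequency) pairs."""
    strands = []   # all strands, finished or still growing
    active = []    # strands extended in the previous frame
    for t, peaks in enumerate(frames):
        next_active = []
        unmatched = list(peaks)
        for strand in active:
            last_f = strand[-1][1]
            # greedy nearest-frequency match within the jump threshold
            best = min(unmatched, key=lambda f: abs(f - last_f), default=None)
            if best is not None and abs(best - last_f) <= MAX_JUMP_HZ:
                strand.append((t, best))
                unmatched.remove(best)
                next_active.append(strand)
        for f in unmatched:  # each leftover peak starts a new strand
            s = [(t, f)]
            strands.append(s)
            next_active.append(s)
        active = next_active
    return strands


if __name__ == "__main__":
    # Two steady components; the higher one drops out after frame 1.
    frames = [[200.0, 1000.0], [210.0, 1005.0], [220.0]]
    for s in link_peaks(frames):
        print(s)
```

With the sample input, the 200 Hz peak is tracked through all three frames while the 1 kHz component ends as a shorter strand, mirroring how a correspondence solution yields explicit temporal groupings.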