The vowel tango: Rethinking vowel-inherent spectral change

Bibliographic Details
Published in: The Journal of the Acoustical Society of America, 2011-04, Vol. 129 (4_Supplement), p. 2454
Authors: Rogers, Catherine L.; Glasbrenner, Merete M.; DeMasi, Teresa M.; Bianchi, Michelle
Format: Article
Language: English
Online access: Full text
Description
Abstract: Vowel-inherent spectral change (VISC) refers to vowel-intrinsic formant movement across a vowel's nominal steady state. VISC has been shown to (1) be consistent across talkers within a given dialect, (2) vary regularly across vowels within a dialect, (3) vary regularly across dialects, and (4) be necessary for peak vowel-identification accuracy. Hence, VISC has become accepted as a phonetic feature of the monophthong vowels of North American English. VISC is typically portrayed using averages across tokens and talkers, highlighting regularity but potentially masking individual differences. To understand vowel production by second-language learners, we were particularly interested in such individual variation. In analyzing individual differences for neighboring target vowels, we found no single time point at which all sets of target vowel tokens were well distinguished from one another. However, looking across three time points, every native-speaker vowel set was well distinguished from each possible neighbor set at some time point. Thus, VISC can be seen as the steps in a sort of dance: each vowel moves to avoid overlapping with one neighbor, ultimately overlapping with another and prompting further movement. This perspective is compatible with models of efficient coding and with stochastic and/or exemplar-based models of speech production and perception. [NIH-NIDCD #1R03DC005561-01A1.]
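The pairwise, multi-time-point comparison described in the abstract lends itself to a short illustration. The Python sketch below is not the authors' analysis; the formant values, the separability index, and the threshold are all invented for demonstration. It merely shows the idea of measuring F1/F2 at three time points in each vowel token and then checking, for every pair of neighboring vowel categories, whether at least one time point distinguishes them well.

    # Illustrative sketch only (not the authors' method): synthetic F1/F2
    # data at three measurement points (e.g., 20%, 50%, 80% of vowel
    # duration), with a crude separability index and an arbitrary cut-off.
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)

    TIME_POINTS = ("20%", "50%", "80%")

    # Hypothetical formant trajectories: vowel -> per-time-point mean
    # (F1, F2) in Hz; tokens scatter around these means.
    VOWEL_MEANS = {
        "i": [(300, 2300), (290, 2350), (300, 2300)],
        "I": [(400, 2000), (430, 1900), (460, 1800)],
        "e": [(450, 2100), (420, 2200), (400, 2250)],
        "E": [(550, 1850), (580, 1800), (600, 1750)],
    }

    def sample_tokens(means, n=30, sd=40.0):
        """Draw n tokens per vowel: arrays of shape (n, time_points, 2)."""
        return {v: rng.normal(np.asarray(m, float), sd, size=(n, len(m), 2))
                for v, m in means.items()}

    def separability(a, b):
        """Distance between category centroids at one time point,
        divided by the pooled within-category spread."""
        d = np.linalg.norm(a.mean(0) - b.mean(0))
        spread = 0.5 * (a.std(0).mean() + b.std(0).mean())
        return d / spread

    tokens = sample_tokens(VOWEL_MEANS)
    THRESHOLD = 2.0  # hypothetical cut-off for "well distinguished"

    # For each pair of vowel categories, find the time point at which
    # they are best separated; no single time point need work for all.
    for v1, v2 in itertools.combinations(VOWEL_MEANS, 2):
        scores = [separability(tokens[v1][:, t], tokens[v2][:, t])
                  for t in range(len(TIME_POINTS))]
        best = int(np.argmax(scores))
        label = "separated" if scores[best] >= THRESHOLD else "overlapping"
        print(f"{v1} vs {v2}: best at {TIME_POINTS[best]} "
              f"(index {scores[best]:.2f}, {label})")

With data shaped like this, the "dance" reading of the abstract corresponds to pairs whose best-separating time point differs from pair to pair, even though every pair is separated somewhere along the trajectory.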
ISSN: 0001-4966, 1520-8524
DOI: 10.1121/1.3588060