Gaze Patterns and Audiovisual Speech Enhancement
Purpose: In this study, the authors sought to quantify the relationships between speech intelligibility (perception) and gaze patterns under different auditory-visual conditions. Method: Eleven subjects listened to low-context sentences spoken by a single talker while viewing the face of one or more talkers on a computer display.
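As an illustrative aside (not part of the original record), the abstract quantifies gaze position in degrees of visual angle from the center of the talker's mouth (CTM) and intelligibility as a percentage of correctly perceived words. The minimal Python sketch below shows how such quantities are typically computed; the viewing distance, function names, and scoring rule are assumptions for illustration, not values or procedures taken from the article.

```python
import math

def gaze_eccentricity_deg(offset_cm: float, viewing_distance_cm: float) -> float:
    """Visual angle (degrees) between the point of gaze and the CTM, given their
    on-screen separation and the viewing distance. Standard small-target
    visual-angle formula; the 60 cm distance in the example is an assumption."""
    return math.degrees(2.0 * math.atan(offset_cm / (2.0 * viewing_distance_cm)))

def percent_words_correct(reported: list[str], target: list[str]) -> float:
    """Intelligibility as the percentage of target key words correctly reported,
    a common scoring rule for low-context sentences; the study's exact scoring
    procedure may differ."""
    target_words = [w.lower() for w in target]
    hits = sum(1 for w in set(w.lower() for w in reported) if w in target_words)
    return 100.0 * hits / len(target_words)

# A fixation 2.6 cm from the CTM at an assumed 60 cm viewing distance is roughly
# 2.5 degrees of visual angle -- the boundary discussed in the abstract.
print(round(gaze_eccentricity_deg(2.6, 60.0), 2))   # ~2.48
print(percent_words_correct(["the", "boat", "sailed"], ["boat", "sailed", "away"]))
```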
Saved in:
Published in: | Journal of speech, language, and hearing research, 2013-04, Vol.56 (2), p.471-480 |
---|---|
Main authors: | Yi, Astrid; Wong, Willy; Eizenman, Moshe |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 480 |
---|---|
container_issue | 2 |
container_start_page | 471 |
container_title | Journal of speech, language, and hearing research |
container_volume | 56 |
creator | Yi, Astrid; Wong, Willy; Eizenman, Moshe |
description | Purpose: In this study, the authors sought to quantify the relationships between speech intelligibility (perception) and gaze patterns under different auditory-visual conditions. Method: Eleven subjects listened to low-context sentences spoken by a single talker while viewing the face of one or more talkers on a computer display. Subjects either maintained their gaze at a specific distance (0 degrees, 2.5 degrees, 5 degrees, 10 degrees, and 15 degrees) from the center of the talker's mouth (CTM) or moved their eyes freely on the computer display. Eye movements were monitored with an eye-tracking system, and speech intelligibility was evaluated by the mean percentage of correctly perceived words. Results: With a single talker and a fixed point of gaze, speech intelligibility was similar for all fixations within 10 degrees of the CTM. With visual cues from two talker faces and a speech signal from one of the talkers, speech intelligibility was similar to that of a single talker for fixations within 2.5 degrees of the CTM. With natural viewing of a single talker, gaze strategy changed with the speech signal-to-noise ratio (SNR). For low speech-SNR, a strategy that brought the point of gaze directly to within 2.5 degrees of the CTM was used in approximately 80% of trials, whereas in high speech-SNR it was used in only approximately 50% of trials. Conclusions: With natural viewing of a single talker and high speech-SNR, subjects can shift their gaze between points on the talker's face without compromising speech intelligibility. With low speech-SNR, subjects change their gaze patterns to fixate primarily on points that are in close proximity to the talker's mouth. The latter strategy is essential to optimize speech intelligibility in situations where there are simultaneous visual cues from multiple talkers (i.e., when some of the visual cues are distracters). (Contains 7 figures and 1 footnote.) |
doi_str_mv | 10.1044/1092-4388(2012/10-0288) |
format | Article |
fulltext | fulltext |
identifier | ISSN: 1092-4388; EISSN: 1558-9102; PMID: 23275394 |
ispartof | Journal of speech, language, and hearing research, 2013-04, Vol.56 (2), p.471-480 |
issn | 1092-4388; 1558-9102 |
language | eng |
recordid | cdi_proquest_miscellaneous_1496986390 |
source | MEDLINE; Education Source |
subjects | Acoustic Stimulation - methods; Adult; Adults; Auditory Perception; Cues; Experiments; Eye Movements; Eye Movements - physiology; Female; Fixation, Ocular - physiology; Gaze; Humans; Intelligibility; Lipreading; Listening Comprehension; Male; Mouth; Noise; Photic Stimulation - methods; Sentences; Speech; Speech - physiology; Speech disorders; Speech Intelligibility - physiology; Speech Perception - physiology; Speech, Intelligibility of; Studies; Visual perception; Visual Perception - physiology; Visual Stimuli; Young Adult |
title | Gaze Patterns and Audiovisual Speech Enhancement |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-24T19%3A07%3A17IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_proqu&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Gaze%20Patterns%20and%20Audiovisual%20Speech%20Enhancement&rft.jtitle=Journal%20of%20speech,%20language,%20and%20hearing%20research&rft.au=Yi,%20Astrid&rft.date=2013-04&rft.volume=56&rft.issue=2&rft.spage=471&rft.epage=480&rft.pages=471-480&rft.issn=1092-4388&rft.eissn=1558-9102&rft_id=info:doi/10.1044/1092-4388(2012/10-0288)&rft_dat=%3Cgale_proqu%3EA338892829%3C/gale_proqu%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=1418694569&rft_id=info:pmid/23275394&rft_galeid=A338892829&rft_ericid=EJ1015545&rfr_iscdi=true |