Brown University

Top-Down Effects on Speech Perception: An Integrated Computational and Behavioral Approach

Description

Abstract:
During auditory language comprehension, bottom-up acoustic cues in the sensory signal are critical to the recognition of spoken words. However, listeners are also sensitive to higher-level processing; in general, identification of ambiguous targets is biased by prior expectations (e.g., words over non-words, contextually consistent words over inconsistent words). Although it is clear that such top-down cues influence word recognition, how they do so is less clear. The present work examines several questions about the computational principles underlying top-down effects on speech perception, focusing primarily on the influence of a preceding sentential context (e.g., Valerie hated the... vs. Brett hated to...) on the identification of phonetically ambiguous targets from voice-onset time continua (e.g., between bay and pay). Chapter 1 considers a longstanding debate: do top-down effects result from interactive modulation of perceptual processing or from entirely autonomous, decision-level processing? Some research has suggested that the time course of top-down effects is incompatible with interactive models. However, two experiments illustrate that, with appropriate controls, the predictions of interactive models are supported. Ultimately, though, existing spoken word recognition models (whether interactive or autonomous) share two weaknesses: they ignore the role of sentential context, and they ignore the enormous variability in the size of top-down effects. To address these gaps, Chapter 2 introduces BIASES (short for Bayesian Integration of Acoustic and Sentential Evidence in Speech), a newly developed computational model of speech perception. Chapter 3 demonstrates BIASES’ ability to predict and explain fine-grained variability and asymmetries both in novel experimental data and in previously published work. Finally, Chapter 4 employs BIASES to examine top-down processing in patients with aphasia. Model-based analyses of previously published data and of new data utilizing stimuli from Chapter 1 suggest that patients experience both bottom-up processing deficits and lexical processing deficits. Importantly, those impairments differ as a function of patients’ clinical diagnoses. In sum, this work offers new insights into the computations occurring at the interface between the perceptual processing of speech and the cognitive and linguistic processing of language.
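
As a rough illustration of the kind of computation the model's name implies, the sketch below combines a context-derived prior over two word candidates with Gaussian likelihoods over voice-onset time, applying Bayes' rule to an ambiguous bay/pay token. The function names, category means, standard deviation, and prior value are assumptions made for this sketch only; they are not parameters or code from the BIASES model itself.

    # Illustrative sketch only: hypothetical VOT means/SD and context prior,
    # not parameters of the BIASES model described in the dissertation.
    from math import exp, pi, sqrt

    def gaussian_pdf(x, mean, sd):
        """Normal density at x."""
        return exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * sqrt(2 * pi))

    def posterior_bay(vot_ms, prior_bay, mean_b=5.0, mean_p=45.0, sd=12.0):
        """P('bay' | VOT, context) via Bayes' rule over the two candidates.

        vot_ms    : voice-onset time of the target token (ms)
        prior_bay : P('bay' | sentential context), e.g. elevated when the
                    preceding sentence frame is consistent with 'bay'
        """
        like_b = gaussian_pdf(vot_ms, mean_b, sd)  # acoustic evidence for 'bay'
        like_p = gaussian_pdf(vot_ms, mean_p, sd)  # acoustic evidence for 'pay'
        num = like_b * prior_bay
        return num / (num + like_p * (1.0 - prior_bay))

    # Tokens near the category boundary are pulled toward the contextually
    # supported word; clear tokens are largely unaffected.
    for vot in (10.0, 25.0, 40.0):
        print(vot, round(posterior_bay(vot, prior_bay=0.7), 3))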
Notes:
Thesis (Ph.D.)--Brown University, 2016.

Access Conditions

Rights
In Copyright
Restrictions on Use
Collection is open for research.

Citation

Fox, Neal P., "Top-Down Effects on Speech Perception: An Integrated Computational and Behavioral Approach" (2016). Cognitive Sciences Theses and Dissertations. Brown Digital Repository. Brown University Library. https://doi.org/10.7301/Z0833QDS
