<mods:mods xmlns:mods="http://www.loc.gov/mods/v3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ID="etd1645" xsi:schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-2.xsd">
    <mods:titleInfo>
        <mods:title>Top-Down Effects on Speech Perception: An Integrated Computational and Behavioral Approach</mods:title>
    </mods:titleInfo>
    <mods:name type="personal">
        <mods:namePart>Fox, Neal P</mods:namePart>
    <mods:role>
        <mods:roleTerm type="text">creator</mods:roleTerm>
    </mods:role>
    </mods:name>
<mods:originInfo>
    <mods:copyrightDate>2016</mods:copyrightDate>
</mods:originInfo>
<mods:physicalDescription>
        <mods:extent>xv, 266 p.</mods:extent>
        <mods:digitalOrigin>born digital</mods:digitalOrigin>
</mods:physicalDescription>
<mods:note>Thesis (Ph.D.)--Brown University, 2016.</mods:note>
<mods:name type="personal">
<mods:namePart>Blumstein, Sheila</mods:namePart>
<mods:role>
<mods:roleTerm type="text">Director</mods:roleTerm>
</mods:role>
</mods:name>

<mods:name type="personal">
<mods:namePart>Frank, Michael</mods:namePart>
<mods:role>
<mods:roleTerm type="text">Reader</mods:roleTerm>
</mods:role>
</mods:name>

<mods:name type="personal">
<mods:namePart>Morgan, James</mods:namePart>
<mods:role>
<mods:roleTerm type="text">Reader</mods:roleTerm>
</mods:role>
</mods:name>
<mods:name type="corporate">
        <mods:namePart>Brown University. Cognitive and Linguistic Sciences: Cognitive Sciences</mods:namePart>
        <mods:role>
            <mods:roleTerm type="text">sponsor</mods:roleTerm>
        </mods:role>
        </mods:name>
    <mods:genre authority="aat">theses</mods:genre>
    <mods:abstract>During auditory language comprehension, bottom-up acoustic cues in the sensory signal are critical to the recognition of spoken words. However, listeners are also sensitive to higher-level processing; in general, identification of ambiguous targets is biased by prior expectations (e.g., words over non-words, contextually consistent words over inconsistent words). Although it is clear that such top-down cues influence word recognition, how they do so is less clear. The present work examines several questions about the computational principles underlying top-down effects on speech perception, focusing primarily on the influence of a preceding sentential context (e.g., Valerie hated the... vs. Brett hated to...) on the identification of phonetically ambiguous targets from voice-onset time continua (e.g., between bay and pay).

Chapter 1 considers a longstanding debate: do top-down effects result from interactive modulation of perceptual processing or from entirely autonomous, decision-level processing? Some research has suggested that the time course of top-down effects is incompatible with interactive models. However, two experiments illustrate that, with appropriate controls, the predictions of interactive models are supported.

Ultimately, though, existing models of spoken word recognition (whether interactive or autonomous) share two weaknesses: they ignore the role of sentential context, and they ignore the enormous variability in the size of top-down effects. To address these gaps, Chapter 2 introduces BIASES (short for Bayesian Integration of Acoustic and Sentential Evidence in Speech), a newly developed computational model of speech perception. Chapter 3 demonstrates BIASES’ ability to predict and explain fine-grained variability and asymmetries both in novel experimental data and in previously published work.

Finally, Chapter 4 employs BIASES to examine top-down processing in patients with aphasia. Model-based analyses of previously published data and of new data using stimuli from Chapter 1 suggest that patients experience both bottom-up processing deficits and lexical processing deficits. Importantly, those impairments differ as a function of patients’ clinical diagnoses.

In sum, this work offers new insights into the computations occurring at the interface between the perceptual processing of speech and the cognitive and linguistic processing of language.</mods:abstract>

    <mods:subject>
        <mods:topic>computational modeling</mods:topic>
    </mods:subject>

    <mods:subject>
        <mods:topic>spoken word recognition</mods:topic>
    </mods:subject>

    <mods:subject>
        <mods:topic>top-down processing</mods:topic>
    </mods:subject>

    <mods:subject>
        <mods:topic>sentential context</mods:topic>
    </mods:subject>

    <mods:subject>
        <mods:topic>voice-onset time (VOT)</mods:topic>
    </mods:subject>

    <mods:subject>
        <mods:topic>Bayesian modeling</mods:topic>
    </mods:subject>

    <mods:subject xmlns:xlink="http://www.w3.org/1999/xlink" authority="FAST" authorityURI="http://id.worldcat.org/fast" valueURI="http://id.worldcat.org/fast/1129230">
        <mods:topic>Speech perception</mods:topic>
    </mods:subject>

    <mods:subject xmlns:xlink="http://www.w3.org/1999/xlink" authority="FAST" authorityURI="http://id.worldcat.org/fast" valueURI="http://id.worldcat.org/fast/811278">
        <mods:topic>Aphasia</mods:topic>
    </mods:subject>

    <mods:recordInfo>
        <mods:recordContentSource authority="marcorg">RPB</mods:recordContentSource>
        <mods:recordCreationDate encoding="iso8601">20160629</mods:recordCreationDate>        
    </mods:recordInfo>
<mods:language xmlns:xlink="http://www.w3.org/1999/xlink">
    <mods:languageTerm type="code" authority="iso639-2b">eng</mods:languageTerm>
    <mods:languageTerm type="text">English</mods:languageTerm>
</mods:language>
<mods:identifier xmlns:xlink="http://www.w3.org/1999/xlink" type="doi">10.7301/Z0833QDS</mods:identifier>
<mods:accessCondition xmlns:xlink="http://www.w3.org/1999/xlink" type="rights statement" xlink:href="http://rightsstatements.org/vocab/InC/1.0/">In Copyright</mods:accessCondition>
<mods:accessCondition type="restriction on access">Collection is open for research.</mods:accessCondition>
<mods:typeOfResource xmlns:xlink="http://www.w3.org/1999/xlink" authority="primo">dissertations</mods:typeOfResource>
</mods:mods>