Phonological constraints on the segmentation of continuous speech
In this dissertation, I develop a model of word segmentation in which systematic grammatical knowledge guides division of the speech stream into words. When the speaker's intended syllabification is unambiguously signaled by allophonic variation and phonotactic constraints, this information is used to segment the input. However, in the absence of phonotactic and allophonic cues to word boundaries, listeners still assign structure to the incoming acoustic signal. Language-specific rankings of a small set of universal constraints on syllable well-formedness are used to determine privileged alignment points for lexical search. As soon as a syllable onset is identified, the cohort of words consistent with that syllable onset is activated. This is a more efficient segmentation strategy than initiating lexical access at each phoneme, since a syllabic strategy results in comparatively fewer wasted access attempts.

Supporting evidence for the grammatical model of word segmentation is presented in a series of wordspotting experiments. English listeners are shown to resolve allophonic and phonotactic ambiguity by using stress to determine syllabification. A stressed syllable can attract one or more consonants into its coda if followed by a stressless syllable; otherwise, onsets are maximized. The Metrical Segmentation Strategy (Cutler & Norris, 1988) fails to account for these results, since it ignores the effect of stress on syllabification.

An important difference between the grammatical model and other current models of word segmentation, such as TRACE and Shortlist, is the claim that listeners use the grammar to parse the input into syllables, even in the absence of statistical and acoustic cues. TRACE does not recognize any level of structure between the phoneme and the word. Although Shortlist recognizes explicit cues to word boundaries, such as phonotactics, allophonics, and vowel quality, when such cues are absent, lexical access is attempted at each phoneme.
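The efficiency claim in the first paragraph can be made concrete with a toy sketch. This is not the dissertation's model: the mini-lexicon, the example word, and the hypothesized onset positions are invented for illustration. It simply counts how many search cohorts each strategy opens, and how many come up empty, when lexical access is anchored at syllable onsets versus at every phoneme.

```python
# Toy comparison of two segmentation strategies (illustrative only):
# open a search cohort at each anchor position and collect the lexicon
# words consistent with the input from that point onward.

LEXICON = {"bal", "balcony", "con", "cony", "ny"}  # invented mini-lexicon

def cohorts(signal, anchors, lexicon):
    """For each anchor, the set of words matching the input from there."""
    return {a: {w for w in lexicon if signal.startswith(w, a)}
            for a in anchors}

signal = "balcony"                       # stand-in for a phoneme string

# Strategy 1: initiate lexical access at every phoneme position.
per_phoneme = cohorts(signal, range(len(signal)), LEXICON)

# Strategy 2: initiate access only at hypothesized syllable onsets
# (bal-co-ny, so positions 0, 3, 5 -- an assumed syllabification).
per_syllable = cohorts(signal, [0, 3, 5], LEXICON)

# "Wasted" attempts are anchors whose cohort is empty: the per-phoneme
# strategy opens cohorts at word-internal positions that match nothing,
# while the syllable-anchored strategy opens none of those.
wasted = sum(1 for c in per_phoneme.values() if not c)
print(f"per-phoneme attempts: {len(per_phoneme)}, wasted: {wasted}")
print(f"per-syllable attempts: {len(per_syllable)}")
```

The sketch only counts access attempts; it says nothing about how the grammar derives the onset positions, which is the substance of the constraint-ranking proposal above.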
Cecilia Jennifer Kirk, "Phonological constraints on the segmentation of continuous speech" (January 1, 2001). Electronic Doctoral Dissertations for UMass Amherst.