
Author ORCID Identifier

N/A

Access Type

Open Access Dissertation

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Degree Program

Linguistics

Year Degree Awarded

2016

Month Degree Awarded

February

First Advisor

Lyn Frazier

Second Advisor

Joe Pater

Third Advisor

John Kingston

Fourth Advisor

Lisa Sanders

Subject Categories

Phonetics and Phonology | Psycholinguistics and Neurolinguistics

Abstract

This dissertation investigates the cognitive mechanism underlying language users' ability to generalize probabilistic phonological patterns in their lexicon to novel words. Specifically, do speakers represent probabilistic patterns using abstract grammatical constraints? If so, this system of constraints would, like categorical phonological generalizations, (a) be limited in the space of possible generalizations it can represent, and (b) apply to known and novel words alike without reference to specific known words. I examine these two predictions and compare them to the predictions of alternative models, with particular attention to analogical models.

In chapter 3 I examine speakers' productions of novel words without near lexical neighbors. These productions are compared to actual (relatively distant) words that could serve as an analogical base. Participants successfully extended a probabilistic trend in the lexicon to novel words, and they did not use the analogical bases to do so: the contents of an analogical base for a given nonword did not predict participants' behavior on that nonword.

In chapter 4 I discuss a case of mismatch with the lexicon: participants extend a near-categorical trend in the lexicon to novel words, but they undermatch the distribution found in the lexicon. This undermatching would not be predicted if learners could induce arbitrarily complex constraints. I argue instead that the trend is represented grammatically, and that the mismatch arises from a bias for simpler constraints, either in learning or in the structure of the grammar itself.

If probabilistic phonological generalizations are represented abstractly, how do they interact with the lexicon of stored word forms? I address this issue in chapter 2 by looking at the perception of known and novel forms. ERP data demonstrate that a productive probabilistic trend influences the early stages of the lexical access process, specifically in known words. I consider two possible mechanisms for this: (1) the lexical entries of known exceptional forms differ from those of known trend-observing forms, or (2) accessing an exceptional form involves a violation of expectations imposed by the grammar, and thus requires more processing resources than accessing a trend-observing form.

DOI

https://doi.org/10.7275/7949476.0
