Document Type

Open Access Dissertation

Degree Name

Doctor of Philosophy (PhD)

Degree Program

Linguistics

Year Degree Awarded

Fall 2014

First Advisor

Joe Pater

Second Advisor

John J. McCarthy

Third Advisor

John Kingston

Fourth Advisor

Sridhar Mahadevan

Subject Categories

Computational Linguistics | Phonetics and Phonology

Abstract

This dissertation demonstrates a strong connection between the frequency of stress patterns and their relative learnability under a wide class of learning algorithms. These frequency results follow from hypotheses about the learner's available representations and the distribution of input data. Such hypotheses are combined with a model of learning to derive distinctions between classes of stress patterns, addressing frequency biases not modeled by traditional generative theory.

I present a series of results for error-driven learners of constraint-based grammars. These results are shown both for single learners and learners in an iterated learning model. First, I show that with general n-gram constraints, learners show biases in their learning of stress patterns, mirroring frequency effects in the observed typology. These include biases toward full alternation and fixed stress near word edges. I show that these effects arise from the learner's representation of the consistency and distinctiveness of learning data. I formalize this notion within error-driven, constraint-based learners.
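The error-driven, constraint-based setup described above can be sketched as a toy Harmonic Grammar learner with perceptron-style weight updates over bigram stress constraints. The constraint set, candidate space, and update rule below are illustrative assumptions for exposition, not the dissertation's actual model:

```python
from itertools import product

# Hypothetical bigram (2-gram) constraints over stress strings, where
# '1' = stressed syllable, '0' = unstressed, and '#' pads the word edges.
def violations(form):
    padded = "#" + form + "#"
    bigrams = [padded[i:i + 2] for i in range(len(padded) - 1)]
    return {
        "*Lapse(00)": bigrams.count("00"),  # adjacent unstressed syllables
        "*Clash(11)": bigrams.count("11"),  # adjacent stressed syllables
        "*#0": bigrams.count("#0"),         # unstressed word-initial syllable
        "*1#": bigrams.count("1#"),         # stressed word-final syllable
    }

CONSTRAINTS = list(violations("1"))

def harmony(form, weights):
    # Harmonic Grammar: harmony is the negative weighted sum of violations.
    return -sum(weights[c] * v for c, v in violations(form).items())

def candidates(n):
    # All stress patterns of length n with at least one stressed syllable.
    return ["".join(p) for p in product("01", repeat=n) if "1" in p]

def learn(data, epochs=50, rate=0.1):
    weights = {c: 0.0 for c in CONSTRAINTS}
    for _ in range(epochs):
        for target in data:
            guess = max(candidates(len(target)), key=lambda f: harmony(f, weights))
            if guess != target:  # error-driven: update only on a mismatch
                gv, tv = violations(guess), violations(target)
                for c in CONSTRAINTS:
                    # Perceptron-style update; weights kept non-negative.
                    weights[c] = max(0.0, weights[c] + rate * (gv[c] - tv[c]))
    return weights

# Toy data: initial stress with full alternation (a trochaic pattern).
weights = learn(["10", "101", "1010"])
```

After training on this toy data, the learner selects each training form as its output for the corresponding word length, having raised the weights of the constraints that the targets satisfy more consistently than their competitors.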

I show how specific representational assumptions can lead to distinct predictions about frequency, potentially adjudicating between theories. Languages with primary stress placement independent of word parity are shown to be, with the right constraint set, more consistent and thus more readily learned, offering an explanation for their relative frequency. This explanation is especially valuable because, while parity-dependent languages exist, they are a small minority. I continue by showing how such a model predicts biases in the size of stress windows and discuss the role of this approach in deciding the nature of potentially "accidental" gaps.
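To make the notion of parity-(in)dependence concrete, here is a minimal sketch (my own simplification, not the dissertation's formalism) contrasting left-to-right and right-to-left iterative trochaic footing. Only in the right-to-left case does the position of the leftmost stress depend on whether the word has an even or odd number of syllables:

```python
def stress_pattern(n, direction="LR"):
    # Iterative trochaic feet ('10'); any leftover syllable is unfooted ('0').
    core = "10" * (n // 2)
    if n % 2 == 0:
        return core
    # Odd word: the leftover syllable sits at the edge opposite footing.
    return core + "0" if direction == "LR" else "0" + core

def primary_position(n, direction):
    # Treat the stress nearest the left edge as primary (illustrative).
    return stress_pattern(n, direction).index("1")

# Left-to-right footing: primary stress is always word-initial,
# regardless of word length (parity-independent).
# Right-to-left footing: primary stress alternates between the first
# and second syllable as word length changes (parity-dependent).
```

A learner tracking cues for primary-stress position sees a single consistent generalization in the first case but a length-conditioned split in the second, which is the consistency asymmetry at issue.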

I demonstrate that such a model can incorporate sources of bias outside the learner's representations. I give a model of a perceptual nonfinality effect based on probabilistic misperception. This modification is shown to help account for typological skews in the preferred edge for fixed stress and stress windows, as well as in foot type for iterative stress.
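A probabilistic misperception channel of the general kind described here can be sketched as a simple noisy mapping on surface forms; the probability value and the restriction to final syllables are illustrative assumptions:

```python
import random

def misperceive(form, p=0.15):
    # Hypothetical perceptual filter: a stressed word-final syllable
    # ('...1') is heard as unstressed with probability p; all other
    # syllables are transmitted faithfully. Passing learner input
    # through this channel skews the data toward nonfinality.
    if form.endswith("1") and random.random() < p:
        return form[:-1] + "0"
    return form

# Deterministic endpoints for illustration:
# p=0.0 never alters a form; p=1.0 always de-stresses a final '1'.
```

Because the bias lives in the channel rather than in the constraint set, it affects the input distribution itself, independently of the learner's representations.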

The methods used and conclusions drawn in this dissertation are potentially extendable to a wide range of linguistic phenomena. This foundation offers a way of approaching some otherwise-unexplained frequency biases by grounding them in theories of linguistic representation and learning.
