Author ORCID Identifier
N/A
Access Type
Open Access Dissertation
Document Type
Dissertation
Degree Name
Doctor of Philosophy (PhD)
Degree Program
Linguistics
Year Degree Awarded
2016
Month Degree Awarded
September
First Advisor
Gaja Jarosz
Second Advisor
Joe Pater
Third Advisor
John McCarthy
Fourth Advisor
Kristine Yu
Subject Categories
Computational Linguistics | Linguistics | Phonetics and Phonology
Abstract
This dissertation explores new perspectives in phonological hidden structure learning (inferring structure that is not present in the speech signal but is necessary for phonological analysis; Tesar 1998, Jarosz 2013a, Boersma and Pater 2016), and extends this type of learning to the domain of phonological features, to derivations in Stratal OT (Bermúdez-Otero 1999), and to exceptionality indices in probabilistic OT. Two more specific themes also emerge: the possibility of inducing, rather than pre-specifying, the space of possible hidden structures, and the importance of cues in the data for triggering the use of hidden structure. In chapters 2 and 4, phonological features and exception groupings are induced by an unsupervised procedure that finds units not explicitly given to the learner. In chapters 2 and 3, the hidden level remains unspecified or underspecified whenever the data do not provide enough cues for that level to be used. When features are hidden structure (chapter 2), they are used only for patterns that generalize across multiple segments. When intermediate derivational levels are hidden structure (chapter 3), the hidden structure necessary for opaque interactions is found more often when the data contain additional cues to the stratal affiliation of the opaque process.
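As a rough illustration of the first theme (inducing rather than pre-specifying hidden units), the following sketch, which is not the dissertation's learner, groups toy segments into candidate classes by their shared behavior in the data; the segments, process labels, and the induce_classes helper are all hypothetical assumptions for illustration only.

    # Hypothetical sketch: inducing feature-like segment classes from shared
    # behavior, rather than from a pre-specified feature set.
    from collections import defaultdict

    # Toy data (illustrative assumption): for each segment, how it behaves
    # with respect to two made-up processes.
    behavior = {
        "p": ("undergoes_voicing", "no_nasalization"),
        "t": ("undergoes_voicing", "no_nasalization"),
        "k": ("undergoes_voicing", "no_nasalization"),
        "b": ("no_voicing", "undergoes_nasalization"),
        "d": ("no_voicing", "undergoes_nasalization"),
        "m": ("no_voicing", "no_nasalization"),
    }

    def induce_classes(behavior):
        """Group segments that pattern identically into candidate classes.

        A multi-member class behaves like a natural class and could be
        referred to by an induced feature; a singleton class corresponds to a
        segment-specific pattern that does not generalize and so stays
        formulated in terms of the segment itself.
        """
        classes = defaultdict(list)
        for segment, profile in behavior.items():
            classes[profile].append(segment)
        return list(classes.values())

    print(induce_classes(behavior))
    # e.g. [['p', 't', 'k'], ['b', 'd'], ['m']]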
Chapter 1 motivates and explains the central questions of this dissertation. Chapter 2 shows that phonological features can be induced from groupings of segments (motivated by the phonetic non-transparency of feature assignment; see, e.g., Anderson 1981), and that patterns that do not generalize across segments are formulated in terms of individual segments in such a model. Chapter 3 implements a version of Stratal OT (Bermúdez-Otero 1999) and confirms Kiparsky's (2000) hypothesis that evidence for an opaque process's stratal affiliation makes an opaque interaction easier to learn, even though opaque interactions remain more difficult to learn than their transparent counterparts. Chapter 4 proposes a probabilistic learner (as opposed to non-probabilistic ones, e.g., Pater 2010) for lexically indexed constraints (Pater 2000) within Expectation Driven Learning (Jarosz submitted), and demonstrates its effectiveness on Dutch stress (van der Hulst 1984, Kager 1989, Nouveau 1994, van Oostendorp 1997).
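To make the derivational idea concrete, here is a minimal sketch, not the dissertation's implementation, of how an intermediate stratal level functions as hidden structure in an opaque interaction; the processes and forms are illustrative assumptions, with plain string rewrites standing in for the OT evaluation at each stratum.

    # Hypothetical sketch of the derivational idea behind Stratal OT: the
    # output of an earlier stratum is the hidden intermediate form that later
    # strata evaluate, which is what makes opaque interactions representable.

    def stem_stratum(form: str) -> str:
        # Toy process 1 (illustrative assumption): raise 'a' to 'e' before 'i'.
        return form.replace("ai", "ei")

    def word_stratum(form: str) -> str:
        # Toy process 2 (illustrative assumption): delete word-final 'i'.
        return form[:-1] if form.endswith("i") else form

    def derive(underlying: str):
        intermediate = stem_stratum(underlying)  # hidden derivational level
        surface = word_stratum(intermediate)
        return underlying, intermediate, surface

    # /pai/ -> [pe]: raising surfaces without its trigger on the surface, an
    # opaque interaction. Only the hidden intermediate form 'pei' shows why
    # raising applied, which is the structure the learner must infer.
    print(derive("pai"))  # ('pai', 'pei', 'pe')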
DOI
https://doi.org/10.7275/9054817.0
Recommended Citation
Nazarov, Aleksei I., "Extending Hidden Structure Learning: Features, Opacity, and Exceptions" (2016). Doctoral Dissertations. 782.
https://doi.org/10.7275/9054817.0
https://scholarworks.umass.edu/dissertations_2/782
Full Code of Feature Learner as Described in Chapter 2