Adam Albright (MIT) will give the department colloquium on Friday, October 25, in Machmer E-37 at 3:30. A title and abstract follow.
Modeling the acquisition of phonological alternations with learning biases
[joint work with Young Ah Do, Georgetown University]
What expectations do learners bring to the task of acquiring alternations? We provide evidence for three biases: (1) a bias against alternations, favoring uniform paradigms (McCarthy 1998); (2) a bias in favor of alternations that target broader classes of segments (Peperkamp et al. 2006); (3) a substantive bias against perceptually salient alternations (Steriade 2001).
Learners’ biases were probed using Artificial Grammar experiments, in which adult English speakers were taught singular~plural pairs in a “Martian language” and were then asked to produce or rate plural forms. In the artificial language, both labial-final and coronal-final obstruent stems exhibited either voicing alternations (dap~dabi) or continuancy alternations (brup~brufi). By manipulating the frequency of labials vs. coronals, we were able to vary the amount of data concerning each segmental alternation. If learners are biased to expect non-alternation, then the rarer segment should elicit fewer alternating responses, and this is indeed what we observe: participants often produce non-alternating responses for the rarer segment, even though non-alternation was unattested in the training data.
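For concreteness, here is a hypothetical sketch of the kind of training-set manipulation described above. Everything beyond the dap~dabi pair mentioned in the abstract is invented for illustration: the extra word forms, the item counts, and the 3:1 labial-to-coronal ratio are not the actual experimental materials.

```python
# Hypothetical sketch of the frequency manipulation: all obstruent-final stems
# alternate in training (non-alternation is unattested), but labial-final stems
# occur more often than coronal-final stems in this invented condition.
import random

# Singular ~ plural training pairs (word forms other than dap~dabi are invented).
labial_pairs = [("dap", "dabi"), ("klep", "klebi"), ("mup", "mubi")]   # voicing
coronal_pairs = [("nit", "nidi")]                                      # voicing

def build_training_list(n_labial=24, n_coronal=8, seed=0):
    """Sample a training list in which labial-final stems are three times as
    frequent as coronal-final stems (the frequency manipulation)."""
    rng = random.Random(seed)
    items = (rng.choices(labial_pairs, k=n_labial)
             + rng.choices(coronal_pairs, k=n_coronal))
    rng.shuffle(items)
    return items

# Prediction under a bias for non-alternation: at test, the rarer (coronal)
# place should elicit more non-alternating plurals (e.g. "niti" rather than
# "nidi"), even though non-alternation never occurred in training.
```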
By manipulating the relative proportion of voicing vs. continuancy alternations across the two places of articulation, we were able to pit frequency of segmental alternations (p~f > p~b) against feature-level frequency (voicing > continuancy). If learners expect alternations to target natural classes, rather than individual segments, then we expect subjects to prefer the alternation that is overall more frequent across both places of articulation. In general, this is indeed what we find: subjects prefer to generalize the featurally more frequent alternation, even if it is less frequent for particular segment pairs. However, the effect of feature-level frequency differs significantly depending on the feature. Comparing across experiments, we find that subjects more readily extend voicing alternations across different places of articulation than continuancy alternations. We attribute this to a learning bias against certain featural alternations. Finally, we show how these relative preferences can be modeled using a regularized maximum entropy model of constraint weighting.
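The abstract closes by mentioning a regularized maximum entropy model of constraint weighting. As a rough sketch of how such a model operates, the example below assumes the standard MaxEnt setup: each candidate output y receives a harmony H(y) = sum_k w_k * f_k(y) over constraint violation counts f_k, candidate probabilities are proportional to exp(-H(y)), and the weights are fit by maximizing the training-data log-likelihood minus a Gaussian (L2) penalty sum_k (w_k - mu_k)^2 / (2 * sigma_k^2), one common way of implementing the regularization. The particular constraints, candidates, counts, and prior values here are hypothetical and are not the authors' actual model or data.

```python
# Minimal sketch of a regularized MaxEnt constraint-weighting model.
# All constraints, candidates, counts, and prior parameters are hypothetical.
import numpy as np
from scipy.optimize import minimize

# Tableau for a labial-final stem like "dap": three plural candidates.
# Violation vector per candidate, over three hypothetical constraints:
#   [*VoicelessStopV (markedness: no intervocalic voiceless stop),
#    Ident(voice)    (faithfulness: no voicing alternation),
#    Ident(cont)     (faithfulness: no continuancy alternation)]
candidates = ["dapi", "dabi", "dafi"]
violations = np.array([
    [1, 0, 0],   # dapi: faithful but marked
    [0, 1, 0],   # dabi: voicing alternation
    [0, 0, 1],   # dafi: continuancy alternation
], dtype=float)

# Hypothetical training counts for each plural form.
counts = np.array([0.0, 8.0, 2.0])

# Gaussian (L2) prior on each weight: mu is the preferred weight, sigma
# controls how strongly the data must pull a weight away from mu.  A learning
# bias can be encoded here, e.g. a high mu on the faithfulness constraints
# implements a prior preference for non-alternation.
mu = np.array([0.0, 2.0, 2.0])
sigma = np.array([10.0, 1.0, 1.0])

def neg_log_posterior(w):
    """Negative regularized log-likelihood of the training counts."""
    harmony = violations @ w                               # H(y) = sum_k w_k * f_k(y)
    logprob = -harmony - np.logaddexp.reduce(-harmony)     # log P(y) = -H(y) - log Z
    log_lik = counts @ logprob
    log_prior = -np.sum((w - mu) ** 2 / (2 * sigma ** 2))
    return -(log_lik + log_prior)

res = minimize(neg_log_posterior, x0=np.zeros(3),
               bounds=[(0, None)] * 3)                     # non-negative weights
w = res.x
probs = np.exp(-(violations @ w) - np.logaddexp.reduce(-(violations @ w)))
for cand, p in zip(candidates, probs):
    print(f"{cand}: {p:.3f}")
```

On a setup like this, a bias against alternation corresponds to a high prior mean (or small variance) on the faithfulness constraints, so that more training data is needed before the learner favors an alternating mapping; feature-specific biases could likewise be encoded as different priors on different faithfulness constraints.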