Naomi Feldman (University of Maryland) gives the department colloquium this Friday, February 28, in Machmer E-37 at 3:30. Her talk is entitled “Interactive learning of sounds and words.” An abstract follows:
Infants begin learning words during the same period in which they learn phonetic categories, yet accounts of phonetic category acquisition typically ignore information about the words in which sounds appear. This work uses computational and behavioral methods to test the hypothesis that infants’ developing knowledge of words provides useful information for learning about phonetic categories. A first set of simulations examines the potential benefit of a developing lexicon that contains no semantic information. A Bayesian model is constructed that learns to categorize speech sounds and words simultaneously, and this model outperforms a model that is solely focused on learning phonetic categories. Artificial language learning experiments demonstrate that human learners can use word-level information to constrain phonetic learning, and that they are sensitive to this information at the same age at which they are learning phonetic categories. A second set of simulations examines the potential role of weak semantic knowledge in constraining sound and word learning. The model is given weak semantic information about the situations in which words appear, and this situational semantic information is shown to be particularly beneficial for phonetic learning when the developing lexicon contains many similar-sounding words. Together, these results point to a critical role for the developing lexicon in phonetic category acquisition and highlight the importance of considering how children integrate statistical information across multiple layers of linguistic structure.
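For readers who want a concrete feel for the core idea, here is a small, self-contained Python sketch. It is not the Bayesian model from the talk: the category means, the word list, and the simple two-cluster learner are all invented for illustration, and it assumes tokens are already grouped by word form, which is part of what the full model has to infer. It only shows, under those assumptions, how pooling acoustic tokens over word contexts can separate categories that overlap too much to classify token by token.

```python
# Toy sketch (NOT the model from the talk) of why word-level context can help
# with overlapping phonetic categories. All numbers and words are invented.
import numpy as np

rng = np.random.default_rng(0)

# Two phonetic categories on one acoustic dimension (think VOT in ms),
# with heavily overlapping distributions.
MEANS = {"d": 20.0, "t": 60.0}
SD = 18.0

# Each hypothetical word consistently uses one of the two categories.
WORDS = {"dog": "d", "dish": "d", "toy": "t", "table": "t"}

# Sample tokens: (word, acoustic value, true category).
tokens = [
    (w, rng.normal(MEANS[WORDS[w]], SD), WORDS[w])
    for w in rng.choice(list(WORDS), size=2000)
]

def two_means_boundary(values, iters=50):
    """1-D k-means with k=2; returns the midpoint between the two cluster means."""
    lo, hi = float(np.min(values)), float(np.max(values))
    for _ in range(iters):
        upper = values > (lo + hi) / 2.0
        lo, hi = float(values[~upper].mean()), float(values[upper].mean())
    return (lo + hi) / 2.0

def label(value, boundary):
    # In this toy setup the lower-valued cluster corresponds to "d".
    return "t" if value > boundary else "d"

# Strategy 1: cluster raw tokens and classify each token by its own acoustic value.
token_values = np.array([v for _, v, _ in tokens])
b_acoustic = two_means_boundary(token_values)
acc_acoustic = np.mean([label(v, b_acoustic) == c for _, v, c in tokens])

# Strategy 2: pool tokens by word, cluster the word means, and let every token
# inherit its word's label. Word context averages out the acoustic overlap.
word_means = {w: np.mean([v for ww, v, _ in tokens if ww == w]) for w in WORDS}
b_lexical = two_means_boundary(np.array(list(word_means.values())))
word_label = {w: label(m, b_lexical) for w, m in word_means.items()}
acc_lexical = np.mean([word_label[w] == c for w, _, c in tokens])

print(f"token-by-token clustering accuracy: {acc_acoustic:.2%}")
print(f"word-pooled clustering accuracy:    {acc_lexical:.2%}")
```

With these invented parameters, the word-pooled strategy labels nearly all tokens correctly, while token-by-token clustering misclassifies the tokens that fall in the overlap region; that contrast is the intuition behind the benefit the abstract reports for learning sounds and words together.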