Joe Pater will present joint work with Jennifer Culbertson, Coral Hughto and Robert Staubs at a joint meeting of the syntax and phonology workshops this Friday at 2:30 in ILC N458. The title and abstract of his talk follow.
Title: Typological Prediction with Learning: Grammatical Agent-Based Modeling of Typology
Abstract: What effect does learning have on the typological predictions of a theory of grammar? One way to answer this question is to examine the output of agent-based models (ABMs), in which learning can shape the distribution over languages that results from agent interaction. Prior research on ABMs and language has tended to assume relatively simple agent-internal representations of language, with the goal of showing how linguistic structure can emerge without being postulated a priori (e.g. Kirby and Hurford 2002, Wedel 2007). In this paper we show that when agents operate with more articulated grammatical representations, typological skews that are not directly encoded in the grammatical system itself emerge in the output of the models. This of course has deep consequences for grammatical theory construction, which often makes fairly direct inferences from typology to properties of the grammatical system. We argue that abstracting away from learning may lead to missed opportunities in typological explanation, as well as to faulty inferences about the nature of grammar. By adding learning to typological explanation, grammatical ABMs allow for accounts of typological tendencies, such as the tendency toward uniform syntactic headedness (Greenberg 1963, Dryer 1992). In addition, incorporating learning can lead to predicted near-zeros in typology. We show this with the case of unrealistically large stress windows, which can be generated by a weighted constraint system, but which have near-zero frequency in the output of our ABM incorporating the same constraints. The too-large-window prediction is one of the few arguments in the extant literature for Optimality Theory's ranked constraints over weighted ones.
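For readers unfamiliar with grammatical ABMs, here is a minimal sketch of the kind of model the abstract describes, not the authors' actual implementation. It assumes a MaxEnt (weighted-constraint) grammar, a perceptron-style weight update, and a simple produce-and-learn interaction loop; the constraint set (a general HEAD-FINAL constraint plus phrase-specific HEAD-INITIAL constraints for VP and PP), the learning rate, and the population parameters are all illustrative assumptions.

```python
import math
import random
from collections import Counter

# Hypothetical constraint set (illustrative only): one general HEAD-FINAL
# constraint shared by both phrase types, plus a phrase-specific HEAD-INITIAL
# constraint for each. Each tuple gives a candidate's violation counts on
# (HEAD-FINAL, HEAD-INITIAL-VP, HEAD-INITIAL-PP).
VIOLATIONS = {
    ("VP", "initial"): (1, 0, 0),
    ("VP", "final"):   (0, 1, 0),
    ("PP", "initial"): (1, 0, 0),
    ("PP", "final"):   (0, 0, 1),
}
PHRASES = ("VP", "PP")
ORDERS = ("initial", "final")

def harmony(weights, phrase, order):
    """Harmony = negated weighted sum of constraint violations."""
    return -sum(w * v for w, v in zip(weights, VIOLATIONS[(phrase, order)]))

def produce(weights, phrase):
    """Sample an order from the MaxEnt distribution over the two candidates."""
    exps = [math.exp(harmony(weights, phrase, o)) for o in ORDERS]
    r = random.random() * sum(exps)
    return ORDERS[0] if r < exps[0] else ORDERS[1]

def learn(weights, phrase, observed, rate=0.1):
    """Perceptron-style HG update: if the learner's own output mismatches the
    observed form, shift weight from constraints the observed form violates
    to constraints the erroneous prediction violates (clipped at zero)."""
    predicted = produce(weights, phrase)
    if predicted != observed:
        obs_v = VIOLATIONS[(phrase, observed)]
        pred_v = VIOLATIONS[(phrase, predicted)]
        weights = [max(0.0, w + rate * (p - o))
                   for w, o, p in zip(weights, obs_v, pred_v)]
    return weights

def simulate(n_agents=10, generations=300, utterances=50):
    """Random speaker-listener pairs interact; listeners update their weights."""
    agents = [[random.uniform(0, 2) for _ in range(3)] for _ in range(n_agents)]
    for _ in range(generations):
        for _ in range(utterances):
            speaker, listener = random.sample(range(n_agents), 2)
            phrase = random.choice(PHRASES)
            datum = produce(agents[speaker], phrase)
            agents[listener] = learn(agents[listener], phrase, datum)
    # Classify each agent's "language" by its most probable order per phrase.
    def modal(w, phrase):
        return max(ORDERS, key=lambda o: harmony(w, phrase, o))
    return Counter((modal(w, "VP"), modal(w, "PP")) for w in agents)

random.seed(1)
# In many runs, harmonic (uniformly head-initial or head-final) states dominate.
print(simulate())
```

The point of the sketch is the one the abstract makes: nothing in the weighted grammar rules out a "mixed" headedness language, yet under interaction and learning the shared HEAD-FINAL constraint tends to pull agents toward uniformly head-initial or head-final states, so a typological skew emerges that is not directly encoded in the grammatical system itself.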