Theories of generative grammar are evaluated in terms of their fit to typology: the extent to which they succeed in generating all and only the linguistic systems observed cross-linguistically. Theories of learning in generative grammar are evaluated in terms of their success in finding a correct grammar for any language in the space of systems defined by a given grammatical theory. In this standard approach, learning plays no role in typological modeling itself. This talk presents an alternative approach that uses generative grammars as a component of agent-based models (ABMs), in which learning can shape the distribution over languages that results from agent interaction. By adding learning to typological explanation, grammatical ABMs allow for accounts of typological tendencies, such as the tendency toward uniform syntactic headedness (Greenberg 1963, Dryer 1992). Incorporating learning can also lead to predicted near-zeros in typology. We show this with the case of unrealistically large stress windows, which can be generated by a weighted constraint system, but which have near-zero frequency in the output of our ABM incorporating the same constraints. The too-large-window prediction is one of the few arguments in the extant literature for Optimality Theory’s ranked constraints over weighted ones.
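To give a concrete (and deliberately simplified) picture of the kind of mechanism at issue, the sketch below is a toy iterated-learning ABM, not the model presented in the talk. It assumes a MaxEnt-style grammar with a hypothetical general HEAD-LEFT constraint shared across two categories (VP and PP) alongside category-specific constraints; because learners update the shared weight from both categories, learning couples the two word-order choices, illustrating how a bias toward uniform headedness can emerge from agent interaction.

```python
import math
import random

def p_head_initial(w_general, w_specific):
    """MaxEnt (logistic) probability of the head-initial order
    for one category, given a shared and a specific weight."""
    return 1.0 / (1.0 + math.exp(-(w_general + w_specific)))

def produce(weights, utterances):
    """Sample (VP order, PP order) pairs; 1 = head-initial."""
    w_gen, w_vp, w_pp = weights
    return [(int(random.random() < p_head_initial(w_gen, w_vp)),
             int(random.random() < p_head_initial(w_gen, w_pp)))
            for _ in range(utterances)]

def learn(data, rate=0.1):
    """Perceptron-style updates toward the observed orders. The
    shared general weight is pushed by both categories, which is
    what couples them and favors harmonic (uniform) systems."""
    w_gen, w_vp, w_pp = 0.0, 0.0, 0.0
    for vp_obs, pp_obs in data:
        err_vp = vp_obs - p_head_initial(w_gen, w_vp)
        err_pp = pp_obs - p_head_initial(w_gen, w_pp)
        w_gen += rate * (err_vp + err_pp)  # shared constraint
        w_vp += rate * err_vp
        w_pp += rate * err_pp
    return (w_gen, w_vp, w_pp)

def simulate(generations=30, utterances=100, seed=1):
    """Iterate teacher -> data -> fresh learner, starting from a
    mixed (disharmonic) system."""
    random.seed(seed)
    weights = (0.0, 0.5, -0.5)
    for _ in range(generations):
        weights = learn(produce(weights, utterances))
    return weights
```

Under these assumptions, running `simulate()` tends to move the two categories toward the same headedness over generations, even though the disharmonic starting grammar is perfectly representable; the typological tendency comes from the learning dynamics, not from the grammar space alone.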

Joint work with Jennifer Culbertson, Coral Hughto and Robert Staubs