The structure of memory meets memory for structure in linguistic cognition

Matt Wagers

This dissertation is concerned with the problem of how structured linguistic representations interact with the architecture of human memory. Much recent work has attempted to unify real-time linguistic memory with a general content-addressable architecture (Lewis & Vasishth, 2005; McElree, 2006). Because grammatical principles and constraints are strongly relational in nature, and linguistic representations hierarchical, this kind of architecture is not well suited to restricting the search of memory to grammatically-licensed constituents alone. This dissertation investigates the conditions under which real-time language comprehension is grammatically accurate. Two kinds of grammatical dependencies were examined in reading time and speeded grammaticality experiments: subject-verb agreement licensing in agreement attraction configurations ("The runners who the driver wave to ..."; Kimball & Aissen, 1971; Bock & Miller, 1991), and active completion of wh-dependencies. We develop a simple formal model of agreement attraction in an associative memory that makes accurate predictions across different structures. We conclude that dependencies that can be licensed only retrospectively, by searching memory to generate candidate analyses, are the most prone to grammatical infidelity. The exception may be retrospective searches with especially strong contextual restrictions, as in reflexive anaphora. In contrast, dependencies that can be licensed principally by a prospective search, like wh-dependencies or backwards anaphora, are highly grammatically accurate.