Events

For some initial set of words in a child’s lexicon, acquiring word meanings must be accomplished by observing the co-occurring referent world (i.e., mapping ‘dog’ onto the visually present referent of a dog and inferring the meaning DOG). However, all naturally occurring referential scenes are rich, complex, and thus highly ambiguous: there are usually many possible referents present, and always many possible meanings to infer from any referent. One currently popular explanation for how learning is accomplished under these conditions is that the learner tracks in parallel all co-occurring word-meaning pairings afforded by each of multiple word exposures and eventually (somehow) converges on the correct meaning via an associative learning mechanism that gradually strengthens and weakens those pairings.

In this talk, I question the feasibility of this account and offer experimental evidence that learners do something more sophisticated and yet far simpler: on a given learning instance, they pick just a single plausible meaning from the context and commit it to memory; when they hear the word again, they check whether that meaning continues to fit the usage; if so, confidence in that meaning is strengthened; if not, a new meaning may be selected. This ‘propose-but-verify’ learning procedure, developed over the past several years in collaboration with Lila Gleitman, is motivated by the fact that the range of possible meanings for a word is essentially an open set: each person eventually learns 50,000 to 100,000 words, which typically map onto complex combinations of simpler concepts, some of which are culturally specific. It is therefore unlikely that a gradually adjusting network of word-to-meaning associations would ever converge on a stable, accurate lexicon.

Finally, our experimental evidence has implications for accounts of learning more generally: the findings show that so-called ‘gradual’ learning curves are actually the result of averaging across individual trials in which different subjects, on different occasions, abruptly learned the correct mapping, an observation that appears to hold across learning studies broadly. It is therefore incorrect for learning theorists to build a gradual error-correction term into their models; in doing so, they erroneously assume that the individual behaves like the group average.
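To make the contrast concrete, the following is a minimal simulation sketch (in Python) of a propose-but-verify learner of the kind described above. It is an illustration under simplifying assumptions, not the experimental procedure or a published model: the learner keeps exactly one hypothesized meaning for a word, retains it whenever a later exposure is consistent with it, and guesses afresh from the current scene whenever it is not (the confidence-strengthening component is omitted). The names (propose_but_verify, simulate, "TARGET", the distractor labels) and parameter values are invented for the example. Averaging many such all-or-none learners also illustrates the final point: each individual learns the mapping abruptly, yet the group curve looks gradual.

import random

def propose_but_verify(exposures, rng):
    """Learn one word from a sequence of exposures.

    Each exposure is a set of candidate meanings present in the scene.
    The learner holds a single hypothesized meaning: it is retained as
    long as later scenes remain consistent with it, and replaced by a
    fresh guess from the current scene whenever verification fails.
    Returns, per exposure, whether the current hypothesis is the target.
    """
    hypothesis = None
    correct_at_exposure = []
    for scene in exposures:
        if hypothesis is None or hypothesis not in scene:
            # Propose: pick just one plausible meaning from this scene.
            hypothesis = rng.choice(sorted(scene))
        # Otherwise the hypothesis is verified by this scene and retained.
        correct_at_exposure.append(hypothesis == "TARGET")
    return correct_at_exposure

def simulate(n_learners=200, n_exposures=6, n_distractors=4, seed=0):
    """Average many all-or-none learners into one group learning curve."""
    rng = random.Random(seed)
    totals = [0] * n_exposures
    for _ in range(n_learners):
        # Every scene contains the target meaning plus distractors that
        # do not recur across scenes (a deliberate simplification).
        exposures = [
            {"TARGET"} | {f"distractor_{i}_{j}" for j in range(n_distractors)}
            for i in range(n_exposures)
        ]
        flags = propose_but_verify(exposures, rng)
        for i, ok in enumerate(flags):
            totals[i] += ok
    return [t / n_learners for t in totals]

if __name__ == "__main__":
    for i, p in enumerate(simulate(), start=1):
        print(f"exposure {i}: proportion correct = {p:.2f}")

In this sketch, each individual learner jumps abruptly from chance to perfect performance on the exposure at which the correct meaning happens to be proposed and then verified, yet the averaged curve printed by simulate() rises smoothly toward 1.0, which is one way to see why fitting a gradual error-correction term to group data can misdescribe the individual learner.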