Events

Learning the meanings of words with visible referents is hard (Quine 1960). The task gets much harder when the word's referent is not even in principle visible (Gleitman 1990, Gillette et al. 1999). Predicates referring to mental states (e.g. think), desires (e.g. want), and speech acts (e.g. say)—known as attitude verbs—fall squarely into the second category. How do learners converge on the correct meanings for these words?

One proposal is that learners can use evidence from co-occurring linguistic cues to zero in on a word's meaning. For example, verbs like want tend to show up in contexts like (1) and (2), but not (3), while verbs like think tend to show up in contexts like (3), but not (1) or (2). This contrasts with a word like say, which can show up in contexts like (2) and (3), but not (1).

  1. Paul ____ed the beer.
  2. Paul ____ed to drink the beer.
  3. Paul ____ed that Sam drank the beer.
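The contrast above can be made concrete with a toy illustration (not from the talk): encode each verb's syntactic distribution as a binary vector over the three frames, and verbs with similar distributions fall out as nearest neighbors. The vectors below simply transcribe the acceptability pattern described in the text.

```python
# Toy frame vectors over the three contexts above:
# [NP object (1), infinitival complement (2), finite clause (3)].
# 1 = the verb appears in that frame, 0 = it does not.
FRAME_VECTORS = {
    "want":  (1, 1, 0),   # "Paul wanted the beer / to drink the beer"
    "think": (0, 0, 1),   # "Paul thought that Sam drank the beer"
    "say":   (0, 1, 1),   # "Paul said to drink / that Sam drank the beer"
}

def hamming(u, v):
    """Number of frames on which two distributions differ."""
    return sum(a != b for a, b in zip(u, v))

def most_similar(verb):
    """Return the verb whose frame vector is closest to `verb`'s."""
    target = FRAME_VECTORS[verb]
    others = {w: v for w, v in FRAME_VECTORS.items() if w != verb}
    return min(others, key=lambda w: hamming(target, others[w]))
```

On this (deliberately tiny) encoding, think and say differ in only one frame, while want and think differ in all three, so frame overlap alone already separates the verb classes.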

Many of these accounts rely on the fact that attitude verbs tend to have rich (highly varied) syntactic distributions. But while there is evidence that, at a very high level, syntactic distribution carries information about meaning (Fisher et al. 1991, Lederer et al. 1993), it remains an open question how fine-grained this information really is. Put another way: given a word's syntactic distribution, how much exactly do we know about its meaning? In this talk, I present quantitative evidence that syntactic distribution provides a high-resolution picture of attitude verb semantics. I further investigate the components of this picture by presenting statistical models aimed at predicting semantic category from syntactic distribution.
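The talk's actual models are not specified here; as one hypothetical sketch of the general idea, a verb's semantic category can be predicted from its distribution over syntactic frames with something as simple as a nearest-centroid classifier. The verbs, class labels, and frame frequencies below are invented toy data for illustration only.

```python
# Hypothetical sketch: nearest-centroid prediction of semantic class
# from a verb's relative frequencies over three syntactic frames:
# [NP object, infinitival complement, finite clause].
from collections import defaultdict

# Invented toy training data: (semantic class, frame frequencies).
TRAIN = {
    "want":    ("desire", (0.5, 0.5, 0.0)),
    "need":    ("desire", (0.6, 0.4, 0.0)),
    "think":   ("belief", (0.0, 0.0, 1.0)),
    "believe": ("belief", (0.1, 0.0, 0.9)),
    "say":     ("speech", (0.0, 0.4, 0.6)),
    "tell":    ("speech", (0.1, 0.3, 0.6)),
}

def centroids(train):
    """Mean frame-frequency vector for each semantic class."""
    sums = defaultdict(lambda: [0.0, 0.0, 0.0])
    counts = defaultdict(int)
    for label, vec in train.values():
        counts[label] += 1
        for i, x in enumerate(vec):
            sums[label][i] += x
    return {lab: tuple(s / counts[lab] for s in sums[lab]) for lab in sums}

def predict(vec, cents):
    """Assign `vec` to the class with the nearest centroid."""
    return min(cents,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(vec, cents[lab])))
```

An unseen verb whose distribution is dominated by finite clauses, e.g. `predict((0.0, 0.1, 0.9), centroids(TRAIN))`, lands in the belief class, illustrating how distributional similarity alone can recover semantic category on this toy data.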