Testing force variability in modals: A word learning experiment
How do children figure out the meaning of modals like “may” and “must”? These words are hard to learn for three reasons. First, the same modal might be used to talk about different ‘flavors’ of possibility (epistemic, teleological, deontic, etc.), as in English. Second, the same modal might be used with either weak or strong force, to make a claim of either possibility or necessity, respectively, as in Nez Perce (Deal, 2011) or St’at’imcets (Rullmann et al., 2008). Third, claims about what is merely possible often give rise to scalar implicatures: they can be used to conversationally implicate the stronger claim that the proposition is furthermore not necessary, exploiting the Horn scale.
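As a simplified illustration (not part of the abstract's experimental materials), the force distinction and the resulting implicature can be sketched in standard modal-logic notation:

```latex
% Weak vs. strong force:
%   possibility ("may"):  \Diamond p
%   necessity  ("must"):  \Box p
% Necessity entails possibility:
\Box p \rightarrow \Diamond p
% Scalar implicature: asserting the weaker scale member
% implicates the negation of the stronger one:
\Diamond p \rightsquigarrow \neg \Box p
```

On this sketch, a variable-force modal is one whose contribution can be resolved to either ◇ or □ in context, which is what makes the learning problem nontrivial.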
In preparation for studies with children, we implemented a novel word learning paradigm with adults to assess baseline expectations about which sorts of modals, weak or strong, will be used in various situations. We tested whether adults would be more willing to accept a modal learned as a possibility modal in a necessity context (i.e., extending from P to N) than a necessity modal in a possibility context (extending from N to P), comparing two flavors of modality (epistemic and teleological). The experiment was implemented online using IBEX Farm; 48 participants per condition were tested (186 total), recruited via Mechanical Turk. Our results show that, indeed, extending from P to N is easier than from N to P, for both the epistemic and the teleological flavor.
We further investigated the effect of having learned a scale-mate on the likelihood of extending these meanings, comparing the results with cases where participants had already learned a word for a possibility (respectively, necessity) modal earlier in the experiment. In line with previous results on contrastive effects in the computation of scalar implicatures, our results show that having learned a scale-mate diminishes the likelihood of extending the meaning.
Distinguishing first- from second-order specifications of each, every, and all
The quantifiers each, every, and all are expressible using the tools of first-order logic, in which relations are defined over members of the domain, and second-order logic, in which relations are defined over sets of them. So, how are they in fact represented in speakers’ minds? Default strategies for verifying sentences like “every big dot is blue” provide one way to explore this question (Lidz et al., 2011; Pietroski et al., 2011). Holding all else equal, we take preferences for individual-based or set-based verification strategies to reflect underlying first- and second-order representational formats, respectively.
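As a simplified illustration (the predicate names are ours, not from the studies cited), the two logical renderings of “every big dot is blue” differ in whether they quantify over individuals or relate sets:

```latex
% First-order: quantification over individual members of the domain
\forall x\,[\mathrm{BigDot}(x) \rightarrow \mathrm{Blue}(x)]
% Second-order: a relation between sets
\{x : \mathrm{BigDot}(x)\} \subseteq \{x : \mathrm{Blue}(x)\}
```

The two formulas are truth-conditionally equivalent, which is why behavioral signatures of verification, rather than judgments of truth, are needed to distinguish the underlying representational formats.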
We presented participants with quantificational statements and pictures of dots. After responding true or false, they were asked to guess the cardinality of subsets (e.g., “how many big dots were there?”). In general, attending to sets and forming representations of them (as opposed to representing individuals as such) leads to more accurate estimates of summary statistics, like cardinality (Halberda et al., 2006). So using a set-based strategy should yield better performance on relevant “how many” questions than an individual-based strategy.
We find that after evaluating most-statements like “most of the big dots are blue”, participants are accurate and precise at estimating the cardinality of the set denoted by the internal argument (big dots), but not at guessing the cardinalities of unmentioned sets (e.g., small dots). The same participants fail to show this pattern after evaluating existential statements; instead, their cardinality estimates for all sets resemble guessing performance (established in an independent experiment). As with most-statements, we find that participants consistently show the signature of a set-based strategy when evaluating every- and all-statements. In contrast, each-statements, despite being truth-conditionally equivalent, largely pattern like the existential statements, suggesting an underlying first-order representation.