Humans are unique in their knowledge of natural number, and much recent developmental work has aimed to understand how children develop the representations central to this knowledge. One prominent view holds that this development depends in part on the representational resources underlying natural language quantification (Carey 2010). If so, understanding the acquisition and representation of quantifier meanings may be logically prior to understanding those for natural number. We address two questions. (1) How do children know when to quantify? Observing the distributional properties of quantifier words, we hypothesize that children use syntactic information to determine when a novel word refers to quantities rather than to properties of individuals, and we present an experiment supporting this hypothesis. (2) What are quantifier meanings? Under the Interface Transparency Thesis (ITT; Lidz et al. 2009), linguistic meanings are, cognitively speaking, instructions to build complex concepts; differences in the format of those instructions have dramatic consequences for how we apply such concepts to our representations of the world. We present the results of a study showing that, on trials where a "most" question and a "more" question are true and false in exactly the same circumstances: (i) children perform better on a "most" question when two sets of objects are spatially intermixed and worse when they are spatially separated (with the reverse for "more"), supporting the ITT; and (ii) although these children demonstrably have full counting ability, they rely on Approximate Number System representations to verify the question. While the nature of the developmental shift from approximate to exact number representations in this domain remains a mystery, we propose that by examining issues at the interfaces (syntax-semantics, and semantics-extralinguistic cognition) we can precisify the questions whose answers will lead to its eventual solution.