To learn the words and the grammar of their native language, children must analyze sentences into their constituent parts and work out how the composition and arrangement of these constituents convey 'who does what to whom'. Traditional accounts assume that children solve this problem largely because they can infer the meanings of input sentences. In contrast, the syntactic-bootstrapping account proposes that children use partial knowledge of syntax to guide early sentence interpretation. My colleagues and I have explored the origins of syntactic bootstrapping, asking how infants could find aspects of syntactic structure meaningful even before learning much about their native language's syntax. On our account, children begin with an unlearned bias toward a one-to-one mapping between nouns in sentences and participant-roles in events. In this talk, I will briefly summarize our account and then discuss the stern challenges posed by the complexity and ambiguity of ordinary sentences. For example, in many languages, such as Korean and Japanese, verbs' arguments are often omitted, raising the question of how children can learn each verb's argument structure from such data. I will present new evidence that sheds light on how learners solve this problem. Specifically, I will argue that children integrate probabilistic distributional learning with an expectation of discourse continuity to estimate the intended structure of input sentences, thereby working out their constituent parts.