Spoken language is an inherently ambiguous and variable signal, yet the human experience is typically one of effortless comprehension. How the brain transforms such impoverished sensory input into a coherent linguistic message is currently unknown. In this talk, I address a bite-sized chunk of this question from the perspective of contextual integration, asking how surrounding phonemes, morphemes and words guide the perception of both upcoming and preceding speech sounds. I present data from a series of magnetoencephalography experiments that investigate how such context is integrated in service of spoken word recognition. I focus on the use of this context for resolving ambiguity, and briefly touch upon its utility in adapting to accented speech. The emerging picture is one of a dynamic processing system that not only permits, but actually capitalises upon, the bi-directional exchange of information across and within levels of linguistic description. I discuss these results in light of a hierarchical framework of language structure, drawing connections to linguistic theory and existing models of language comprehension.