I will discuss two new studies from my lab that address general questions about the cognitive science and neural implementation of speech and language. I come to (currently) unpopular conclusions about both questions. Based on a first set of experiments, using fMRI and exploiting the temporal statistics of speech, I argue for the existence of a speech-specific processing stage that implicates a particular neuronal substrate (the superior temporal sulcus). In a second set of experiments, using MEG, I go on to show how temporal encoding can form the basis for more abstract, structural processing. The results demonstrate that, while listening to connected speech, cortical activity at different time scales is concurrently entrained to track the time course of linguistic structures at different hierarchical levels. Critically, entrainment to hierarchical linguistic structures is dissociated from the neural encoding of acoustic cues and from processing the predictability of incoming words. These results demonstrate syntax-driven, internal construction of hierarchical linguistic structure via entrainment of hierarchical cortical dynamics. The conclusions, namely that speech is special and that language processing is driven by syntactic structure, provide new neurobiological provocations to the prevailing view that speech is merely hearing and that language is merely statistics.