The cognitive neuroscience of language has made only limited contact with syntactic theory, in part because the relationship between grammatical knowledge and neural signals is indirect and often under-specified. This talk describes how neuro-computational models of sentence processing can be used to rigorously link theories of grammar with brain mechanisms. Such models explicitly describe the cognitive representations constructed during sentence comprehension and quantify how those representations modulate measurable brain signals. The talk presents a set of such models that vary parametrically in how grammatical knowledge constrains expectations about upcoming linguistic input and in how complex phrase structures are built incrementally. These models are tested against electroencephalography and electrocorticography data collected while participants perform a simple and natural task: listening to an audiobook story. By comparing the fit between alternative models and the measured brain data, the results constrain theories of both the syntactic representations created incrementally during comprehension and the algorithms by which those representations are formed.
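The model-comparison logic described above can be caricatured in a few lines: each candidate grammar yields a word-by-word complexity predictor, and the predictor that better explains the recorded brain signal supports its grammar. The sketch below is purely illustrative, with simulated data and made-up predictor names (it is not the talk's actual pipeline); it uses ordinary least squares and compares fits via R².

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-word complexity predictors derived from two grammars.
# These values are simulated for illustration only.
n_words = 200
surprisal_a = rng.gamma(shape=2.0, scale=1.5, size=n_words)          # grammar A
surprisal_b = surprisal_a + rng.normal(0.0, 1.0, size=n_words)       # grammar B (noisier variant)

# Simulated brain response: driven by grammar A's predictor plus noise.
eeg = 0.8 * surprisal_a + rng.normal(0.0, 0.5, size=n_words)

def model_fit(predictor, signal):
    """R^2 of an ordinary-least-squares fit of signal on predictor."""
    X = np.column_stack([np.ones_like(predictor), predictor])
    beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
    resid = signal - X @ beta
    return 1.0 - resid.var() / signal.var()

r2_a = model_fit(surprisal_a, eeg)
r2_b = model_fit(surprisal_b, eeg)
print(f"grammar A fit: {r2_a:.3f}, grammar B fit: {r2_b:.3f}")
```

In this toy setup, grammar A's predictor fits better because the simulated signal was generated from it; with real EEG or ECoG data, the same comparison adjudicates between grammars whose truth is unknown.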