Anticipating the featural content and the time point of a particular sensory event ('what happens when?') is beneficial for processing. Current frameworks of brain function espouse the notions of prediction and prediction error and suggest minimally redundant representations. Linguistic theory, particularly within Generative Phonology, has also stressed the advantage of abstract (underspecified) representations, but hitherto the two concepts have existed relatively independently of each other.
In this talk, I try to bridge the views from Neuropsychology and Linguistics by focusing on predictive processing, which appears to be of particular relevance for speech, where the bottom-up signal is hardly ever optimal. Experimental evidence will be provided at the level of the phoneme, the word, and the sentence. The suggested framework provides specific roles for predictions derived both from the sensory input (bottom-up) and from the specificity of speech sound representations (top-down).