When speaking, we somehow manage to (mostly) accurately encode potentially distant linguistic relations with limited working memory resources. On the one hand, accurate encoding of linguistic relations among multiple words suggests parallel processing of those words, and hence a non-trivial ‘look-ahead’ mechanism. On the other hand, the relatively severe limits on working memory capacity (along with conversational time pressure) suggest serial, largely word-by-word sentence production that demotes look-ahead to an auxiliary function. In this talk, I will present a series of experiments suggesting that speakers perform selective look-ahead according to grammatical needs, focusing on the dependency relation between verbs and their arguments. I will argue that such a grammatically motivated selective look-ahead mechanism might reconcile the seeming tension between parallel and serial views of sentence production.