I will present a series of experiments aimed at understanding how listeners process repair disfluencies in real time (e.g., “The woman went to the animal shelter and brought home a dog uh I mean a rabbit.”). Using a visual-world eye-tracking paradigm, we have shown that listeners actively predict the upcoming repair during the “uh I mean” portion of the utterance (i.e., there is a strong tendency for listeners to look at a picture of a rabbit in this example, before hearing the word “rabbit”). This pattern is much stronger than the tendency to generate similar predictions in the context of noun phrase coordination (e.g., “…a dog and also a rabbit”), but comparable to the tendency to generate predictions in the context of contrastive focus (e.g., “…not a dog but rather a rabbit”). I will also present work showing that the tendency to generate predictions during the processing of repair disfluencies is disrupted when the speaker stutters, as well as work showing that listeners can exploit linguistic cues to plausibility and speaker certainty to rapidly anticipate a speech error even before the speaker becomes disfluent. This work is interpreted within a noisy-channel framework of language comprehension, according to which listeners actively model the communicative intentions of the speaker, combining the linguistic input with assumptions about the speaker’s intended meaning and mentally correcting any perceived errors.