If you've ever used an automated voice recognition system, like the ones that route you through a corporate phone directory, you have a sense of how hard it is to understand someone else's speech. If we can't teach a machine to do it, how is it that we humans can understand sentences spoken at a rate of about 300 words per minute? As if life couldn't get more challenging, speech changes, too: speaker by speaker and dialect by dialect, people don't all speak in the same way.

My research focuses on plasticity in speech perception; in particular, on learning and adaptation. We learn new phonetic categories when we acquire a new language, and we adapt to variation in the speech of others in our native language. Both of these processes require our brains to be plastic, to change with experience. I study how that happens.
Much of my research involves behavioral tasks: I bring people into the lab and have them do simple tasks in which they learn new categories or write down sentences. Here at UConn, I am also picking up brain-imaging techniques, primarily those based on magnetic resonance imaging (MRI).
© 2010-2017 Chris Heffner; Updated 2017-09-02