In our lab, we investigate how vision informs both speech perception and language development. For example, in a noisy restaurant we often rely on the speaker's face to help us disentangle and comprehend speech. We are interested in the mechanisms that underlie this ability, as well as the benefits of visual speech in other domains, such as speech segmentation and phoneme categorization.