Research

Research in the MSP lab examines how perceptual and learning mechanisms operate within a multisensory environment. In particular, we are interested in how cross-modal relationships may help perceivers extract structured representations from highly variable input. To investigate these questions, we employ a range of behavioral techniques, including reaction-time studies, artificial language learning, phoneme categorization, and eye tracking (we use a Tobii T-60). Our current research projects are described below.

Projects:

Visual speech segmentation: This line of research investigates the role of visual input, particularly facial cues, in speech segmentation. Speech segmentation research has focused primarily on the nature and availability of auditory word boundary cues (e.g. stress, phonotactics, distributional properties). However, our research suggests that visual cues may enhance auditory cues and, further, appear to provide an independent cue to word boundaries. Current research in this area examines the extent of visual influence, the specific facial features that provide segmentation cues, and how environmental and listener factors may influence the recruitment of visual cues.

Related publication: Mitchel, A.D., & Weiss, D.J. (2013). Visual speech segmentation: Using facial cues to locate word boundaries in continuous speech. Language and Cognitive Processes. Advance online publication. doi:10.1080/01690965.2013.791703.
_____________________________________________________________________________________

Multimodal statistical learning: Theories of statistical learning have posited independent computational mechanisms for the auditory, visual, and tactile sensory modalities. We test this claim by examining the effect of manipulating cross-modal relationships in multisensory statistical learning paradigms (e.g. learning shapes and tones simultaneously). Our previous research has established the presence of such cross-modal effects, and we are currently exploring the conditions under which these effects arise.

Related publication: Mitchel, A.D., & Weiss, D.J. (2011). Learning across senses: Cross-modal effects in multisensory statistical learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 1081-1091.
_____________________________________________________________________________________

Individual differences in multisensory integration: This research seeks to uncover the factors underlying variability in multisensory integration abilities. In this nascent line of research, we are investigating whether susceptibility to multisensory illusions (e.g. the McGurk effect) is related to individual differences in perceptual, social-cognitive, and genetic factors.
_____________________________________________________________________________________