The Neural Basis of Audio/Visual Event Perception
Researchers from the Temporal Dynamics of Learning Center are working as a team to explore how the brain combines the sounds and sights of unfolding events, such as ripping paper or a bouncing ball. To better understand how such audio-visual stimuli come to be perceived as a single integrated event, Michael Tarr of Brown University and colleagues from Brown, the University of Colorado Boulder, and UCSD designed a functional magnetic resonance imaging (fMRI) experiment in which participants view movies of events presented unimodally (just the movie or just the sound) or multimodally (a congruent movie and sound, or an incongruent movie and sound).

The results to date reveal strong activation within both visual and auditory processing areas, as well as robust multimodal integration areas in which activity for multimodal stimuli is greater than for unimodal stimuli. The team will also examine whether any brain area within these regions shows an effect of congruency, both semantic and temporal. They aim to elucidate the computational mechanisms that "figure out" how a sound unfolding in time and a visual action go together (integration), and they hypothesize that these same mechanisms are the ones most likely to be sensitive to a particular sound failing to match a particular event.

Because fMRI is much better at telling us which brain area is active than when it is engaged, the team plans to repeat the same basic paradigm with a different method, event-related potentials, which offers much finer temporal resolution. Finally, other members of the team are working toward computational models capable of accounting for how disparate sensory information is bound into coherent percepts.
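The integration criterion described above can be sketched in a few lines of code. This is only an illustrative toy, not the team's actual analysis: the function name, region labels, and response values below are all invented, and real fMRI analyses use statistical contrasts estimated across many trials rather than single mean responses.

```python
# Toy sketch of the multimodal-integration test: a region counts as an
# "integration" site when its response to the combined audio-visual
# stimulus exceeds its response to either unimodal stimulus alone.
# All region names and numbers are invented for illustration.

def is_integration_site(visual_only, audio_only, audiovisual):
    """True if the multimodal response exceeds both unimodal responses."""
    return audiovisual > max(visual_only, audio_only)

# Hypothetical mean responses (arbitrary units) per condition:
# V = movie only, A = sound only, AV = movie plus sound.
regions = {
    "visual_cortex":   {"V": 2.1, "A": 0.3, "AV": 2.0},
    "auditory_cortex": {"V": 0.2, "A": 1.9, "AV": 1.8},
    "candidate_STS":   {"V": 1.0, "A": 1.1, "AV": 2.6},
}

integration_sites = [
    name for name, r in regions.items()
    if is_integration_site(r["V"], r["A"], r["AV"])
]
print(integration_sites)  # → ['candidate_STS']
```

In this toy example, only the hypothetical "candidate_STS" region passes the criterion: the purely visual and purely auditory regions respond strongly to their own modality but gain nothing from adding the other.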