Multi-sensory processing of faces and voices

Ongoing

Project description

Functional magnetic resonance imaging (fMRI) has revealed regions in the occipital and temporal lobes of the brain that consistently respond more strongly to images of faces than to images of other objects. Voice-responsive regions in the lateral temporal lobes show similar selectivity, responding more strongly to vocal sounds than to other auditory stimuli. However, we still have a limited understanding of how the brain combines and integrates information processed in these face-responsive and voice-responsive regions to recognise someone’s identity.

Two major models have been put forward. The Multimodal Processing Model proposes that there are multimodal brain regions that process information about people, and that these receive input from face-responsive regions (visual) and voice-responsive regions (auditory). The Coupling of Face and Voice Processing Model proposes that there are direct connections between face-responsive regions and voice-responsive regions. According to this view, the direct coupling between these regions is crucial for the integration of information from faces and voices.

The ‘multimodal’ and ‘coupling’ mechanisms proposed in the two models above are not mutually exclusive and, with this project, we aim to investigate the relative contributions of these integration mechanisms to the recognition of person identity. We will use behavioural and fMRI methods (in particular, Representational Similarity Analysis) to address these questions.
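To give a flavour of the analysis approach, the sketch below illustrates the core logic of Representational Similarity Analysis in Python. It is a minimal, illustrative example only: the region names, the number of person identities, and the simulated response patterns are hypothetical placeholders, not data or code from this project. The idea is to summarise each region's multi-voxel response patterns as a representational dissimilarity matrix (RDM) and then compare the RDMs across regions.

```python
# A minimal RSA sketch, assuming hypothetical (conditions x voxels) response
# patterns for two regions. All names and data below are illustrative only.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Simulated multi-voxel response patterns:
# 12 person identities (conditions) x 100 voxels per region.
face_region_patterns = rng.normal(size=(12, 100))
voice_region_patterns = rng.normal(size=(12, 100))

def rdm(patterns):
    """Representational dissimilarity matrix (condensed form):
    correlation distance (1 - Pearson r) between each pair of conditions."""
    return pdist(patterns, metric="correlation")

# Compare the representational geometries of the two regions by rank-correlating
# their RDMs (Spearman correlation is the conventional choice in RSA).
rho, p = spearmanr(rdm(face_region_patterns), rdm(voice_region_patterns))
print(f"RDM similarity between regions: rho = {rho:.2f} (p = {p:.3f})")
```

In practice, the same comparison can be made between an RDM measured in a brain region and a model RDM (for example, one predicting that patterns cluster by person identity across faces and voices), which is how RSA can help adjudicate between the multimodal and coupling accounts.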