The team of Luc Arnal and Diane Lazard aims to understand the cerebral operations underlying the perception of communication signals (verbal, non-verbal, or musical) in humans. By combining psychoacoustic methods, neuroimaging (fMRI, intracranial EEG, M/EEG), and neurocomputational modeling, the group is revealing how the auditory system integrates auditory, visual, and emotional information to produce appropriate responses. These studies are opening new translational perspectives on sensory deficits (hearing loss) and their role in certain neurodegenerative diseases (e.g. Alzheimer's disease).
Predictive coding and the role of neuronal oscillations in the processing of speech and music
The human capacity to process continuous sound streams, such as speech or music, depends on proactive control of the oscillatory mechanisms of the auditory system. How do these neuronal mechanisms operate when sound perception is degraded, for example in mild or profound hearing loss? What strategies does the human brain use, in the short and long term, to compensate for these sensory deficits and process such signals? The researchers of this team are addressing these fundamental and clinical questions through a series of experiments in individuals with normal hearing and individuals with hearing impairment, studying:
- The role of neuronal oscillations in speech sampling.
- The role of predictive mechanisms in sound processing.
- The long-term cortical reorganizations involved in adaptation to a loss of auditory communication capacity.
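The notion of oscillatory "sampling" of speech can be made concrete with a minimal, illustrative sketch (not the team's actual analysis pipeline; the sample rate, modulation rate, and smoothing window are assumptions): speech carries slow amplitude modulations at roughly the syllabic rate (~4-8 Hz, the theta band), which can be recovered from the signal's envelope.

```python
import numpy as np

# Illustrative sketch (assumed parameters): a speech-like signal is modeled
# as broadband noise amplitude-modulated at a syllable-like theta rate.
fs = 1000                       # sample rate in Hz (assumption)
t = np.arange(0, 4, 1 / fs)     # 4 s of signal
mod_rate = 5.0                  # syllable-like modulation rate in Hz (assumption)

rng = np.random.default_rng(0)
carrier = rng.standard_normal(t.size)                     # broadband "carrier"
envelope = 0.5 * (1 + np.sin(2 * np.pi * mod_rate * t))   # slow theta envelope
signal = envelope * carrier

# Recover the envelope: rectify, then smooth with a 50 ms moving average.
rectified = np.abs(signal)
win = int(0.05 * fs)
smoothed = np.convolve(rectified, np.ones(win) / win, mode="same")

# Find the dominant modulation frequency via FFT (DC component removed).
spectrum = np.abs(np.fft.rfft(smoothed - smoothed.mean()))
freqs = np.fft.rfftfreq(smoothed.size, 1 / fs)
peak = freqs[np.argmax(spectrum)]
print(f"Dominant envelope modulation: {peak:.1f} Hz")  # a peak near 5 Hz
```

In this toy setting, the recovered envelope peaks at the imposed modulation rate, in the theta range where cortical oscillations are thought to track the speech envelope.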
Entrainment of cerebral responses via classical and non-classical auditory circuits in normal and pathological brains
The human auditory system is not equally sensitive to all frequencies of the audible spectrum. Certain sounds can trigger stereotyped emotional responses, suggesting that, beyond individual aesthetic preferences, our perception of and reactions to sound are shaped by factors of neurobiological origin.
In this series of experiments combining psychoacoustic and neuroimaging (fMRI, EEG, and intracranial electrophysiology) approaches, the "Auditory cognition and communication" team recently discovered that "rough" sounds engage not only the classical auditory system but also "non-classical" subcortical and limbic circuits, with effects on the listener's levels of stress and alertness. Surprisingly, the cerebral processing of these sounds appears to be altered in several neurodevelopmental and neurodegenerative diseases. To better understand why and how these sounds target these circuits, the researchers are comparing the entrainment of electrophysiological responses in the human brain (normal and pathological) and in animal models.
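As an illustration of what a "rough" stimulus looks like, here is a minimal synthesis sketch (not the team's stimuli; the carrier frequency and modulation rate are assumptions). Perceived roughness is classically associated with fast amplitude modulations, roughly in the 30-150 Hz range, which a simple amplitude-modulated tone can reproduce.

```python
import numpy as np

# Illustrative sketch (assumed parameters): synthesize a "rough" sound by
# amplitude-modulating a pure tone at a rate in the roughness range.
fs = 44100                  # audio sample rate in Hz
t = np.arange(0, 1.0, 1 / fs)
carrier_hz = 500.0          # carrier tone frequency (assumption)
mod_hz = 70.0               # modulation rate in the roughness range (assumption)

# 100%-depth amplitude modulation of the carrier.
rough = (1 + np.sin(2 * np.pi * mod_hz * t)) * np.sin(2 * np.pi * carrier_hz * t)
rough /= np.max(np.abs(rough))   # normalize to [-1, 1] for playback
```

Slowing the modulation below ~20 Hz yields a fluttering, non-rough percept, which is one simple way such stimuli can be parametrically varied in psychoacoustic experiments.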
Audiovisual interaction in hearing-impaired individuals
The exploration of audiovisual cooperation is a major line of research for this team. Lip-reading, despite being a major element of communication for hearing-impaired individuals, has been little explored in this population. The researchers hypothesize that the capacity for audiovisual interaction is established in early childhood and has little potential for improvement. An experimental paradigm will be used to test multisensory illusions in subjects with normal and impaired hearing.
The effect of learning to lip-read will also be analyzed. Behavioral and neuroimaging (EEG, fMRI) analyses will be performed to identify the neuronal mechanisms involved in audiovisual integration, and to better understand its interindividual variability and its consequences in cases of deafness.