HAL Id: pasteur-04627434
DOI: 10.21203/rs.3.rs-4436764/v1
2024
Unraveling how humans effortlessly understand speech despite diverse environmental challenges has long intrigued researchers in systems and cognitive neuroscience. The interplay between the semantic and phonological structures of language has been debated in the linguistics and neurolinguistics literature but remains unresolved. We seek to understand the neural mechanisms underpinning this semantic-acoustic interplay in robust speech comprehension. To do so, we construct a computational mechanistic proof of the hypothesis that rhythmic, predictive top-down contextualization, facilitated by the delta rhythm, plays a pivotal role in achieving time-invariant speech processing. Our Brain-Rhythm-based Inference model, BRyBI, integrates three key rhythmic processes: theta-gamma interactions that parse phoneme sequences, a dynamic delta rhythm that carries the inferred prosodic-phrase context, and resilient speech representations. As a mechanistic proof of principle, BRyBI replicates human behavioral experiments, handling pitch variations, time-warped speech, interruptions, and silences in non-comprehensible contexts. Notably, the model aligns with human experiments in exhibiting optimal silence time scales in the theta- and delta-frequency ranges. Comparative analysis with deep neural network language models reveals distinctive performance patterns that underscore the unique capabilities of a rhythmic framework. In essence, our study sheds light on the neural underpinnings of speech processing and emphasizes the role of rhythmic brain mechanisms in structured temporal signal processing, an insight that challenges prevailing artificial-intelligence paradigms and points toward more compact and robust computing architectures.
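To make the nested rhythmic hierarchy named in the abstract concrete, the Python sketch below illustrates one hypothetical form of delta-theta-gamma phase-amplitude coupling: delta phase modulates theta amplitude (a slow contextual envelope), and theta phase gates gamma bursts (phoneme-scale sampling), with a crude readout of candidate syllable-scale windows. This is a minimal illustrative toy, not the published BRyBI implementation; all frequencies, amplitudes, thresholds, and the readout rule are assumptions chosen only to sit inside the canonical delta, theta, and gamma bands.

```python
import numpy as np

# Toy sketch of a nested rhythmic hierarchy (hypothetical, not BRyBI code):
# delta phase -> theta amplitude -> gamma bursts.

FS = 1000                       # sampling rate (Hz), illustrative
t = np.arange(0, 2.0, 1 / FS)   # 2 s of simulated time
F_DELTA, F_THETA, F_GAMMA = 2.0, 5.0, 40.0  # band-typical frequencies (Hz)

# Delta -> theta coupling: delta phase imposes a slow "contextual"
# envelope on theta power (phase-amplitude coupling).
theta_env = 0.5 * (1 + np.cos(2 * np.pi * F_DELTA * t))
theta = theta_env * np.cos(2 * np.pi * F_THETA * t)

# Theta -> gamma coupling: gamma bursts concentrate near each theta peak,
# where phoneme-scale sampling would occur in this framing.
gamma_gate = np.clip(np.cos(2 * np.pi * F_THETA * t), 0.0, None) ** 2
gamma = 0.3 * theta_env * gamma_gate * np.cos(2 * np.pi * F_GAMMA * t)

# Crude parsing readout: theta peaks with sufficient envelope mark
# candidate syllable-scale windows for downstream phoneme decoding.
is_peak = np.diff(np.sign(np.diff(theta))) < 0   # local maxima of theta
strong = theta_env[1:-1] > 0.5                   # arbitrary threshold
windows = np.flatnonzero(is_peak & strong) + 1
print(f"{len(windows)} candidate syllable windows at t = "
      f"{np.round(t[windows], 3)} s")

# Sanity check that gamma power is indeed gated by theta phase.
print(f"theta-phase gating of gamma power (corr): "
      f"{np.corrcoef(gamma_gate, gamma ** 2)[0, 1]:.2f}")
```

Under these assumptions, windows recur at the theta rate but only during the high-envelope half of each delta cycle, which is the sense in which the slow rhythm contextualizes the faster parsing stages.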