Research overview
I am a theoretical physicist working at the frontier of statistical physics and computational biology. I combine analytical and numerical approaches to understand biological systems using physico-mathematical models.
PhD research
My PhD work focused on complex random walks with long-range memory effects, with the goal of providing new frameworks to model and understand the random motion of biomolecules, the kinetics of chemical reactions, and animal taxis. It covered two major topics: (1) first-passage problems for non-Markovian random walkers, and (2) characterizing the properties of self-reinforced random walks.
The first-passage time (FPT), defined as the time needed for a random walker to reach a given target point, is a key quantity for characterizing dynamic processes in a wealth of real-world systems (e.g. transport-limited reactions, neuron firing dynamics). I studied strongly non-Markovian random walks (whose dynamics depend on their entire history) as proxies for motion in complex environments. Among them, self-reinforced random walks (SRRWs) are a class of random walks in which memory effects emerge from the interaction of the walker with the territory it has visited at earlier times. They are often used to model complex processes in ecology, epidemiology, and computer science, and are notoriously difficult to characterize analytically.
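As a toy illustration of the FPT, the following sketch estimates first-passage times for a simple symmetric (Markovian) lattice walk; it is illustrative only and does not capture the non-Markovian memory effects discussed above. The function name and parameters are my own choices for the example.

```python
import random

def first_passage_time(target, rng, max_steps=1_000_000):
    """Number of steps for a symmetric 1D lattice walk started at 0
    to first reach `target` (capped at max_steps for safety, since
    the mean FPT of an unbounded symmetric walk is infinite)."""
    pos = 0
    for t in range(1, max_steps + 1):
        pos += rng.choice((-1, 1))  # unbiased step left or right
        if pos == target:
            return t
    return max_steps

rng = random.Random(42)
samples = [first_passage_time(5, rng) for _ in range(200)]
print(min(samples), max(samples))  # FPTs are very broadly distributed
```

Even this minimal simulation shows the heavy-tailed character of first-passage statistics: most trajectories reach the target quickly, while a few wander for orders of magnitude longer.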
Past postdoctoral research
My interest then shifted from local, memory-based decisions made without cognition towards developing new generic frameworks to model and understand cognitive decision-making. An essential point for me is to build such decision frameworks with analytical mathematical solutions, avoiding the current trend of accepting black-box solutions such as those often provided by machine learning.
I chose the multi-armed bandit (MAB) problem as a general formalism because it embodies the exploration-exploitation tradeoff. The MAB model is a simple slot-machine game where the goal is to maximize the payout by finding and playing the best arms. Since pulling sub-optimal arms is costly, MAB algorithms must carefully limit their exploration time and remain robust to noisy rewards. As a direct consequence, this abstract framework finds applications across a wide spectrum of domains, including neuroscience, reinforcement learning, and pharmaceutical trials.
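To make the tradeoff concrete, here is a minimal sketch of a Bernoulli bandit played with the standard epsilon-greedy heuristic. This is a textbook baseline for illustration only, not the approach developed in my work; the arm means and parameters are invented for the example.

```python
import random

def epsilon_greedy_bandit(true_means, n_rounds=5000, eps=0.1, seed=0):
    """Play a Bernoulli multi-armed bandit: with probability eps explore
    a uniformly random arm, otherwise exploit the arm with the best
    empirical mean so far."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms      # pulls per arm
    sums = [0.0] * n_arms      # cumulative reward per arm
    total = 0.0
    for _ in range(n_rounds):
        if rng.random() < eps or 0 in counts:
            arm = rng.randrange(n_arms)  # explore (or force initial pulls)
        else:
            arm = max(range(n_arms), key=lambda a: sums[a] / counts[a])
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total, counts

total, counts = epsilon_greedy_bandit([0.2, 0.5, 0.8])
print(counts)  # the best arm (mean 0.8) should dominate the pull counts
```

The fixed exploration rate `eps` is exactly the kind of crude, hand-tuned compromise that more principled approaches (optimistic indices, Bayesian sampling, or the functional-optimization viewpoint mentioned below) aim to replace.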
I developed a new approach derived from physical principles, optimizing a functional over the whole bandit game, which enables the MAB framework to be extended beyond its usual scope.