Elephants can always remember: exact long-range memory effects in a non-Markovian random walk.

We consider a discrete-time random walk in which the random increment at time step t depends on the full history of the process. We calculate exactly the mean and variance of the position and discuss their dependence on the initial condition and on the memory parameter p. At a critical value p_c^(1) = 1/2, where memory effects vanish, there is a transition from a weakly localized regime [in which the walker (the elephant) returns to its starting point] to an escape regime. Inside the escape regime there is a second critical value at which the random walk becomes superdiffusive. The probability distribution is shown to be governed by a non-Markovian Fokker-Planck equation with hopping rates that depend both on time and on the starting position of the walk. On large scales the memory organizes itself into an effective harmonic-oscillator potential for the random walker with a time-dependent spring constant k = (2p-1)/t. The solution of this problem is a Gaussian distribution with time-dependent mean and variance, both of which depend on the initial condition of the process.
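The abstract leaves the microscopic update rule to the body of the paper. The sketch below assumes the standard elephant-random-walk rule usually associated with this model: at each step the walker recalls one of its previous steps, chosen uniformly at random, and repeats it with probability p or reverses it with probability 1-p; the very first step is taken to be +1 with an assumed probability q. The function and parameter names are illustrative, not taken from the paper.

```python
import random

def elephant_walk(T, p, q=0.5, rng=random):
    """Simulate one elephant-random-walk trajectory of T steps.

    p : memory parameter (probability of repeating a remembered step)
    q : assumed probability that the very first step is +1
    Returns the list of positions after each step.
    """
    steps = [1 if rng.random() < q else -1]  # first step, no history yet
    for _ in range(1, T):
        remembered = rng.choice(steps)       # recall a uniformly random past step
        # repeat it with probability p, otherwise reverse it
        steps.append(remembered if rng.random() < p else -remembered)
    positions, x = [], 0
    for s in steps:
        x += s
        positions.append(x)
    return positions

if __name__ == "__main__":
    # Empirical mean and variance of the final position over many walks,
    # to be compared with the exact p-dependent results discussed in the paper.
    T, p, samples = 1000, 0.6, 2000
    finals = [elephant_walk(T, p)[-1] for _ in range(samples)]
    mean = sum(finals) / samples
    var = sum((x - mean) ** 2 for x in finals) / samples
    print(f"p={p}: mean={mean:.2f}, variance={var:.1f}")
```

Averaging many such trajectories for different values of p gives empirical estimates of the mean and variance of the position, whose dependence on p and on the first step can be checked against the exact expressions derived in the paper.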
