Random walks with “back buttons”

We introduce backoff processes, an idealized stochastic model of browsing on the world-wide web, which incorporates both hyperlink traversals and use of the “back button.” With some probability the next state is generated by a distribution over out-edges from the current state, as in a traditional Markov chain. With the remaining probability, however, the next state is generated by clicking on the back button, and returning to the state from which the current state was entered by a “forward move”. Repeated clicks on the back button require access to increasingly distant history. We show that this process has fascinating similarities to and differences from Markov chains. In particular, we prove that like Markov chains, backoff processes always have a limit distribution, and we give algorithms to compute this distribution. Unlike Markov chains, the limit distribution may depend on the start state.
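As a concrete illustration (not taken from the paper), the following is a minimal Python sketch of such a process. It assumes a single global back-probability `alpha` and a fixed forward distribution over out-edges, and it forces a forward move whenever the history stack is empty; the function name `simulate_backoff` and the two-state example are illustrative assumptions, not the paper's algorithm. The empirical visit frequencies it returns approximate the limit distribution discussed above.

```python
import random


def simulate_backoff(P, alpha, start, steps, seed=None):
    """Simulate a backoff process and return empirical visit frequencies.

    Hedged sketch, not the paper's exact model:
    P     : dict mapping each state to a list of (next_state, probability)
            pairs describing its out-edges (a row of the forward matrix)
    alpha : probability of pressing the back button at each step
    start : initial state
    steps : number of transitions to simulate
    """
    rng = random.Random(seed)
    history = []                 # stack of states entered by forward moves
    state = start
    counts = {state: 1}
    for _ in range(steps):
        if history and rng.random() < alpha:
            # Back move: return to the state from which we arrived.
            state = history.pop()
        else:
            # Forward move: sample an out-edge, pushing the current state.
            r, acc = rng.random(), 0.0
            nxt = P[state][-1][0]    # fallback guards against rounding
            for cand, p in P[state]:
                acc += p
                if r < acc:
                    nxt = cand
                    break
            history.append(state)
            state = nxt
        counts[state] = counts.get(state, 0) + 1
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()}


# Example: two states with asymmetric forward probabilities.
P = {0: [(0, 0.5), (1, 0.5)], 1: [(0, 0.9), (1, 0.1)]}
print(simulate_backoff(P, alpha=0.3, start=0, steps=100_000, seed=1))
```

Running the simulation from different start states gives a quick empirical check of the paper's observation that, unlike for Markov chains, the limit distribution of a backoff process may depend on where the walk begins.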
