Network Topology and Efficiency of Observational Social Learning

This paper explores the relationship between the topology of a network of agents and how efficiently they can learn a common unknown parameter. Agents repeatedly make private observations that are possibly informative about the unknown parameter, and they communicate their beliefs over the set of conceivable parameter values to their neighbors. It has been shown that for agents to learn the realized state, it is sufficient that each agent incorporates its private observations into its beliefs in a Bayesian way and incorporates its neighbors' beliefs using a fixed linear rule. In this paper we establish upper and lower bounds on the rate at which agents performing such an update learn the realized state, and we show that these bounds can be tight. The bounds enable us to compare the efficiency of different networks in aggregating dispersed information. Our analysis yields an important insight: while learning by agents in large “balanced” social networks is much slower than learning by a central observer, unbalanced networks can result in near-efficient learning.
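The update rule described above can be illustrated with a minimal simulation sketch. All names, network weights, and signal likelihoods below are illustrative assumptions, not taken from the paper: agents on a ring observe i.i.d. binary signals, each forms the Bayesian posterior from its own signal, and then mixes it linearly with its neighbors' current beliefs via a fixed row-stochastic weight matrix.

```python
import numpy as np

# Hedged sketch (all parameters are illustrative assumptions):
# n agents try to learn which of two states generates their signals.
# Each round, agent i Bayes-updates on its private signal, then combines
# that posterior with neighbors' (pre-update) beliefs using fixed linear
# weights from a row-stochastic matrix A.

rng = np.random.default_rng(0)
n = 5
true_state = 0
lik = np.array([0.3, 0.7])  # assumed P(signal = 1 | state)

# Assumed ring network with self-loops; rows of A sum to 1.
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 0.5
    A[i, (i + 1) % n] = 0.25
    A[i, (i - 1) % n] = 0.25

beliefs = np.full((n, 2), 0.5)  # uniform prior over the two states

for t in range(200):
    signals = (rng.random(n) < lik[true_state]).astype(int)
    # Each agent's Bayesian posterior given only its own signal
    like = np.where(signals[:, None] == 1, lik[None, :], 1 - lik[None, :])
    bayes = beliefs * like
    bayes /= bayes.sum(axis=1, keepdims=True)
    # Fixed linear rule: self-weight on the Bayesian update,
    # remaining weight on neighbors' current beliefs
    D = np.diag(A.diagonal())
    beliefs = D @ bayes + (A - D) @ beliefs

print(beliefs[:, true_state])  # beliefs on the realized state
```

Since every row of the weight matrix sums to one, each agent's belief vector remains a probability distribution after every round; with informative signals, the belief mass on the realized state grows over time.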