Towards Developmental AI: The paradox of Ravenous Intelligent Agents

In spite of extraordinary achievements on specific tasks, today's intelligent agents still struggle to acquire a genuine ability to deal with many challenging human cognitive processes, especially when a changing environment is involved. In recent years, growing awareness of this critical issue has led to the development of interesting mechanisms bridging symbolic and sub-symbolic representations, as well as new theories aimed at reducing the huge gap between most approaches to learning and reasoning. While the search for such a unified view of intelligent processes may still be an unavoidable path to follow in the years to come, in this paper we claim that we remain trapped in an insidious paradox: feeding the agent with all the available information at once may be a major reason for failure when aspiring to achieve human-like cognitive capabilities. We argue that the developmental path of children, as well as that of primates, mammals, and most animals, might not be primarily the outcome of biological laws; it could instead be the consequence of a more general complexity principle, according to which environmental information must be properly filtered so as to focus attention on "easy tasks" first. We claim that this necessarily leads to stage-based developmental strategies that any intelligent agent must follow, regardless of its body.
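To make the stage-based idea more concrete, the following is a minimal sketch under one possible reading of the complexity principle, namely a curriculum-learning-style schedule: a toy learner is first exposed only to samples ranked as "easy" by an illustrative difficulty measure (here, the input magnitude), and the admitted portion of the environment grows in stages. The task, the difficulty measure, and all names in the code are our own illustrative assumptions, not part of the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy environment: learn y = sin(3x) on x in [-1, 1] with a small
# polynomial model.  "Difficulty" is taken to be |x|: near the origin
# the target is almost linear, so those samples play the role of the
# "easy tasks" (an illustrative assumption, not prescribed by the paper).
X = rng.uniform(-1.0, 1.0, size=400)
y = np.sin(3.0 * X)

def features(x, degree=5):
    # Polynomial feature map: [1, x, x^2, ..., x^degree].
    return np.stack([x ** k for k in range(degree + 1)], axis=-1)

def train(x, t, w_init, epochs=500, lr=0.1):
    # Plain least-squares gradient descent, warm-started from w_init.
    Phi = features(x)
    w = w_init.copy()
    for _ in range(epochs):
        grad = Phi.T @ (Phi @ w - t) / len(t)
        w -= lr * grad
    return w

def mse(w, x, t):
    return float(np.mean((features(x) @ w - t) ** 2))

# Stage-based ("developmental") schedule: the agent starts from the
# easiest samples and progressively admits harder ones, warm-starting
# each stage from the weights learned in the previous one.
order = np.argsort(np.abs(X))      # easy (small |x|) samples first
stages = [100, 200, 300, 400]      # growing portion of the environment
w = np.zeros(6)
for n in stages:
    pool = order[:n]               # filtered view of the environment
    w = train(X[pool], y[pool], w)

print("staged-curriculum MSE:", mse(w, X, y))

# "Ravenous" baseline: the agent is fed all the information at once.
w_all = train(X, y, np.zeros(6))
print("all-at-once MSE      :", mse(w_all, X, y))
```

Whether the staged schedule actually outperforms the all-at-once baseline depends on the model, the task, and the difficulty measure; the sketch is only meant to illustrate the filtering-and-staging mechanism itself, not to establish its advantage.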