Algorithms for the self-organisation of distributed, multi-user networks. Possible application to the future World Wide Web

The World Wide Web has a number of striking similarities with other learning networks, natural or artificial. Its structure is that of a distributed network of nodes and links, and it evolves and adapts by being continuously updated and expanded by its contributors and users. This paper describes our attempts to devise a number of algorithms that can make distributed hypertext networks such as the World Wide Web self-organise according to their users' knowledge. In a number of experiments, experimental networks of English nouns were browsed via the Internet by several thousand participants. These networks evolved into a stable state which more or less represented the participants' shared knowledge structure and associations.

In spite of these similarities, the WWW lacks some important functional attributes typical of biological and artificial neural networks. First, neural networks are normally not intended merely to store information, but to control and guide goal-directed behaviour; the WWW, however, performs no task other than information storage. Second, most neural networks are equipped with mechanisms to adapt the knowledge and models they contain. This adaptation lies at the heart of an error-correcting feedback loop that characterises biological as well as artificial neural networks [McClelland & Rumelhart, 1986]: 'knowledge → behaviour → effect → perception → knowledge adjustment'. The WWW has no such error-correcting mechanism: it evolves, but it does not adapt.

One might argue that it is not the WWW's goal to simulate brains or neural networks, but to provide reliable and user-friendly access to stored knowledge. Yet it is questionable whether the present WWW — and the hypermedia paradigm in general [Nielsen, 1990] — succeeds in this [Jonassen, 1989; 1993]. The WWW's content is presently expanding at an enormous pace, but the quality of its structure does not seem to improve.
This should not surprise us, as the only mechanism for network restructuring at present is the contribution of individual web-designers, each adding their own, often poorly designed, sub-networks to the WWW. The WWW, being no more than the sum of its parts, can achieve no better quality of structure than that of these sub-networks. This causes the WWW to be, in general, very poorly organized, which in its turn seriously hampers efficient and user-friendly retrieval of information [Hammond, 1993]. With an ever-expanding amount of information being added to the WWW, this problem can only be expected to worsen within the present set-up.
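The kind of error-correcting feedback loop discussed above can be given a concrete, if simplified, form. The following sketch — a minimal illustration, not the paper's actual algorithm — shows a hypertext network whose link weights are reinforced whenever a user follows a link, so that the network's structure gradually comes to reflect its users' associations (the class name, reward value, and menu size are all illustrative assumptions):

```python
# A minimal sketch of an adaptive hypertext network: each node keeps
# weighted links to other nodes, and a link's weight is reinforced
# whenever a user follows it. This closes a simple version of the
# 'behaviour -> effect -> perception -> knowledge adjustment' loop:
# browsing behaviour feeds back into the network's structure.
# (Illustrative only; the actual experimental algorithms may differ.)

class AdaptiveNetwork:
    def __init__(self, nodes):
        # Start with uniform link weights between every pair of nodes.
        self.weights = {a: {b: 1.0 for b in nodes if b != a}
                        for a in nodes}

    def suggested_links(self, node, top=3):
        # Present the strongest links first, as a browsing menu would.
        ranked = sorted(self.weights[node].items(),
                        key=lambda kv: kv[1], reverse=True)
        return [b for b, _ in ranked[:top]]

    def follow(self, a, b, reward=1.0):
        # A user traversing a -> b counts as implicit approval
        # of that link, so its weight is increased.
        self.weights[a][b] += reward


net = AdaptiveNetwork(["dog", "cat", "bone", "tree"])
for _ in range(10):
    net.follow("dog", "bone")   # many users associate 'dog' with 'bone'
net.follow("dog", "tree")       # one user follows 'dog' -> 'tree'

print(net.suggested_links("dog"))  # 'bone' now ranks first
```

With many users browsing, frequently followed links accumulate weight and rise to the top of each node's menu, while unused links sink — a stable ordering emerges that approximates the users' shared associations, as in the noun-network experiments described above.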