This article presents an optimised and parallelised variant of HYPSOM [1], a network of self-organised hyperplanes intended for the approximation of multivariable functions. This network, initially equipped with a fixed structure, has been the subject of several studies aimed at giving it a growing structure [2]. These studies validated a learning algorithm based on the addition and elimination of neurons, which adapts the network structure to the arbitrary complexity of a function. Building on this work, this article aims to reduce the learning time of a HYPSOM by proposing a parallel implementation developed in an SCI environment. The latter exploits the intrinsic parallelism of the learning algorithm and improves execution time. A comparison between the sequential and parallel versions is also presented.