Networked telerobots transmit data from their sensors to a remote controller. To provide guarantees on the timing requirements of these systems, the transmission time delays must be kept below a given threshold, which in turn requires predicting them. In this paper we tackle the parallelization of a procedure that models these stochastic time delays. More precisely, we focus on fitting the time delay signal with a three-parameter log-logistic distribution. Since both the robot and the controller are powered by multicore processors and, particularly on the robot, energy consumption is a relevant concern, we study different alternatives to optimize both the performance and the energy usage of this algorithm. Two quad-core processors are considered: a low-power Intel Core i7 (45 W TDP) and an ultra-low-power Samsung Exynos 5 (6 W TDP). Results show that parallelism is beneficial, but that not all cores should be exploited when the system targets an optimal performance-energy tradeoff.
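For illustration only, a minimal sketch of the kind of fit described above is given below in Python, using SciPy's `fisk` distribution (SciPy's name for the log-logistic, with shape, location, and scale as its three parameters). The simulated delay data, the parameter values, and the quantile check are hypothetical and are not the procedure or measurements from this paper.

```python
import numpy as np
from scipy import stats

# Hypothetical round-trip delay samples (ms) standing in for the
# measured telerobot delay signal.
rng = np.random.default_rng(0)
delays = stats.fisk.rvs(c=2.5, loc=20.0, scale=5.0, size=10_000,
                        random_state=rng)

# Maximum-likelihood fit of the three-parameter log-logistic
# distribution: shape c, location loc, and scale.
c, loc, scale = stats.fisk.fit(delays)
print(f"shape={c:.3f}  location={loc:.3f} ms  scale={scale:.3f} ms")

# A delay threshold can then be compared against a high quantile of
# the fitted distribution, e.g. the 99th percentile.
p99 = stats.fisk.ppf(0.99, c, loc=loc, scale=scale)
print(f"99th-percentile delay: {p99:.2f} ms")
```

In this sketch the fitted distribution yields a high quantile that can be checked against the delay threshold; the paper's contribution concerns parallelizing such a fitting procedure on multicore hardware rather than the fit itself.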