Designing a High Performance Parallel Personal Cluster

Today, many scientific and engineering areas require high-performance computing to carry out computationally intensive experiments. For example, many advances in transport phenomena, thermodynamics, material properties, and computational chemistry and physics are possible only because of the availability of such large-scale computing infrastructures. Yet many challenges remain open. The costs of energy consumption and cooling, along with competition for resources, are among the reasons why the scientific and engineering communities are turning their interest to energy-efficient servers built on low-power CPUs for computing-intensive tasks. In this paper we introduce a novel approach, recently presented at Linux Conference Europe 2015, based on the Beowulf concept and utilizing single-board computers (SBCs). We present a low-energy-consumption architecture capable of tackling heavily demanding scientific computational problems. Additionally, our goal is to provide a low-cost personal solution for scientists and engineers. To evaluate the performance of the proposed architecture we ran several standard benchmarking tests. Furthermore, we assess the reliability of the machine in real-life situations by running two benchmarks based on practical TCAD tools used by physicists and engineers in the semiconductor industry.
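The workloads a Beowulf-style cluster targets are typically embarrassingly parallel: a coordinator splits a computation across nodes and combines partial results. The sketch below illustrates that pattern with a Monte Carlo estimate of pi (Monte Carlo being the method underlying the TCAD simulations mentioned above). It is only an illustration, not the paper's benchmark suite: on the actual cluster each worker would be an MPI rank on a separate single-board computer, whereas here Python's `multiprocessing` stands in for MPI on one machine.

```python
# Illustrative sketch of an embarrassingly parallel workload, the kind a
# Beowulf cluster distributes across nodes. multiprocessing.Pool plays the
# role that MPI ranks would play on the real single-board-computer cluster.
import random
from multiprocessing import Pool


def count_hits(args):
    """Count random points in the unit square that fall inside the quarter circle."""
    seed, n = args
    rng = random.Random(seed)  # per-worker RNG so workers are independent
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits


def parallel_pi(total_samples=400_000, workers=4):
    """Split the sample budget across workers and combine their counts."""
    per_worker = total_samples // workers
    with Pool(workers) as pool:
        hits = pool.map(count_hits, [(i, per_worker) for i in range(workers)])
    # Area of the quarter circle is pi/4, so pi ~= 4 * (hits / samples).
    return 4.0 * sum(hits) / (per_worker * workers)


if __name__ == "__main__":
    print(f"pi estimate: {parallel_pi():.3f}")
```

Because each worker touches only its own samples, speedup scales with the number of nodes until communication (here, collecting the per-worker counts) dominates; this is the regime in which low-power SBC clusters are most competitive.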
