Research Computing Desktops: Demystifying research computing for non-Linux users

Many members of the current generation of students and researchers are accustomed to intuitive computing devices and have never had to learn to use command-line-based systems, which make up the majority of high-performance computing (HPC) environments in use. In the 2013–14 time frame, Indiana University and Purdue University independently launched virtual desktop front-ends for their HPC clusters with the aim of offering an easier on-ramp for new users. Over the last five years we have iterated on and refined these approaches, and the two services now have over two thousand annual active users combined. More than 75% of those users report that the desktop services are moderately or extremely important to their ability to use HPC resources. In this paper, we share our experience bootstrapping this new service framework, bringing in end-users, dealing with runaway success, and making the service a sustainable offering. The paper offers a comprehensive picture of the driving motivations for desktops at each institution, the reasons users like them, and ways of getting started.
