GENRE (GPU Elastic-Net REgression): A CUDA-Accelerated Package for Massively Parallel Linear Regression with Elastic-Net Regularization

GENRE (GPU Elastic-Net REgression) is a package that allows many instances of linear regression with elastic-net regularization to be processed in parallel on a GPU. It is written in the C programming language and uses NVIDIA's (NVIDIA Corporation, Santa Clara, CA, USA) Compute Unified Device Architecture (CUDA) parallel programming framework. Linear regression with elastic-net regularization (Zou & Hastie, 2005) is a widely used tool for model-based analyses. The basis of this method is that it applies a combination of L1-regularization and L2-regularization to a given regression problem, so feature selection and coefficient shrinkage are performed while groups of correlated features can still be retained in the model. Performing these model fits can be computationally expensive, and one of the fastest packages currently available is glmnet (Friedman, Hastie, & Tibshirani, 2010; Hastie & Qian, 2014; Qian, Hastie, Tibshirani, & Simon, 2013), which provides highly efficient Fortran implementations of several different types of regression. Its implementation of linear regression with elastic-net regularization minimizes the objective function shown in (eq. 1).
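For reference, the objective that glmnet minimizes for this problem, which the paper's (eq. 1) presumably matches, is the standard elastic-net criterion for a response vector $y \in \mathbb{R}^N$ and an $N \times P$ predictor matrix with rows $\mathbf{x}_i$:

$$
\min_{\beta_0,\,\boldsymbol{\beta}} \; \frac{1}{2N} \sum_{i=1}^{N} \left( y_i - \beta_0 - \mathbf{x}_i^{\mathsf{T}} \boldsymbol{\beta} \right)^2 + \lambda \left( \frac{1-\alpha}{2} \lVert \boldsymbol{\beta} \rVert_2^2 + \alpha \lVert \boldsymbol{\beta} \rVert_1 \right)
$$

where $\lambda \ge 0$ sets the overall penalty strength and $\alpha \in [0,1]$ mixes the two penalties: $\alpha = 1$ recovers the lasso (pure L1) and $\alpha = 0$ recovers ridge regression (pure L2).

To make the parallelization idea concrete, the sketch below assigns one CUDA thread to each independent model fit and solves it with the cyclic coordinate-descent update used by glmnet (soft-thresholding of the partial-residual correlation). This is a minimal illustration under stated assumptions, not the GENRE implementation: it assumes standardized, centered predictors and a centered response (so the intercept is dropped), uses a fixed sweep count instead of a convergence check, and every name (`elastic_net_fits`, `soft_threshold`, the problem sizes) is hypothetical.

```cuda
// elastic_net_sketch.cu -- illustrative sketch only, not the GENRE code.
// One thread runs cyclic coordinate descent for one small elastic-net fit.
#include <stdio.h>
#include <cuda_runtime.h>

#define N_OBS  8    /* observations per fit (toy size) */
#define N_FEAT 4    /* features per fit */
#define N_FITS 2    /* independent fits solved in parallel */
#define N_ITER 100  /* fixed sweep count; a real solver checks convergence */

__device__ float soft_threshold(float z, float gamma) {
    /* S(z, gamma) = sign(z) * max(|z| - gamma, 0) */
    if (z >  gamma) return z - gamma;
    if (z < -gamma) return z + gamma;
    return 0.0f;
}

__global__ void elastic_net_fits(const float *X, const float *y,
                                 float *beta, float lambda, float alpha) {
    int fit = blockIdx.x * blockDim.x + threadIdx.x;
    if (fit >= N_FITS) return;
    const float *Xf = X + (size_t)fit * N_OBS * N_FEAT; /* column-major per fit */
    const float *yf = y + (size_t)fit * N_OBS;
    float *b = beta + (size_t)fit * N_FEAT;
    float r[N_OBS];                       /* residuals; beta starts at zero */
    for (int i = 0; i < N_OBS; ++i) r[i] = yf[i];
    for (int it = 0; it < N_ITER; ++it) {
        for (int j = 0; j < N_FEAT; ++j) {
            const float *xj = Xf + (size_t)j * N_OBS;
            /* z = (1/N) x_j . (r + x_j * b_j), the partial-residual correlation */
            float z = 0.0f;
            for (int i = 0; i < N_OBS; ++i) z += xj[i] * (r[i] + xj[i] * b[j]);
            z /= N_OBS;
            /* update: beta_j = S(z, lambda*alpha) / (1 + lambda*(1-alpha)) */
            float bj = soft_threshold(z, lambda * alpha)
                       / (1.0f + lambda * (1.0f - alpha));
            float d = bj - b[j];
            if (d != 0.0f) {
                for (int i = 0; i < N_OBS; ++i) r[i] -= d * xj[i];
                b[j] = bj;
            }
        }
    }
}

int main(void) {
    float hX[N_FITS * N_OBS * N_FEAT], hy[N_FITS * N_OBS];
    /* Toy data: feature 0 alternates +/-1 (unit variance); others weak noise. */
    for (int f = 0; f < N_FITS; ++f)
        for (int j = 0; j < N_FEAT; ++j)
            for (int i = 0; i < N_OBS; ++i)
                hX[(f * N_FEAT + j) * N_OBS + i] =
                    (j == 0) ? ((i % 2) ? 1.0f : -1.0f)
                             : 0.1f * (float)((i + j) % 3 - 1);
    for (int f = 0; f < N_FITS; ++f)
        for (int i = 0; i < N_OBS; ++i)   /* y = 2 * x0 exactly */
            hy[f * N_OBS + i] = 2.0f * hX[f * N_FEAT * N_OBS + i];

    float *dX, *dy, *db;
    cudaMalloc(&dX, sizeof hX);
    cudaMalloc(&dy, sizeof hy);
    cudaMalloc(&db, N_FITS * N_FEAT * sizeof(float));
    cudaMemcpy(dX, hX, sizeof hX, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, sizeof hy, cudaMemcpyHostToDevice);
    cudaMemset(db, 0, N_FITS * N_FEAT * sizeof(float));

    elastic_net_fits<<<1, N_FITS>>>(dX, dy, db, 0.1f, 0.5f); /* lambda, alpha */
    cudaDeviceSynchronize();

    float hb[N_FITS * N_FEAT];
    cudaMemcpy(hb, db, sizeof hb, cudaMemcpyDeviceToHost);
    for (int f = 0; f < N_FITS; ++f) {
        printf("fit %d coefficients:", f);
        for (int j = 0; j < N_FEAT; ++j) printf(" %.3f", hb[f * N_FEAT + j]);
        printf("\n");
    }
    cudaFree(dX); cudaFree(dy); cudaFree(db);
    return 0;
}
```

GENRE's own parallelization and memory layout are more involved than this one-thread-per-fit scheme; the point of the sketch is only that each elastic-net fit is an independent problem, which is what makes batching many fits onto a GPU attractive.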