Work-in-Progress: Hierarchical Ensemble Learning for Resource-Aware FPGA Computing

Recent years have witnessed rapid development in hardware/software co-design that integrates machine learning (ML) models with hardware systems [1]–[4]. Despite the booming trend of neuromorphic computing systems [2], decision trees still attract considerable attention in the ML community, from both software [5], [6] and hardware [1], [7], [8] perspectives. A single tree model, however, may not be as accurate as desired. One effective remedy is to apply ensemble methods [5]–[7], [9]–[11], such as the Random Forest (RF) [5]. Larger tree ensembles typically improve accuracy but demand more computational resources, especially in hardware implementations, and many methods have been proposed to address this issue. The motivation arises from the fact that tree models usually demand far less computation than neural networks, making them highly attractive for wearable devices and embedded systems at the edge nodes of the IoT [7], [11], [12]. Moreover, owing to their inherent structure, tree ensembles are ideal for exploiting computational parallelism on FPGAs [1], [4], [7]–[9], [11], [13].
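To make the accuracy-versus-resource trade-off concrete, the following minimal sketch (not part of this work; it assumes scikit-learn's RandomForestClassifier and the bundled breast-cancer dataset purely for illustration) trains Random Forests of increasing size and reports test accuracy alongside the total number of tree nodes, a rough software-level proxy for the comparator and memory resources an FPGA implementation of the ensemble would consume.

```python
# Minimal illustrative sketch (assumption: scikit-learn is available).
# Shows how accuracy and a rough resource proxy both grow with ensemble size.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n_trees in (1, 10, 100):
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    rf.fit(X_train, y_train)
    # Total node count across all trees: a crude software proxy for the
    # comparators and memory an FPGA realization of the ensemble would need.
    nodes = sum(est.tree_.node_count for est in rf.estimators_)
    print(f"{n_trees:3d} trees: test accuracy = {rf.score(X_test, y_test):.3f}, "
          f"total decision nodes = {nodes}")
```

In general, accuracy tends to saturate as more trees are added while the node count grows roughly linearly with ensemble size; this is exactly the resource pressure that motivates hardware-aware ensemble design.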

[1] J. Myers et al., "F1: Intelligent energy-efficient systems at the edge of IoT," in IEEE International Solid-State Circuits Conference (ISSCC), 2018.

[2] P. Zhang et al., "Automated systolic array architecture synthesis for high throughput CNN inference on FPGAs," in 54th ACM/EDAC/IEEE Design Automation Conference (DAC), 2017.

[3] J. Langford et al., "Scaling up machine learning: parallel and distributed approaches," KDD '11 Tutorials, 2011.

[4] R. D. Blanton et al., "Detection of illegitimate access to JTAG via statistical learning in chip," in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2015.

[5] R. D. Blanton et al., "Ensemble Reduction via Logic Minimization," ACM Transactions on Design Automation of Electronic Systems (TODAES), 2016.

[6] R. J. R. Struharik et al., "Decision tree ensemble hardware accelerators for embedded applications," in IEEE 13th International Symposium on Intelligent Systems and Informatics (SISY), 2015.

[7] X. Lin et al., "Random Forest Architectures on FPGA for Multiple Applications," in ACM Great Lakes Symposium on VLSI (GLSVLSI), 2017.

[8] C.-S. Bouganis et al., "Accelerating Random Forest training process using FPGA," in 23rd International Conference on Field Programmable Logic and Applications (FPL), 2013.

[9] L. Breiman, "Random Forests," Machine Learning, 2001.

[10] Z.-H. Zhou and J. Feng, "Deep Forest: Towards An Alternative to Deep Neural Networks," in IJCAI, 2017.

[11] L. Rokach, "Ensemble-based classifiers," Artificial Intelligence Review, 2010.

[12] A. N. Choudhary et al., "An FPGA Implementation of Decision Tree Classification," in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2007.

[13] S. Venkataramani et al., "Invited: Accelerator design for deep learning training," in 54th ACM/EDAC/IEEE Design Automation Conference (DAC), 2017.