A wide variety of training approaches exists for spiking neural networks intended for neuromorphic deployment. However, it is often unclear how these training algorithms perform or compare when applied across multiple neuromorphic hardware platforms and multiple datasets. In this work, we present a software framework for comparing four neuromorphic training algorithms across three neuromorphic simulators and four simple classification tasks. We introduce an approach for training a spiking neural network using a decision tree, and we compare this approach with training algorithms based on evolutionary optimization, back-propagation, and reservoir computing. We present a hyperparameter optimization approach for tuning each algorithm's hyperparameters, and we show that the optimized hyperparameters depend on the processor, the algorithm, and the classification task. Finally, we compare the performance of the optimized algorithms across multiple metrics, including accuracy, training time, and resulting network size, and we show that no single training algorithm performs best across all datasets and performance metrics.
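The per-algorithm hyperparameter tuning described above can be pictured as a search loop over candidate configurations. The following is a minimal sketch of one such loop using generic random search; the search space, the `train_and_eval` callback, and all parameter names are illustrative assumptions for exposition, not the framework or optimization method used in the paper.

```python
import random

# Hypothetical search space for one training algorithm; the parameter
# names and values are illustrative, not taken from the paper.
SEARCH_SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "population_size": [50, 100, 200],
    "num_epochs": [10, 50, 100],
}

def sample_config(space):
    """Draw one random hyperparameter configuration from the space."""
    return {name: random.choice(values) for name, values in space.items()}

def random_search(train_and_eval, space, trials=25, seed=0):
    """Evaluate `trials` random configurations and keep the best one.

    `train_and_eval` is an assumed callback that trains a spiking
    network on one simulator/task pair for a given configuration and
    returns a validation accuracy.
    """
    random.seed(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(trials):
        config = sample_config(space)
        score = train_and_eval(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score
```

Because the best-scoring configuration is selected separately for each processor, algorithm, and task combination, a loop of this shape naturally yields the per-combination optimized hyperparameters that the comparison in the abstract refers to.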