Decision trees outperform feed-forward back-propagation neural networks on a specific class of problems

Feed-forward back-propagation neural networks are known to be universal approximators in a well-defined theoretical sense, but they can be time-consuming to train and require significant parameter tuning. Decision trees are generally faster and simpler to train, yet are widely assumed to offer lower predictive accuracy. In previous work we observed that decision trees tended to outperform feed-forward back-propagation networks on a particular dataset. Here we describe a class of problems that is extremely difficult for feed-forward back-propagation networks but relatively simple for decision trees, and we illustrate this class with experiments on synthetic datasets. This characterization helps practitioners decide when to employ which type of classifier.
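As a minimal sketch of the kind of synthetic task the abstract alludes to (the specific datasets and generators here are hypothetical, not taken from the paper): suppose the label depends on a single sharp, axis-aligned threshold on one of many features, with the remaining features acting as noise. A decision tree recovers such a rule with a single split, while a smooth-activation network must approximate a step function. The exhaustive stump search below stands in for the first split of a tree learner:

```python
import random

random.seed(0)

# Hypothetical synthetic task in the spirit of the abstract's "hard for
# nets, easy for trees" class: the label depends only on whether feature 3
# exceeds 0.5; the other nine features are pure noise.
N_FEATURES, N_SAMPLES = 10, 200
TRUE_FEATURE, TRUE_THRESHOLD = 3, 0.5

X = [[random.random() for _ in range(N_FEATURES)] for _ in range(N_SAMPLES)]
y = [int(row[TRUE_FEATURE] > TRUE_THRESHOLD) for row in X]

def fit_stump(X, y):
    """Exhaustively search for the single (feature, threshold) split that
    maximizes training accuracy -- the first step of a decision-tree learner."""
    best = (0, 0.0, -1.0)  # (feature index, threshold, accuracy)
    for j in range(len(X[0])):
        for t in (row[j] for row in X):  # candidate thresholds at data points
            acc = sum(int(row[j] > t) == label
                      for row, label in zip(X, y)) / len(y)
            if acc > best[2]:
                best = (j, t, acc)
    return best

feature, threshold, accuracy = fit_stump(X, y)
# A depth-1 tree recovers the generating feature and fits the rule exactly.
print(feature, accuracy)  # → 3 1.0
```

Because the threshold candidates include the largest sampled value below the true cutoff, the stump separates the classes perfectly; a network with sigmoid units, by contrast, can only approach this step function in the limit of large weights.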