Improving Performance of Decision Tree Algorithms with Multi-edited Nearest Neighbor Rule

This paper proposes a new method, based on the multi-edited nearest neighbor rule, that prevents decision tree algorithms from growing trees of unnecessarily large size and thereby partially alleviates the problem of "over-training" (overfitting). For this purpose, two useful properties of the multi-edited nearest neighbor rule are investigated. Experiments show that the proposed method can drastically reduce the size of the resulting trees, significantly improve their understandability, and at the same time improve test accuracy when the control parameter takes an appropriate value.
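
The paper's exact editing procedure and control parameter are not reproduced here; the following is only a minimal sketch of the general idea, assuming a Wilson-style multi-editing pass (repeatedly discarding training samples misclassified by a nearest-neighbor rule trained on a different block) applied before inducing a decision tree. The dataset and the parameters n_blocks, k, and max_passes are illustrative assumptions, not values from the paper.

```python
# Sketch: multi-edit the training set with a nearest-neighbor rule, then grow
# a decision tree on the edited set and compare tree size / test accuracy with
# a tree grown on the raw training set. Parameters below are illustrative only.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier


def multi_edit(X, y, n_blocks=3, k=1, max_passes=10, rng=None):
    """Repeatedly remove training samples misclassified by a k-NN classifier
    trained on a different block, until a full pass removes nothing."""
    rng = np.random.default_rng(rng)
    for _ in range(max_passes):
        idx = rng.permutation(len(X))
        blocks = np.array_split(idx, n_blocks)
        keep = []
        for i, block in enumerate(blocks):
            ref = blocks[(i + 1) % n_blocks]          # classify block i using block i+1
            knn = KNeighborsClassifier(n_neighbors=k).fit(X[ref], y[ref])
            correct = knn.predict(X[block]) == y[block]
            keep.append(block[correct])               # drop misclassified samples
        keep = np.concatenate(keep)
        if len(keep) == len(X):                       # nothing removed: editing converged
            break
        X, y = X[keep], y[keep]
    return X, y


X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

X_ed, y_ed = multi_edit(X_tr, y_tr, rng=0)

for name, (Xf, yf) in {"raw": (X_tr, y_tr), "multi-edited": (X_ed, y_ed)}.items():
    tree = DecisionTreeClassifier(random_state=0).fit(Xf, yf)
    print(f"{name:12s} nodes={tree.tree_.node_count:3d} "
          f"test acc={tree.score(X_te, y_te):.3f}")
```

In this sketch the edited training set typically yields a noticeably smaller tree, which is the effect the abstract describes; how closely this mirrors the paper's results depends on its specific editing scheme and control parameter.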