Random Forests--Random Features
Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error of a forest converges a.s. to a limit as the number of trees in the forest becomes large. The error of a forest of tree classifiers depends on the strength of the individual trees and on the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in the splitting. These ideas are also applicable to regression.
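The abstract's central device, trying only a random subset of features as split candidates at each node, is directly exposed in modern libraries. The sketch below is not the report's own code; it is a minimal illustration assuming scikit-learn, whose RandomForestClassifier takes max_features (the size of the random feature subset considered at each node) and oob_score (an out-of-bag error estimate, one of the "internal estimates" the abstract refers to). The dataset is synthetic and purely illustrative.

# Minimal sketch (assumes scikit-learn; illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data standing in for any classification task.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# n_estimators: number of trees in the forest.
# max_features="sqrt": at each node, only sqrt(n_features) randomly
# chosen features are candidates for the split.
# oob_score=True: compute an out-of-bag estimate of generalization
# error, an internal estimate that needs no separate test set.
forest = RandomForestClassifier(n_estimators=100, max_features="sqrt",
                                oob_score=True, random_state=0)
forest.fit(X, y)
print("out-of-bag accuracy estimate:", forest.oob_score_)

Varying max_features here corresponds to the abstract's study of the response to increasing the number of features used in the splitting: larger subsets tend to strengthen individual trees but increase the correlation between them.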