ESAIM: PS
Volume 22, 2018
Pages 96–128
DOI: https://doi.org/10.1051/ps/2018008
Published online: 14 December 2018
1. S. Arlot and R. Genuer, Analysis of Purely Random Forests Bias. Preprint arXiv:1407.3939 (2014).
2. G. Biau, Analysis of a random forests model. J. Mach. Learn. Res. 13 (2012) 1063–1095.
3. G. Biau and L. Devroye, Cellular tree classifiers, in Algorithmic Learning Theory. Springer, Cham (2014) 8–17.
4. G. Biau, L. Devroye and G. Lugosi, Consistency of random forests and other averaging classifiers. J. Mach. Learn. Res. 9 (2008) 2015–2033.
5. L. Breiman, Random forests. Mach. Learn. 45 (2001) 5–32.
6. L. Breiman, J.H. Friedman, R.A. Olshen and C.J. Stone, Classification and Regression Trees. Chapman & Hall/CRC, Boca Raton (1984).
7. P. Bühlmann, Bagging, boosting and ensemble methods, in Handbook of Computational Statistics. Springer, Berlin, Heidelberg (2012) 985–1022.
8. M. Denil, D. Matheson and N. de Freitas, Consistency of online random forests, in Proceedings of the 30th International Conference on Machine Learning (ICML'13), Vol. 28, Atlanta, GA, USA, June 16–21 (2013) 1256–1264.
9. M. Denil, D. Matheson and N. de Freitas, Narrowing the gap: random forests in theory and in practice, in International Conference on Machine Learning (ICML) (2014).
10. L. Devroye, L. Györfi and G. Lugosi, A Probabilistic Theory of Pattern Recognition. Springer, New York (1996).
11. R. Díaz-Uriarte and S. Alvarez de Andrés, Gene selection and classification of microarray data using random forest. BMC Bioinform. 7 (2006) 1–13.
12. M. Fernández-Delgado, E. Cernadas, S. Barro and D. Amorim, Do we need hundreds of classifiers to solve real world classification problems? J. Mach. Learn. Res. 15 (2014) 3133–3181.
13. R. Genuer, Variance reduction in purely random forests. J. Nonparametric Stat. 24 (2012) 543–562.
14. R. Genuer, J. Poggi and C. Tuleau-Malot, Variable selection using random forests. Pattern Recognit. Lett. 31 (2010) 2225–2236.
15. H. Ishwaran and U.B. Kogalur, Consistency of random survival forests. Stat. Probab. Lett. 80 (2010) 1056–1064.
16. L. Meier, S. Van de Geer and P. Bühlmann, High-dimensional additive modeling. Ann. Stat. 37 (2009) 3779–3821.
17. L. Mentch and G. Hooker, Quantifying uncertainty in random forests via confidence intervals and hypothesis tests. J. Mach. Learn. Res. 17 (2015) 841–881.
18. Y. Qi, Random forest for bioinformatics, in Ensemble Machine Learning. Springer, Boston, MA (2012) 307–323.
19. G. Rogez, J. Rihan, S. Ramalingam, C. Orrite and P.H. Torr, Randomized trees for human pose detection, in IEEE Conference on Computer Vision and Pattern Recognition (2008) 1–8.
20. M. Sabzevari, G. Martínez-Muñoz and A. Suárez, Improving the Robustness of Bagging with Reduced Sampling Size. Université catholique de Louvain (2014).
21. E. Scornet, On the asymptotics of random forests. J. Multivar. Anal. 146 (2016) 72–83.
22. E. Scornet, G. Biau and J.-P. Vert, Consistency of random forests. Ann. Stat. 43 (2015) 1716–1741.
23. C.J. Stone, Optimal rates of convergence for nonparametric estimators. Ann. Stat. 8 (1980) 1348–1360.
24. C.J. Stone, Optimal global rates of convergence for nonparametric regression. Ann. Stat. 10 (1982) 1040–1053.
25. M. van der Laan, E.C. Polley and A.E. Hubbard, Super learner. Stat. Appl. Genet. Mol. Biol. 6 (2007).
26. S. Wager, Asymptotic Theory for Random Forests. Preprint arXiv:1405.0352 (2014).
27. S. Wager and S. Athey, Estimation and inference of heterogeneous treatment effects using random forests. J. Am. Stat. Assoc. (2018) 1–15.
28. S. Wager and G. Walther, Adaptive Concentration of Regression Trees, with Application to Random Forests. Preprint (2015).
29. F. Zaman and H. Hirose, Effect of subsampling rate on subbagging and related ensembles of stable classifiers, in International Conference on Pattern Recognition and Machine Intelligence. Springer (2009) 44–49.
