Free Access

ESAIM: PS, Volume 18 (2014), pp. 770-798
DOI: https://doi.org/10.1051/ps/2014006
Published online: 22 October 2014
- S. Arlot and P. Bartlett, Margin adaptive model selection in statistical learning. Bernoulli 17 (2011) 687–713.
- L. Birgé and P. Massart, Minimal penalties for Gaussian model selection. Probab. Theory Relat. Fields 138 (2007) 33–73.
- L. Breiman, Random forests. Mach. Learn. 45 (2001) 5–32.
- L. Breiman and A. Cutler, Random forests. http://www.stat.berkeley.edu/users/breiman/RandomForests/ (2005).
- L. Breiman, J. Friedman, R. Olshen and C. Stone, Classification and Regression Trees. Chapman and Hall (1984).
- R. Díaz-Uriarte and S. Alvarez de Andrés, Gene selection and classification of microarray data using random forest. BMC Bioinform. 7 (2006) 1–13.
- B. Efron, T. Hastie, I. Johnstone and R. Tibshirani, Least angle regression. Ann. Stat. 32 (2004) 407–499.
- J. Fan and J. Lv, A selective overview of variable selection in high dimensional feature space. Stat. Sin. 20 (2010) 101–148.
- G.M. Furnival and R.W. Wilson, Regression by leaps and bounds. Technometrics 16 (1974) 499–511.
- R. Genuer, J.M. Poggi and C. Tuleau-Malot, Variable selection using random forests. Pattern Recognit. Lett. 31 (2010) 2225–2236.
- S. Gey, Margin adaptive risk bounds for classification trees. Preprint hal-00362281.
- S. Gey and E. Nédélec, Model selection for CART regression trees. IEEE Trans. Inf. Theory 51 (2005) 658–670.
- B. Ghattas and A. Ben Ishak, Sélection de variables pour la classification binaire en grande dimension : comparaisons et application aux données de biopuces. Journal de la Société Française de Statistique 149 (2008) 43–66.
- U. Grömping, Estimators of relative importance in linear regression based on variance decomposition. The American Statistician 61 (2007) 139–147.
- I. Guyon and A. Elisseeff, An introduction to variable and feature selection. J. Mach. Learn. Res. 3 (2003) 1157–1182.
- I. Guyon, J. Weston, S. Barnhill and V.N. Vapnik, Gene selection for cancer classification using support vector machines. Mach. Learn. 46 (2002) 389–422.
- T. Hastie, R. Tibshirani and J. Friedman, The Elements of Statistical Learning. Springer (2001).
- T. Hesterberg, N.H. Choi, L. Meier and C. Fraley, Least angle and ℓ1 penalized regression: a review. Stat. Surv. 2 (2008) 61–93.
- R. Kohavi and G.H. John, Wrappers for feature subset selection. Artificial Intelligence 97 (1997) 273–324.
- V. Koltchinskii, Local Rademacher complexities and oracle inequalities in risk minimization. Ann. Stat. 34 (2006) 2593–2656.
- E. Mammen and A. Tsybakov, Smooth discrimination analysis. Ann. Stat. 27 (1999) 1808–1829.
- P. Massart, Some applications of concentration inequalities to statistics. Annales de la Faculté des Sciences de Toulouse 2 (2000) 245–303.
- P. Massart, Concentration Inequalities and Model Selection. Lect. Notes Math. Springer (2003).
- P. Massart and E. Nédélec, Risk bounds for statistical learning. Ann. Stat. 34 (2006).
- J.M. Poggi and C. Tuleau, Classification supervisée en grande dimension. Application à l'agrément de conduite automobile. Revue de Statistique Appliquée LIV (2006) 41–60.
- E. Rio, Une inégalité de Bennett pour les maxima de processus empiriques. Ann. Inst. Henri Poincaré, Probab. Stat. 38 (2002) 1053–1057.
- A. Saltelli, K. Chan and M. Scott, Sensitivity Analysis. Wiley (2000).
- M. Sauvé, Histogram selection in non-Gaussian regression. ESAIM: PS 13 (2009) 70–86.
- M. Sauvé and C. Tuleau-Malot, Variable selection through CART. Preprint hal-00551375.
- I.M. Sobol, Sensitivity estimates for nonlinear mathematical models. Math. Mod. Comput. Experiment 1 (1993) 271–280.
- R. Tibshirani, Regression shrinkage and selection via the Lasso. J. R. Stat. Soc. Ser. B 58 (1996) 267–288.
- A.B. Tsybakov, Optimal aggregation of classifiers in statistical learning. Ann. Stat. 32 (2004) 135–166.