Open Access
ESAIM: PS
Volume 28, 2024
Page(s) 227 - 257
DOI https://doi.org/10.1051/ps/2024006
Published online 24 May 2024
  1. Y. Freund and R.E. Schapire, Adaptive game playing using multiplicative weights. Games Econ. Behav. 29 (1999) 79–103.
  2. S. Dudoit, Y.H. Yang, M.J. Callow and T.P. Speed, Statistical methods for identifying differentially expressed genes in replicated cDNA microarray experiments. Statistica Sinica 12 (2002) 111–139.
  3. J. Bergstra, N. Casagrande, D. Erhan, D. Eck and B. Kégl, Aggregate features and AdaBoost for music classification. Mach. Learn. 65 (2006) 473–484.
  4. J. Friedman, T. Hastie and R. Tibshirani, Additive logistic regression: a statistical view of boosting. Ann. Statist. 28 (2000) 337–407.
  5. J.H. Friedman, Greedy function approximation: a gradient boosting machine. Ann. Statist. 29 (2001) 1189–1232.
  6. G. Ridgeway, Generalized boosted models: a guide to the gbm package (2007). URL https://cran.r-project.org/web/packages/gbm/vignettes/gbm.pdf.
  7. T. Chen and C. Guestrin, XGBoost: a scalable tree boosting system, in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA. ACM (2016) 785–794. ISBN 978-1-4503-4232-2.
  8. R.E. Schapire and Y. Freund, Boosting: Foundations and Algorithms. Cambridge University Press (2012).
  9. G. Biau and B. Cadre, Optimization by gradient boosting (supplementary material), in Advances in Contemporary Statistics and Econometrics: Festschrift in Honor of Christine Thomas-Agnan, edited by A. Daouia and A. Ruiz-Gazen. Springer, Cham (2021) 23–44.
  10. L. Breiman, Population theory for boosting ensembles. Ann. Statist. 32 (2004) 1–11.
  11. T. Zhang and B. Yu, Boosting with early stopping: convergence and consistency. Ann. Statist. 33 (2005) 1538–1579.
  12. P.L. Bartlett and M. Traskin, AdaBoost is consistent. J. Mach. Learn. Res. 8 (2007) 2347–2368.
  13. P. Bühlmann and B. Yu, Boosting with the L2 loss: regression and classification. J. Am. Statist. Assoc. 98 (2003) 324–339.
  14. S.N. Ethier and T.G. Kurtz, Markov Processes. Wiley Series in Probability and Mathematical Statistics: Probability and Mathematical Statistics. John Wiley & Sons, Inc., New York (1986).
  15. D.W. Stroock and S.R.S. Varadhan, Multidimensional Diffusion Processes. Classics in Mathematics. Springer-Verlag, Berlin (2006). Reprint of the 1997 edition.
  16. A. Dieuleveut, Stochastic approximation in Hilbert spaces. Ph.D. thesis, Université Paris Sciences et Lettres (2017). NNT: 2017PSLEE059. tel-01705522v2.
  17. H. Maennel, O. Bousquet and S. Gelly, Gradient descent quantizes ReLU network features. Preprint (2018).
  18. K. Lyu and J. Li, Gradient descent maximizes the margin of homogeneous neural networks, in International Conference on Learning Representations (2020).
  19. S.L. Smith, B. Dherin, D.G.T. Barrett and S. De, On the origin of implicit regularization in stochastic gradient descent, in International Conference on Learning Representations (2021).
  20. P.-A. Cornillon, N.W. Hengartner and E. Matzner-Løber, Recursive bias estimation for multivariate regression smoothers. ESAIM: PS 18 (2014) 483–502.
  21. E.A. Nadaraya, On estimating regression. Theory Probab. Appl. 9 (1964) 141–142.
  22. G.S. Watson, Smooth regression analysis. Sankhyā Ser. A 26 (1964) 359–372.
  23. G. Wahba, Spline Models for Observational Data. CBMS-NSF Regional Conference Series in Applied Mathematics. Society for Industrial and Applied Mathematics (1990).
  24. L. Györfi, M. Kohler, A. Krzyżak and H. Walk, A Distribution-free Theory of Nonparametric Regression. Springer Series in Statistics, Springer-Verlag, New York (2002).
  25. R. Horn and C. Johnson, Matrix Analysis, 2nd edn. Cambridge University Press, Cambridge (2013).
  26. J.H. Friedman, Stochastic gradient boosting. Comput. Statist. Data Anal. 38 (2002) 367–378.
  27. P. Billingsley, Convergence of Probability Measures. Wiley Series in Probability and Statistics: Probability and Statistics, 2nd edn. John Wiley & Sons, Inc., New York (1999).
  28. M. Redmond, Communities and Crime. UCI Machine Learning Repository (2009).
  29. T. Apostol, Calculus. Vol. II: Multi-variable Calculus and Linear Algebra, with Applications to Differential Equations and Probability. Blaisdell International Textbook Series. Xerox College Publ. (1969).
  30. R. Bellman, Stability Theory of Differential Equations. Dover Books on Intermediate and Advanced Mathematics. Dover Publications (1969).
  31. R. Bellman, Introduction to Matrix Analysis, 2nd edn. Classics in Applied Mathematics. Society for Industrial and Applied Mathematics (1997).
  32. UCI Machine Learning Repository. DOI: https://doi.org/10.24432/C53W3X.
