Free Access
Volume 17, 2013
Page(s) 650 - 671
Published online 04 November 2013
  1. P.L. Bartlett, S. Mendelson and J. Neeman, ℓ1-regularized linear regression: persistence and oracle inequalities. Probab. Theory Relat. Fields. Springer (2011). [Google Scholar]
  2. J.P. Baudry, Sélection de Modèle pour la Classification Non Supervisée. Choix du Nombre de Classes. Ph.D. thesis, Université Paris-Sud 11, France (2009). [Google Scholar]
  3. P.J. Bickel, Y. Ritov and A.B. Tsybakov, Simultaneous analysis of Lasso and Dantzig selector. Ann. Stat. 37 (2009) 1705–1732. [Google Scholar]
  4. S. Boucheron, G. Lugosi and P. Massart, Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press (2013). [Google Scholar]
  5. P. Bühlmann and S. van de Geer, On the conditions used to prove oracle results for the Lasso. Electron. J. Stat. 3 (2009) 1360–1392. [Google Scholar]
  6. E. Candès and T. Tao, The Dantzig selector: statistical estimation when p is much larger than n. Ann. Stat. 35 (2007) 2313–2351. [Google Scholar]
  7. S. Cohen and E. Le Pennec, Conditional Density Estimation by Penalized Likelihood Model Selection and Applications, RR-7596. INRIA (2011). [Google Scholar]
  8. B. Efron, T. Hastie, I. Johnstone and R. Tibshirani, Least Angle Regression. Ann. Stat. 32 (2004) 407–499. [CrossRef] [MathSciNet] [Google Scholar]
  9. M. Hebiri, Quelques questions de sélection de variables autour de l’estimateur Lasso. Ph.D. Thesis, Université Paris Diderot, Paris 7, France (2009). [Google Scholar]
  10. C. Huang, G.H.L. Cheang and A.R. Barron, Risk of penalized least squares, greedy selection and ℓ1-penalization for flexible function libraries. Submitted to the Annals of Statistics (2008). [Google Scholar]
  11. P. Massart, Concentration inequalities and model selection. Ecole d’été de Probabilités de Saint-Flour 2003. Lect. Notes Math. Springer, Berlin-Heidelberg (2007). [Google Scholar]
  12. P. Massart and C. Meynet, The Lasso as an ℓ1-ball model selection procedure. Electron. J. Stat. 5 (2011) 669–687. [Google Scholar]
  13. C. Maugis and B. Michel, A non asymptotic penalized criterion for Gaussian mixture model selection. ESAIM: PS 15 (2011) 41–68. [Google Scholar]
  14. G. McLachlan and D. Peel, Finite Mixture Models. Wiley, New York (2000). [Google Scholar]
  15. N. Meinshausen and B. Yu, Lasso-type recovery of sparse representations for high-dimensional data. Ann. Stat. 37 (2009) 246–270. [Google Scholar]
  16. R.A. Redner and H.F. Walker, Mixture densities, maximum likelihood and the EM algorithm. SIAM Rev. 26 (1984) 195–239. [CrossRef] [MathSciNet] [Google Scholar]
  17. P. Rigollet and A. Tsybakov, Exponential screening and optimal rates of sparse estimation. Ann. Stat. 39 (2011) 731–771. [CrossRef] [Google Scholar]
  18. N. Städler, P. Bühlmann and S. van de Geer, ℓ1-penalization for mixture regression models. Test 19 (2010) 209–256. [CrossRef] [MathSciNet] [Google Scholar]
  19. R. Tibshirani, Regression shrinkage and selection via the Lasso. J. Roy. Stat. Soc. Ser. B 58 (1996) 267–288. [Google Scholar]
  20. M.R. Osborne, B. Presnell and B.A. Turlach, On the Lasso and its dual. J. Comput. Graph. Stat. 9 (2000) 319–337. [Google Scholar]
  21. M.R. Osborne, B. Presnell and B.A. Turlach, A new approach to variable selection in least squares problems. IMA J. Numer. Anal. 20 (2000) 389–404. [CrossRef] [MathSciNet] [Google Scholar]
  22. A. van der Vaart and J. Wellner, Weak Convergence and Empirical Processes. Springer, Berlin (1996). [Google Scholar]
  23. V.N. Vapnik, Estimation of Dependencies Based on Empirical Data. Springer, New York (1982). [Google Scholar]
  24. V.N. Vapnik, Statistical Learning Theory. J. Wiley, New York (1998). [Google Scholar]
  25. P. Zhao and B. Yu, On model selection consistency of Lasso. J. Mach. Learn. Res. 7 (2006) 2541–2563. [Google Scholar]
