ESAIM: PS
Volume 20, 2016
Pages 309–331
DOI: https://doi.org/10.1051/ps/2015020
Published online 05 August 2016
  1. H. Akaike, Information theory and an extension of the maximum likelihood principle. In Second International Symposium on Information Theory (Tsahkadsor, 1971). Akadémiai Kiadó, Budapest (1973) 267–281.
  2. F. Bach, Self-concordant analysis for logistic regression. Electron. J. Statist. 4 (2010) 384–414.
  3. P.L. Bartlett, S. Mendelson and J. Neeman, ℓ1-regularized linear regression: persistence and oracle inequalities. Probab. Theory Relat. Fields 154 (2012) 193–224.
  4. P.J. Bickel, Y. Ritov and A.B. Tsybakov, Simultaneous analysis of Lasso and Dantzig selector. Ann. Statist. 37 (2009) 1705–1732.
  5. S. Boucheron, G. Lugosi and O. Bousquet, Concentration inequalities. Adv. Lect. Machine Learn. (2004) 208–240.
  6. F. Bunea, A.B. Tsybakov and M.H. Wegkamp, Aggregation and sparsity via ℓ1 penalized least squares. In Learning Theory. Vol. 4005 of Lect. Notes Comput. Sci. Springer, Berlin (2006) 379–391.
  7. F. Bunea, A.B. Tsybakov and M.H. Wegkamp, Aggregation for Gaussian regression. Ann. Statist. 35 (2007) 1674–1697.
  8. F. Bunea, A. Tsybakov and M. Wegkamp, Sparsity oracle inequalities for the Lasso. Electron. J. Statist. 1 (2007) 169–194.
  9. C. Chesneau and M. Hebiri, Some theoretical results on the grouped variables lasso. Math. Methods Statist. 17 (2008) 317–326.
  10. J. Friedman, T. Hastie and R. Tibshirani, Regularization paths for generalized linear models via coordinate descent. J. Statist. Software 33 (2010) 1.
  11. M. García-Magariños, A. Antoniadis, R. Cao and W. González-Manteiga, Lasso logistic regression, GSoft and the cyclic coordinate descent algorithm: application to gene expression data. Stat. Appl. Genet. Mol. Biol. 9 (2010) 30.
  12. T. Hastie, Non-parametric logistic regression. SLAC PUB-3160 (1983).
  13. J. Huang, S. Ma and C.-H. Zhang, The iterated lasso for high-dimensional logistic regression. Technical Report 392 (2008).
  14. J. Huang, J.L. Horowitz and F. Wei, Variable selection in nonparametric additive models. Ann. Statist. 38 (2010) 2282.
  15. G.M. James, P. Radchenko and J. Lv, DASSO: connections between the Dantzig selector and lasso. J. Roy. Statist. Soc. Ser. B 71 (2009) 127–142.
  16. K. Knight and W. Fu, Asymptotics for lasso-type estimators. Ann. Statist. 28 (2000) 1356–1378.
  17. K. Lounici, M. Pontil, A.B. Tsybakov and S. van de Geer, Taking advantage of sparsity in multi-task learning. In COLT'09 (2009).
  18. K. Lounici, M. Pontil, S. van de Geer and A.B. Tsybakov, Oracle inequalities and optimal inference under group sparsity. Ann. Statist. 39 (2011) 2164–2204.
  19. P. Massart, Concentration inequalities and model selection. Lectures from the 33rd Summer School on Probability Theory held in Saint-Flour, July 6–23, 2003. With a foreword by Jean Picard. Vol. 1896 of Lect. Notes Math. Springer, Berlin (2007).
  20. P. Massart and C. Meynet, The Lasso as an ℓ1-ball model selection procedure. Electron. J. Statist. 5 (2011) 669–687.
  21. J. McAuley, J. Ming, D. Stewart and P. Hanna, Subband correlation and robust speech recognition. IEEE Trans. Speech Audio Process. 13 (2005) 956–964.
  22. L. Meier, S. van de Geer and P. Bühlmann, The group Lasso for logistic regression. J. Roy. Statist. Soc. Ser. B 70 (2008) 53–71.
  23. L. Meier, S. van de Geer and P. Bühlmann, High-dimensional additive modeling. Ann. Statist. 37 (2009) 3779–3821.
  24. N. Meinshausen and P. Bühlmann, High-dimensional graphs and variable selection with the lasso. Ann. Statist. 34 (2006) 1436–1462.
  25. N. Meinshausen and B. Yu, Lasso-type recovery of sparse representations for high-dimensional data. Ann. Statist. 37 (2009) 246–270.
  26. Y. Nardi and A. Rinaldo, On the asymptotic properties of the group lasso estimator for linear models. Electron. J. Statist. 2 (2008) 605–633.
  27. S.N. Negahban, P. Ravikumar, M.J. Wainwright and B. Yu, A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statist. Sci. 27 (2012) 538–557.
  28. Y. Nesterov and A. Nemirovskii, Interior-point polynomial algorithms in convex programming. Vol. 13 of SIAM Studies in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA (1994).
  29. M.R. Osborne, B. Presnell and B.A. Turlach, A new approach to variable selection in least squares problems. IMA J. Numer. Anal. 20 (2000) 389–403.
  30. M.Y. Park and T. Hastie, L1-regularization path algorithm for generalized linear models. J. Roy. Statist. Soc. Ser. B 69 (2007) 659–677.
  31. B. Tarigan and S.A. van de Geer, Classifiers of support vector machine type with ℓ1 complexity regularization. Bernoulli 12 (2006) 1045–1076.
  32. R. Tibshirani, Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B 58 (1996) 267–288.
  33. P. Ravikumar, J. Lafferty, H. Liu and L. Wasserman, Sparse additive models. J. Roy. Statist. Soc. Ser. B 71 (2009) 1009–1030.
  34. G. Schwarz, Estimating the dimension of a model. Ann. Statist. 6 (1978) 461–464.
  35. S.A. van de Geer, High-dimensional generalized linear models and the lasso. Ann. Statist. 36 (2008) 614–645.
  36. S.A. van de Geer and P. Bühlmann, On the conditions used to prove oracle results for the Lasso. Electron. J. Statist. 3 (2009) 1360–1392.
  37. T.T. Wu, Y.F. Chen, T. Hastie, E. Sobel and K. Lange, Genome-wide association analysis by lasso penalized logistic regression. Bioinformatics 25 (2009) 714–721.
  38. M. Yuan and Y. Lin, Model selection and estimation in regression with grouped variables. J. Roy. Statist. Soc. Ser. B 68 (2006) 49–67.
  39. C.-H. Zhang and J. Huang, The sparsity and bias of the LASSO selection in high-dimensional linear regression. Ann. Statist. 36 (2008) 1567–1594.
  40. P. Zhao and B. Yu, On model selection consistency of Lasso. J. Mach. Learn. Res. 7 (2006) 2541–2563.
  41. H. Zou, The adaptive lasso and its oracle properties. J. Am. Statist. Assoc. 101 (2006) 1418–1429.
