ESAIM: PS
Volume 25, 2021
Page(s) 298 - 324
DOI https://doi.org/10.1051/ps/2021010
Published online 12 July 2021
  1. G. Andrew and J. Gao, Scalable training of L1-regularized log-linear models. In Proc. 24th Int. Conf. Mach. Learning (2007) 33–40.
  2. O. Banerjee, L. El Ghaoui and A. D'Aspremont, Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. J. Mach. Learn. Res. 9 (2008) 485–516.
  3. R. Baraniuk, M. Davenport, R. DeVore and M. Wakin, A simple proof of the restricted isometry property for random matrices. Constr. Approx. 28 (2008) 253–263.
  4. S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press (2004).
  5. T. Cai, H. Li, W. Liu and J. Xie, Covariate-adjusted precision matrix estimation with an application in genetical genomics. Biometrika 100 (2013) 139–156.
  6. T. Cai, W. Liu and X. Luo, A constrained ℓ1 minimization approach to sparse precision matrix estimation. J. Am. Stat. Assoc. 106 (2011) 594–607.
  7. J. Chiquet, T. Mary-Huard and S. Robin, Structured regularization for conditional Gaussian graphical models. Stat. Comput. 27 (2017) 789–804.
  8. J. Fan, Y. Feng and Y. Wu, Network exploration via the adaptive Lasso and SCAD penalties. Ann. Appl. Stat. 3 (2009) 521–541.
  9. J. Friedman, T. Hastie and R. Tibshirani, Sparse inverse covariance estimation with the graphical Lasso. Biostatistics 9 (2008) 432–441.
  10. C. Giraud, Introduction to High-Dimensional Statistics. Chapman & Hall/CRC Monographs on Statistics & Applied Probability. Taylor & Francis (2014).
  11. T. Hastie, R. Tibshirani and M. Wainwright, Statistical Learning with Sparsity: The Lasso and Generalizations. Chapman & Hall/CRC Monographs on Statistics and Applied Probability. CRC Press (2015).
  12. R.A. Horn and C.R. Johnson, Matrix Analysis, 2nd ed. Cambridge University Press, Cambridge, New York (2012).
  13. C. Johnson, A. Jalali and P. Ravikumar, High-dimensional sparse inverse covariance estimation using greedy methods. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics. Vol. 22 of Proceedings of Machine Learning Research. PMLR (2012) 574–582.
  14. W. Lee and Y. Liu, Simultaneous multiple response regression and inverse covariance matrix estimation via penalized Gaussian maximum likelihood. J. Multivariate Anal. 111 (2012) 241–255.
  15. Z. Lu, Smooth optimization approach for sparse covariance selection. SIAM J. Optim. 19 (2009) 1807–1827.
  16. M. Maathuis, M. Drton, S.L. Lauritzen and M. Wainwright, Handbook of Graphical Models. Chapman & Hall/CRC Handbooks of Modern Statistical Methods. CRC Press (2018).
  17. N. Meinshausen and P. Bühlmann, High-dimensional graphs and variable selection with the Lasso. Ann. Stat. 34 (2006) 1436–1462.
  18. F. Pascal, L. Bombrun, J.Y. Tourneret and Y. Berthoumieu, Parameter estimation for multivariate generalized Gaussian distributions. IEEE Trans. Signal Process. 61 (2013) 5960–5971.
  19. J. Peng, P. Wang, N. Zhou and J. Zhu, Partial correlation estimation by joint sparse regression models. J. Am. Stat. Assoc. 104 (2009) 735–746.
  20. J. Ramsay and B. Silverman, Functional Data Analysis, 2nd ed. Springer, New York (2006).
  21. P. Ravikumar, M. Wainwright, G. Raskutti and B. Yu, High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence. Electron. J. Stat. 5 (2011) 935–980.
  22. P. Rossi, G. Allenby and R. McCulloch, Bayesian Statistics and Marketing. Wiley Series in Probability and Statistics. Wiley (2012).
  23. A. Rothman, E. Levina and J. Zhu, Sparse multivariate regression with covariance estimation. J. Comput. Graph. Stat. 19 (2010) 947–962.
  24. M. Slawski, The structured elastic net for quantile regression and support vector classification. Stat. Comput. 22 (2012) 153–168.
  25. M. Slawski, W. zu Castell and G. Tutz, Feature selection guided by structural information. Ann. Appl. Stat. 4 (2010) 1056–1080.
  26. K.A. Sohn and S. Kim, Joint estimation of structured sparsity and output structure in multiple-output regression via inverse-covariance regularization. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics. Vol. 22 of Proceedings of Machine Learning Research. PMLR (2012) 1081–1089.
  27. J. Yin and H. Li, A sparse conditional Gaussian graphical model for analysis of genetical genomics data. Ann. Appl. Stat. 5 (2011) 2630–2650.
  28. M. Yuan and Y. Lin, Model selection and estimation in the Gaussian graphical model. Biometrika 94 (2007) 19–35.
  29. X.T. Yuan and T. Zhang, Partial Gaussian graphical model estimation. IEEE Trans. Inf. Theory 60 (2014) 1673–1687.
