ESAIM: PS
Volume 15 (2011), pp. 41–68
DOI: https://doi.org/10.1051/ps/2009004
Published online: 5 January 2012
1. H. Akaike, Information theory and an extension of the maximum likelihood principle, in Second International Symposium on Information Theory (Tsahkadsor, 1971), Akadémiai Kiadó, Budapest (1973) 267–281.
2. S. Arlot and P. Massart, Data-driven calibration of penalties for least-squares regression. J. Mach. Learn. Res. (2008) (to appear).
3. J.D. Banfield and A.E. Raftery, Model-based Gaussian and non-Gaussian clustering. Biometrics 49 (1993) 803–821.
4. A. Barron, L. Birgé and P. Massart, Risk bounds for model selection via penalization. Prob. Th. Rel. Fields 113 (1999) 301–413.
5. J.-P. Baudry, Clustering through model selection criteria. Poster session at One Day Statistical Workshop in Lisieux, http://www.math.u-psud.fr/ baudry, June (2007).
6. C. Biernacki, G. Celeux and G. Govaert, Assessing a mixture model for clustering with the integrated completed likelihood. IEEE Trans. Pattern Anal. Mach. Intell. 22 (2000) 719–725.
7. C. Biernacki, G. Celeux, G. Govaert and F. Langrognet, Model-based cluster and discriminant analysis with the mixmod software. Comput. Stat. Data Anal. 51 (2006) 587–600.
8. L. Birgé and P. Massart, Gaussian model selection. J. Eur. Math. Soc. 3 (2001) 203–268.
9. L. Birgé and P. Massart, A generalized Cp criterion for Gaussian model selection. Prépublication n° 647, Universités de Paris 6 et Paris 7 (2001).
10. L. Birgé and P. Massart, Minimal penalties for Gaussian model selection. Prob. Th. Rel. Fields 138 (2007) 33–73.
11. L. Birgé and P. Massart, From model selection to adaptive estimation, in Festschrift for Lucien Le Cam. Springer, New York (1997) 55–87.
12. C. Bouveyron, S. Girard and C. Schmid, High-dimensional data clustering. Comput. Stat. Data Anal. 52 (2007) 502–519.
13. K.P. Burnham and D.R. Anderson, Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach. Springer-Verlag, New York, 2nd edition (2002).
14. G. Castellan, Modified Akaike's criterion for histogram density estimation. Technical report, Université Paris-Sud 11 (1999).
15. G. Castellan, Density estimation via exponential model selection. IEEE Trans. Inf. Theory 49 (2003) 2052–2060.
16. G. Celeux and G. Govaert, Gaussian parsimonious clustering models. Pattern Recogn. 28 (1995) 781–793.
17. A.P. Dempster, N.M. Laird and D.B. Rubin, Maximum likelihood from incomplete data via the EM algorithm (with discussion). J. R. Stat. Soc. Ser. B 39 (1977) 1–38.
18. C.R. Genovese and L. Wasserman, Rates of convergence for the Gaussian mixture sieve. Ann. Stat. 28 (2000) 1105–1127.
19. S. Ghosal and A.W. van der Vaart, Entropies and rates of convergence for maximum likelihood and Bayes estimation for mixtures of normal densities. Ann. Stat. 29 (2001) 1233–1263.
20. C. Keribin, Consistent estimation of the order of mixture models. Sankhyā: The Indian Journal of Statistics, Series A 62 (2000) 49–66.
21. M.H. Law, M.A.T. Figueiredo and A.K. Jain, Simultaneous feature selection and clustering using mixture models. IEEE Trans. Pattern Anal. Mach. Intell. 26 (2004) 1154–1166.
22. E. Lebarbier, Detecting multiple change-points in the mean of Gaussian process by model selection. Signal Proc. 85 (2005) 717–736.
23. V. Lepez, Potentiel de réserves d'un bassin pétrolier: modélisation et estimation. Ph.D. thesis, Université Paris-Sud 11 (2002).
24. P. Massart, Concentration Inequalities and Model Selection. Springer, Berlin (2007). Lectures from the 33rd Summer School on Probability Theory held in Saint-Flour, July 6–23, 2003.
25. C. Maugis, Sélection de variables pour la classification non supervisée par mélanges gaussiens. Applications à l'étude de données transcriptomes. Ph.D. thesis, Université Paris-Sud 11 (2008).
26. C. Maugis, G. Celeux and M.-L. Martin-Magniette, Variable selection for clustering with Gaussian mixture models. Biometrics (2008) (to appear).
27. C. Maugis and B. Michel, Slope heuristics for variable selection and clustering via Gaussian mixtures. Technical Report 6550, INRIA (2008).
28. A.E. Raftery and N. Dean, Variable selection for model-based clustering. J. Am. Stat. Assoc. 101 (2006) 168–178.
29. G. Schwarz, Estimating the dimension of a model. Ann. Stat. 6 (1978) 461–464.
30. D. Serre, Matrices. Springer-Verlag, New York (2002).
31. M. Talagrand, Concentration of measure and isoperimetric inequalities in product spaces. Publ. Math. Inst. Hautes Étud. Sci. 81 (1995) 73–205.
32. M. Talagrand, New concentration inequalities in product spaces. Invent. Math. 126 (1996) 505–563.
33. F. Villers, Tests et sélection de modèles pour l'analyse de données protéomiques et transcriptomiques. Ph.D. thesis, Université Paris-Sud 11 (2007).
