In the context of finite mixture models, one considers the problem of classifying as many observations as possible into the classes of interest while controlling the classification error rate in these same classes. In analogy with statistical testing theory, different type I and type II-like classification error rates can be defined, along with their associated optimal rules, where optimality means minimizing the type II error rate while controlling the type I error rate at some nominal level. It is first shown that finding an optimal classification rule boils down to searching for an optimal region of the observation space in which to apply the classical Maximum A Posteriori (MAP) rule. Depending on the misclassification rate to be controlled, the shape of the optimal region is provided, along with a heuristic to compute the optimal classification rule in practice. In particular, a multiclass FDR-like optimal rule is defined and compared to the thresholded MAP rule used in most applications. It is shown on both simulated and real datasets that the FDR-like optimal rule may be significantly less conservative than the thresholded MAP rule.
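The thresholded MAP rule that serves as the baseline above can be sketched in a few lines: compute the posterior class probabilities under the fitted mixture, assign each observation to the class with the largest posterior, and abstain whenever that maximum falls below a fixed threshold. The sketch below, for a one-dimensional Gaussian mixture with known parameters, is illustrative only; the function name, the abstention label `-1`, and the threshold value are assumptions, not the paper's notation.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Density of a univariate Gaussian N(mu, sigma^2)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def thresholded_map(x, weights, mus, sigmas, threshold=0.9):
    """Classify each observation with the MAP rule, abstaining (label -1)
    whenever the largest posterior probability is below `threshold`."""
    x = np.asarray(x, dtype=float)
    # Joint densities pi_k * f_k(x), one row per mixture component k.
    joint = np.stack([w * gaussian_pdf(x, m, s)
                      for w, m, s in zip(weights, mus, sigmas)])
    post = joint / joint.sum(axis=0)           # posterior probabilities
    labels = post.argmax(axis=0)               # classical MAP rule
    labels[post.max(axis=0) < threshold] = -1  # abstain when uncertain
    return labels

# Two well-separated components: points near a component mean are
# classified, while the point halfway between them is rejected.
labels = thresholded_map([-3.0, 0.0, 3.0],
                         weights=[0.5, 0.5],
                         mus=[-3.0, 3.0],
                         sigmas=[1.0, 1.0],
                         threshold=0.9)
# → [0, -1, 1]
```

The rejection region here is the set where the maximal posterior is small, which is the intuition the paper refines: rather than fixing the threshold a priori, the optimal rules choose the region so as to control a prescribed type I-like (e.g. FDR-like) error rate.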
Funding source: Agence Nationale de la Recherche
Award Identifier / Grant number: ANR-16-CE40-0019
Award Identifier / Grant number: ANR-17-EUR-0007
Award Identifier / Grant number: ANR-19-CHIA-0021-01
Author contribution: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.
Research funding: G. Blanchard acknowledges support from Agence Nationale de la Recherche (ANR) via the project ANR-19-CHIA-0021-01 (BiSCottE), and the project ANR-16-CE40-0019 (SansSouci); and from the Franco-German University through the binational Doktorandenkolleg CDFA 01-18. GQE and IPS2 benefit from the support of the LabEx Saclay Plant Sciences-SPS (ANR-17-EUR-0007).
Conflict of interest statement: The authors declare no conflicts of interest regarding this article.
The online version of this article offers supplementary material (https://doi.org/10.1515/ijb-2020-0105).
© 2021 Walter de Gruyter GmbH, Berlin/Boston