Search Results

You are looking at 1-10 of 1,085 items for the search term "feature selection".

References 1. ABE, N., KUDO, M., TOYAMA, J., SHIMBO, M. 2006. Classifier-independent feature selection on the basis of divergence criterion. Pattern Analysis & Applications, 9(2), pp. 127-137. 2. CHRYSOSTOMOU, K. 2009. Wrapper Feature Selection. In J. Wang (Ed.), Encyclopedia of Data Warehousing and Mining, Second Edition, pp. 2103-2108. Hershey, PA: Information Science Reference. doi:10.4018/978-1-60566-010-3.ch322 3. DEVIJVER, P. A., KITTLER, J. 1982. Pattern Recognition: A Statistical Approach. Prentice Hall. 4. De VEAUX, R., Predictive Analytics

Jasleen Kaur. Feature selection for website fingerprinting. Technical Report 18-001, 2018. [43] Rishab Nithyanand, Xiang Cai, and Rob Johnson. Glove: A bespoke website fingerprinting defense. In Proceedings of the 13th Workshop on Privacy in the Electronic Society, pages 131–134. ACM, 2014. [44] Pierre Geurts, Damien Ernst, and Louis Wehenkel. Extremely randomized trees. Machine Learning, 63(1):3–42, 2006. [45] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D

Conference on Artificial Intelligence, 2002, pp. 249-256. 4. Das, K., K. Bhaduri, H. Kargupta. A Local Asynchronous Distributed Privacy Preserving Feature Selection Algorithm for Large Peer-To-Peer Networks. – Knowledge and Information Systems, Vol. 24, 2010, No 3, pp. 341-367. 5. Sheela, M. A., K. Vijayalakshmi. Partition Based Perturbation for Privacy Preserving Distributed Data Mining. – Cybernetics and Information Technologies, Vol. 17, 2017, No 2, pp. 44-55. 6. Skillicorn, D. B., S. M. McConnell. Distributed Prediction from Vertically Partitioned Data. – Journal

Networks Recipes in C++, Academic Press Inc, 1993. [9] Peterson D. A., Knight J. N., Kirby M. J., Anderson Ch. W., Thaut M. H., Feature Selection and Blind Source Separation in an EEG-Based Brain-Computer Interface, EURASIP Journal on Applied Signal Processing, 19, 2005, 3128-3140. [10] Pfurtscheller G., Flotzinger D., Kalcher J., Brain-computer interface – a new communication device for handicapped persons, Journal of Microcomputer Application, 16, 1993, 293-299. [11] Pfurtscheller G., Lopes da Silva F. H., Event-related EEG/MEG synchronization and

References 1. Yu, L., H. Liu. Efficient Feature Selection via Analysis of Relevance and Redundancy. – J. Mach. Learn. Res., Vol. 5, October 2004, pp. 1205-1224. 2. Gheyas, I. A., L. S. Smith. Feature Subset Selection in Large Dimensionality Domains. – Pattern Recognit., Vol. 43, January 2010, No 1, pp. 5-13. 3. Yang, Y., J. O. Pedersen. A Comparative Study on Feature Selection in Text Categorization. – In: Proc. of 14th International Conference on Machine Learning, ICML'97, 1997, pp. 412-420. 4. Yan, K., D. Zhang. Feature Selection and Analysis on Correlated

nuclei in breast cancer histopathology images, PLOS ONE 11(9): 1-15. Piórkowski, A. (2016). A statistical dominance algorithm for edge detection and segmentation of medical images, in E. Pietka et al. (Eds.), Information Technologies in Medicine, Advances in Intelligent Systems and Computing, Vol. 471, Springer, Cham, pp. 3-14. Roffo, G. (2016). Feature selection library (Matlab toolbox), arXiv:1607.01327. Ronneberger, O., Fischer, P. and Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation, CoRR: abs/1505.04597. Ruifrok, A.C. and Johnston

1 Introduction Languages change over time. Language change is inevitable. Such axiomatic statements have attained the status of fundamental truths in linguistics, and no one but the most Procrustean prescriptivist would doubt that language is a highly malleable aspect of humanity. But behind the accepted wisdom that language change is inexorable lies a contentious debate concerning the exact mechanisms that motivate linguistic feature selection and language change, with scholars from the fields of Second Language Acquisition (hereafter, SLA), Conversation Analysis

classification. In sentiment analysis, feature selection is a method to identify a subset of features that serves three goals: first, to reduce computational cost; second, to avoid overfitting; and third, to enhance the classification accuracy of the model [54]. Feature selection methods can be broadly divided into three categories: filter methods, wrapper methods, and embedded methods [40]. The filter method assesses the optimal subset of features by looking only at the intrinsic properties of the data: feature relevance scores are calculated, and low
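To make the filter approach described in this excerpt concrete, here is a minimal sketch (not taken from the cited works) that scores bag-of-words features against sentiment labels with a chi-squared test and keeps only the top-scoring terms; the toy corpus, the CountVectorizer defaults, and the k=2 cutoff are all illustrative assumptions.

```python
# Filter-method sketch: score each feature against the labels using only
# the data itself (no classifier in the loop), then keep the top scorers.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

# Toy sentiment corpus (placeholder data, for illustration only).
docs = ["great product, loved it",
        "terrible, waste of money",
        "loved the quality",
        "money wasted, terrible support"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)           # bag-of-words count matrix

selector = SelectKBest(chi2, k=2)            # chi-squared relevance scores
X_reduced = selector.fit_transform(X, labels)  # keep the 2 top features

kept = vectorizer.get_feature_names_out()[selector.get_support()]
print(kept)  # the two terms most associated with the labels
```

Because the scores depend only on the data and labels, never on a downstream classifier, this is a filter method in the sense the passage describes; wrapper methods would instead evaluate candidate subsets by training the model itself.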

References [1] R. Kohavi, G.H. John, Wrappers for feature subset selection, Artificial Intelligence, 97, pp. 273-324, 1997. [2] S.B. Thrun, The Monk’s problems: a performance comparison of different learning algorithms, Tech. Rept. CMU-CS-91-197, Carnegie Mellon University, Pittsburgh, PA, 1991. [3] G.H. John, Enhancements to the data mining process, Ph.D. Thesis, Computer Science Department, Stanford University, CA, 1997. [4] M. Venkatadri, K. Srinivasa Rao, A multiobjective genetic algorithm for feature selection in data mining, International Journal of

classification models themselves has been widely studied (Turney, 1995; Bousquet and Elisseeff, 2002; Waitman et al., 2006), the inconsistency of biomarker sets identified from different biomedical samples, or by different feature selection methods, has not been fully resolved. For instance, many reports in ovarian cancer research have proposed putative biomarkers derived from their own data mining techniques (Petricoin et al., 2002; Conrads et al., 2004; Fareley et al., 2008), but the overlap between the different sets is very small. These confusing outputs
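One simple way to quantify the disagreement this excerpt describes is the Jaccard index of the selected feature sets; the sketch below is purely illustrative, with placeholder gene names rather than results from the cited ovarian cancer studies.

```python
# Illustrative sketch: measure how much two feature-selection methods agree
# by the Jaccard index of their selected biomarker sets.
# The gene names are hypothetical placeholders, not real study results.

def jaccard(a: set, b: set) -> float:
    """|A ∩ B| / |A ∪ B|: 1.0 means identical sets, 0.0 means disjoint."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

markers_method_1 = {"GENE_A", "GENE_B", "GENE_C", "GENE_D"}
markers_method_2 = {"GENE_C", "GENE_E", "GENE_F", "GENE_G"}

print(f"overlap: {jaccard(markers_method_1, markers_method_2):.2f}")  # 0.14
```

A value near zero reproduces the "very small overlap" pattern the passage reports across biomarker studies.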