[1] J. Derrac, S. García and F. Herrera, "IFS-CoCo: Instance and feature selection based on cooperative coevolution with nearest neighbor rule," Pattern Recognition, vol. 43, no. 6, pp. 2082-2105, 2010. [DOI:10.1016/j.patcog.2009.12.012]
[2] H. Liu and L. Yu, "Toward Integrating Feature Selection Algorithms for Classification and Clustering," IEEE Transactions on Knowledge and Data Engineering, vol. 17, no. 4, pp. 491-502, 2005. [DOI:10.1109/TKDE.2005.66]
[3] V. Bolón-Canedo, N. Sánchez-Maroño and A. Alonso-Betanzos, "Recent advances and emerging challenges of feature selection in the context of big data," Knowledge-Based Systems, vol. 86, pp. 33-45, 2015. [DOI:10.1016/j.knosys.2015.05.014]
[4] L. C. Molina, L. Belanche and À. Nebot, "Feature Selection Algorithms: A Survey and Experimental Evaluation," in IEEE International Conference on Data Mining, 2002.
[5] J. Pouramini, B. Minaei-Bidgoli and M. Esmaeili, "A Novel One Sided Feature Selection Method for Imbalanced Text Classification," JSDP, vol. 16, no. 1, pp. 21-40, 2019 (in Persian). [DOI:10.29252/jsdp.16.1.21]
[6] P. S. Bradley, U. M. Fayyad and C. Reina, "Scaling clustering algorithms to large databases," in Proceedings of the Fourth International Conference on Knowledge Discovery & Data Mining, New York, 1998.
[7] H. Liu, H. Motoda and L. Yu, "A selective sampling approach to active feature selection," Artificial Intelligence, vol. 159, pp. 49-74, 2004. [DOI:10.1016/j.artint.2004.05.009]
[8] W. G. Cochran, Sampling Techniques, New York: Wiley, 1977.
[9] H. Liu and H. Motoda, Instance Selection and Construction for Data Mining, Boston, MA: Kluwer Academic, 2001. [DOI:10.1007/978-1-4757-3359-4]
[10] S. García, J. Derrac, J. R. Cano and F. Herrera, "Prototype Selection for Nearest Neighbor Classification: Taxonomy and Empirical Study," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 3, pp. 417-435, 2012. [DOI:10.1109/TPAMI.2011.142]
[11] S. García, J. Derrac, J. R. Cano and F. Herrera, "Prototype Selection for Nearest Neighbor Classification: Survey of Methods," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 3, pp. 417-435, 2012. [DOI:10.1109/TPAMI.2011.142]
[12] J. R. Cano, F. Herrera and M. Lozano, "On the combination of evolutionary algorithms and stratified strategies for training set selection in data mining," Applied Soft Computing, vol. 6, pp. 323-332, 2006. [DOI:10.1016/j.asoc.2005.02.006]
[13] S. del Río, V. López, J. M. Benítez and F. Herrera, "On the use of MapReduce for imbalanced big data using Random Forest," Information Sciences, vol. 285, pp. 112-137, 2014. [DOI:10.1016/j.ins.2014.03.043]
[14] D. R. Wilson and T. R. Martinez, "Reduction techniques for instance-based learning algorithms," Machine Learning, vol. 38, no. 3, pp. 257-286, 2000. [DOI:10.1023/A:1007626913721]
[15] H. Liu and H. Motoda, "On issues of instance selection," Data Mining and Knowledge Discovery, vol. 6, no. 2, pp. 115-130, 2002. [DOI:10.1023/A:1014056429969]
[16] D. R. Wilson and T. R. Martinez, "Instance pruning techniques," in Proceedings of the Fourteenth International Conference on Machine Learning (ICML), pp. 403-411, 1997.
[17] P. Jaccard, "Étude comparative de la distribution florale dans une portion des Alpes et des Jura," Bulletin de la Société Vaudoise des Sciences Naturelles, vol. 37, pp. 547-579, 1901.
[18] R. Kohavi and G. H. John, "Wrappers for feature subset selection," Artificial Intelligence, vol. 97, no. 1, pp. 273-324, 1997. [DOI:10.1016/S0004-3702(97)00043-X]
[19] W. Duch, T. Wieczorek, J. Biesiada and M. Blachnik, "Comparison of feature ranking methods based on information entropy," in IEEE International Joint Conference on Neural Networks, 2004.
[20] G. Chandrashekar and F. Sahin, "A survey on feature selection methods," Computers & Electrical Engineering, vol. 40, no. 1, pp. 16-28, 2014. [DOI:10.1016/j.compeleceng.2013.11.024]
[21] I. Kononenko, E. Šimec and M. Robnik-Šikonja, "Overcoming the myopia of inductive learning algorithms with RELIEFF," Applied Intelligence, vol. 7, no. 1, pp. 39-55, 1997. [DOI:10.1023/A:1008280620621]
[22] J. R. Quinlan, C4.5: Programs for Machine Learning, Morgan Kaufmann Publishers, 1993.
[23] K. Kira and L. A. Rendell, "The feature selection problem: Traditional methods and a new algorithm," in Proceedings of the Tenth National Conference on Artificial Intelligence (AAAI), pp. 129-134, 1992.
[24] M. Robnik-Šikonja and I. Kononenko, "Theoretical and empirical analysis of ReliefF and RReliefF," Machine Learning, vol. 53, no. 1-2, pp. 23-69, 2003. [DOI:10.1023/A:1025667309714]
[25] H. Liu and H. Motoda, Computational Methods of Feature Selection, CRC Press, 2007. [DOI:10.1201/9781584888796]
[26] K. Yu, X. Xu, M. Ester and H.-P. Kriegel, "Feature weighting and instance selection for collaborative filtering: An information-theoretic approach," Knowledge and Information Systems, vol. 5, no. 2, pp. 201-224, 2003. [DOI:10.1007/s10115-003-0089-6]
[27] T. Chen, X. Zhang, S. Jin and O. Kim, "Efficient classification using parallel and scalable compressed model and its application on intrusion detection," Expert Systems with Applications, vol. 41, pp. 5972-5983, 2014. [DOI:10.1016/j.eswa.2014.04.009]
[28] T. White, Hadoop: The Definitive Guide, O'Reilly Media, Inc., 2012.
[29] P. Perner, "Prototype-based classification," Applied Intelligence, vol. 28, no. 3, pp. 238-246, 2008. [DOI:10.1007/s10489-007-0064-0]
[30] C.-F. Tsai, W. Eberle and C.-Y. Chu, "Genetic algorithms in feature and instance selection," Knowledge-Based Systems, vol. 39, pp. 240-247, 2013. [DOI:10.1016/j.knosys.2012.11.005]
[31] D. Fragoudis, D. Meretakis and S. Likothanassis, "Integrating Feature and Instance Selection for Text Classification," in Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2002.
[32] H. Ahn and K.-j. Kim, "Bankruptcy prediction modeling with hybrid case-based reasoning and genetic algorithms approach," Applied Soft Computing, vol. 9, no. 2, pp. 599-607, 2009. [DOI:10.1016/j.asoc.2008.08.002]
[33] Z. Abbasi and M. Rahmani, "An Instance Selection Algorithm Based on ReliefF," International Journal on Artificial Intelligence Tools, vol. 28, no. 1, p. 1950001, 2019. [DOI:10.1142/S0218213019500015]
[34] I. Tomek, "An experiment with the edited nearest-neighbor rule," IEEE Transactions on Systems, Man, and Cybernetics, vol. 6, pp. 448-452, 1976. [DOI:10.1109/TSMC.1976.4309523]
[35] J. R. Quinlan, "Improved use of continuous attributes in C4.5," Journal of Artificial Intelligence Research, vol. 4, pp. 77-90, 1996. [DOI:10.1613/jair.279]
[36] I. Triguero, D. Peralta, J. Bacardit, S. García and F. Herrera, "MRPR: A MapReduce solution for prototype reduction in big data classification," Neurocomputing, vol. 150, pp. 331-345, 2015. [DOI:10.1016/j.neucom.2014.04.078]
[37] R. J. Hyndman and A. B. Koehler, "Another look at measures of forecast accuracy," International Journal of Forecasting, vol. 22, no. 4, pp. 679-688, 2006. [DOI:10.1016/j.ijforecast.2006.03.001]