Volume 14, Issue 2 (9-2017) | JSDP 2017; 14(2): 43-58





Kianisarkaleh A, Ghassemian M H. Modified Nonparametric Discriminant Analysis for Classification of Hyperspectral Images with Limited Training Samples. JSDP 2017; 14(2): 43-58.
URL: http://jsdp.rcisp.ac.ir/article-1-344-en.html
Professor, Tarbiat Modares University
Abstract:

Feature extraction plays an important role in improving hyperspectral image classification. Compared with parametric methods, nonparametric feature extraction methods perform better when the classes are not normally distributed, and they can extract more features than parametric methods can. Nonparametric feature extraction methods use nonparametric scatter matrices to compute the transformation matrix. Nonparametric Discriminant Analysis (NDA) is one such method: it forms nonparametric scatter matrices from local means of samples and a weight function. The local mean is calculated from the k nearest neighbors of each sample, and the weight function emphasizes boundary samples when forming the between-class scatter matrix. In this paper, a modified NDA (MNDA) is proposed to improve NDA. In MNDA, the number of neighboring samples used to compute the local mean is determined by the position of each sample in feature space. MNDA also uses new weight functions in scatter matrix formation: the suggested weight functions emphasize boundary samples when forming the between-class scatter matrix and samples close to the class mean when forming the within-class scatter matrix. Moreover, the within-class scatter matrix is regularized to avoid singularity. Experimental results on the Indian Pines and Salinas images show that MNDA outperforms other parametric and nonparametric feature extraction methods. For the Indian Pines data set, the maximum average classification accuracy is 80.34%, obtained with 18 training samples, a support vector machine (SVM) classifier, and 10 features extracted by MNDA. For the Salinas data set, the maximum average classification accuracy is 94.31%, obtained with 18 training samples, an SVM classifier, and 9 features extracted by MNDA.
Experiments show that, using the suggested weight functions and the regularized within-class scatter matrix, the proposed method obtains better results in hyperspectral image classification with limited training samples.
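The scatter-matrix construction described above can be sketched in code. The exact weight functions and the adaptive, position-dependent neighbor count are specific to MNDA and given in the paper, so the minimal NumPy sketch below is an illustrative assumption: it uses the classical NDA-style weight (emphasizing samples near the class boundary), a fixed k for the local means, and a simple trace-based regularization of the within-class scatter to avoid singularity.

```python
import numpy as np

def local_mean(x, samples, k):
    """Mean of the k nearest neighbours of x among `samples`.
    (In this simple sketch a sample counts among its own-class neighbours.)"""
    d = np.linalg.norm(samples - x, axis=1)
    idx = np.argsort(d)[:k]
    return samples[idx].mean(axis=0)

def nda_scatter(X1, X2, k=3, alpha=1.0, reg=1e-3):
    """Nonparametric between-class scatter Sb and regularized within-class
    scatter Sw for two classes (rows of X1, X2 are samples)."""
    n_feat = X1.shape[1]
    Sb = np.zeros((n_feat, n_feat))
    Sw = np.zeros((n_feat, n_feat))
    for X_own, X_other in ((X1, X2), (X2, X1)):
        for x in X_own:
            m_own = local_mean(x, X_own, k)      # local mean in own class
            m_other = local_mean(x, X_other, k)  # local mean in other class
            d_own = np.linalg.norm(x - m_own)
            d_other = np.linalg.norm(x - m_other)
            # boundary-emphasizing weight: close to 1/2 for samples near the
            # class boundary, close to 0 for samples deep inside one class
            w = min(d_own, d_other) ** alpha / (
                d_own ** alpha + d_other ** alpha + 1e-12)
            diff_b = (x - m_other)[:, None]
            Sb += w * diff_b @ diff_b.T
            diff_w = (x - m_own)[:, None]
            Sw += diff_w @ diff_w.T
    # regularize Sw so it stays invertible with few training samples
    Sw += reg * (np.trace(Sw) / n_feat) * np.eye(n_feat)
    return Sb, Sw

# usage: extracted features are eigenvectors of inv(Sw) @ Sb with the
# largest eigenvalues, onto which the data are projected
rng = np.random.default_rng(0)
X1 = rng.normal(0.0, 1.0, (20, 4))
X2 = rng.normal(2.0, 1.0, (20, 4))
Sb, Sw = nda_scatter(X1, X2, k=3)
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
```

Because the scatter matrices are nonparametric (built from per-sample local means rather than a single class mean), the rank of Sb is not limited to the number of classes minus one, which is why such methods can extract more features than parametric discriminant analysis.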
 

Full-Text [PDF 6004 kb]
Type of Study: Research | Subject: Paper
Received: 2015/03/14 | Accepted: 2016/10/17 | Published: 2017/10/21 | ePublished: 2017/10/21


