Volume 16, Issue 3 (12-2019) | JSDP 2019, 16(3): 148-129

Zandifar M, Tahmoresnezhad J. Sample-oriented Domain Adaptation for Image Classification. JSDP. 2019; 16 (3) :148-129
URL: http://jsdp.rcisp.ac.ir/article-1-847-en.html
Urmia University of Technology
Abstract:
Image processing performs operations on an image in order to obtain an enhanced image or to extract useful information from it. Conventional image processing algorithms perform poorly in scenarios where the training images (source domain) used to learn the model have a different distribution from the test images (target domain). Moreover, many real-world applications suffer from a limited number of labeled training samples and therefore benefit from related available labeled datasets to train the model. Since there is a distribution difference across the source and target domains (the domain shift problem), a classifier learned on the training set may perform poorly on the test set. Transfer learning and domain adaptation are two outstanding solutions to this challenge: they employ available datasets, even ones with significant differences in distribution and properties, to transfer knowledge from a related domain to the target domain. The main assumption in the domain shift problem is that the marginal or the conditional distribution of the source and the target data differs. Distribution adaptation explicitly minimizes predefined distance measures to reduce the difference in the marginal distribution, the conditional distribution, or both. In this paper, we address a challenging scenario in which the source and target domains differ in marginal distributions and the target images have no labeled data. Most prior works have explored the following two learning strategies independently for adapting domains: feature matching and instance reweighting. In the instance-reweighting approach, samples in the source data are weighted individually so that the distribution of the weighted source data is aligned with that of the target data; a classifier is then trained on the weighted source data.
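The instance-reweighting idea above can be illustrated with a toy density-ratio estimate: each source sample is weighted by an estimated ratio of target to source density, so source points that resemble the target domain count more. This is a minimal numpy sketch, not the estimator used in the paper; the kernel density estimate, bandwidth, and function names are illustrative assumptions.

```python
import numpy as np

def density_ratio_weights(Xs, Xt, bandwidth=1.0):
    """Weight each source sample by an estimated target/source density
    ratio, using simple Gaussian kernel density estimates (toy sketch)."""
    def kde(X_ref, X_query, h):
        # pairwise squared distances between query and reference points
        d2 = ((X_query[:, None, :] - X_ref[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * h ** 2)).mean(axis=1)

    p_t = kde(Xt, Xs, bandwidth)   # estimated target density at source points
    p_s = kde(Xs, Xs, bandwidth)   # estimated source density at source points
    w = p_t / (p_s + 1e-12)
    return w / w.sum() * len(w)    # normalize so the mean weight is 1

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(200, 2))   # source cloud centered at 0
Xt = rng.normal(1.0, 1.0, size=(200, 2))   # target cloud with shifted mean
w = density_ratio_weights(Xs, Xt)
# Source points lying closer to the target cloud receive larger weights,
# so a classifier trained on (Xs, w) leans toward target-like samples.
```

As the abstract notes, such reweighting discards the influence of unrelated source samples, which is precisely what shrinks the effective training set and motivates combining it with feature matching.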
This approach can effectively eliminate source samples unrelated to the target data, but it reduces the number of samples in the adapted source data, which increases the generalization error of the trained classifier. Conversely, the feature-transform approach creates a feature map such that the distributions of both datasets are aligned while both datasets remain well distributed in the transformed feature space. In this paper, we show that both strategies are important and inevitable when the domain difference is substantially large. Our proposed sample-oriented Domain Adaptation for Image Classification (DAIC) aims to reduce the domain difference by jointly matching the features and reweighting the instances across images in a principled dimensionality reduction procedure, and constructs a new feature representation that is invariant to both the distribution difference and the irrelevant instances. We extend the nonlinear Bregman divergence to measure the difference in marginal distributions, and integrate it with Fisher's linear discriminant analysis (FLDA) to construct a feature representation that is effective and robust under substantial distribution differences. DAIC exploits pseudo labels of the target data in an iterative manner until the model converges. We consider three types of cross-domain image classification data that are widely used to evaluate visual domain adaptation algorithms: object (Office+Caltech-256), face (PIE), and digit (USPS, MNIST). We use all three datasets and construct 34 cross-domain problems. The Office+Caltech-256 dataset is a benchmark for cross-domain object recognition, containing 10 overlapping categories from the following four domains: Amazon (A), Webcam (W), DSLR (D), and Caltech-256 (C). Therefore, 4 × 3 = 12 cross-domain adaptation tasks are constructed, namely A → W, ..., C → D. The USPS (U) and MNIST (M) datasets are widely used in computer vision and pattern recognition tasks.
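The effect of matching marginal distributions can be illustrated with the simplest possible gap statistic: the squared distance between source and target feature means (a linear-kernel MMD). The paper's actual measure is a nonlinear Bregman divergence integrated with FLDA; this numpy sketch is only a stand-in showing how aligning features shrinks the marginal gap.

```python
import numpy as np

def marginal_gap(Xs, Xt):
    """Squared distance between source and target feature means
    (linear-kernel MMD). A toy stand-in for the paper's nonlinear
    Bregman-divergence measure of the marginal difference."""
    diff = Xs.mean(axis=0) - Xt.mean(axis=0)
    return float(diff @ diff)

rng = np.random.default_rng(1)
Xs = rng.normal(0.0, 1.0, size=(300, 4))   # source domain features
Xt = rng.normal(1.5, 1.0, size=(300, 4))   # target domain, shifted marginal
gap_before = marginal_gap(Xs, Xt)
# A trivial "feature transform" (centering each domain) aligns the
# first moments and drives this marginal gap to (numerically) zero.
gap_after = marginal_gap(Xs - Xs.mean(0), Xt - Xt.mean(0))
```

Real feature-matching methods minimize such a gap jointly with a discriminative criterion, rather than centering alone, so that the transformed space stays useful for classification.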
We conduct two handwriting recognition tasks, i.e., USPS → MNIST and MNIST → USPS. PIE is a benchmark dataset for face recognition and contains 41,368 face images of size 32×32 from 68 individuals. The images were taken by 13 synchronized cameras and 21 flashes, under varying poses, illuminations, and expressions. The PIE dataset consists of five subsets depending on pose: PIE1 (C05, left pose), PIE2 (C07, upward pose), PIE3 (C09, downward pose), PIE4 (C27, frontal pose), and PIE5 (C29, right pose). Thus, we can construct 20 cross-domain problems, i.e., P1 → P2, P1 → P3, ..., P5 → P4. We compare our proposed DAIC with two baseline machine learning methods, i.e., NN and Fisher linear discriminant analysis (FLDA), and nine state-of-the-art domain adaptation methods for image classification (including TSL, DAM, TJM, FIDOS, and LRSR). Since these methods are dimensionality reduction approaches, we train a classifier on the labeled training data (e.g., an NN classifier) and then apply it to the test data to predict the labels of the unlabeled target data. DAIC efficiently preserves and utilizes the specific information among the samples from different domains. The obtained results indicate that DAIC outperforms several state-of-the-art adaptation methods even when the distribution difference is substantially large.
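The evaluation protocol described above (train a nearest-neighbour classifier on labeled, adapted source features, then predict labels for the unlabeled target features) can be sketched in plain numpy. The toy two-cluster data below is an assumption for illustration; it is not any of the benchmark datasets.

```python
import numpy as np

def nn_predict(X_train, y_train, X_test):
    """1-nearest-neighbour prediction: each test point takes the label
    of its closest training point (the baseline NN classifier)."""
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return y_train[d2.argmin(axis=1)]

rng = np.random.default_rng(2)
# toy "adapted" labeled source features: two well-separated classes
Xs = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
ys = np.array([0] * 50 + [1] * 50)
# unlabeled target features drawn from the same two clusters
Xt = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
yt_true = np.array([0] * 20 + [1] * 20)

yt_pred = nn_predict(Xs, ys, Xt)
accuracy = (yt_pred == yt_true).mean()
```

Because the compared methods are dimensionality reduction approaches, the same simple classifier is applied after each of them, so accuracy differences reflect the quality of the adapted representation rather than the classifier.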
Full-Text [PDF 6090 kb]
Type of Study: Research | Subject: Paper
Received: 2018/03/13 | Accepted: 2019/07/03 | Published: 2020/01/07 | ePublished: 2020/01/07

[1] S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–1359, 2010.
[2] L. Shao, F. Zhu, and X. Li, “Transfer learning for visual categorization: A survey,” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 5, pp. 1019–1034, 2015.
[3] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. W. Vaughan, “A theory of learning from different domains,” Machine Learning, vol. 79, no. 1-2, pp. 151–175, 2010.
[4] J. Tahmoresnezhad and S. Hashemi, “A generalized kernel-based random k-sample sets method for transfer learning,” Iran J Sci Technol Trans Electrical Eng, vol. 39, pp. 193–207, 2015.
[5] S. Si, D. Tao, and B. Geng, “Bregman divergence-based regularization for transfer subspace learning,” IEEE Transactions on Knowledge and Data Engineering, 2010.
[6] L. M. Bregman, “The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex programming,” USSR Computational Mathematics and Mathematical Physics, vol. 7, pp. 200–217, 1967.
[7] M. Long, J. Wang, G. Ding, S. J. Pan, and P. Yu, “Adaptation regularization: A general framework for transfer learning,” IEEE Trans. Knowl. Data Eng., vol. 26, pp. 1076–1089, 2013.
[8] X. Li, M. Fang, J. J. Zhang, and J. Wu, “Sample selection for visual domain adaptation via sparse coding,” Signal Processing: Image Communication, vol. 44, pp. 92–100, 2016.
[9] J. Tahmoresnezhad and S. Hashemi, “Visual domain adaptation via transfer feature learning,” Knowl Inf Syst, vol. 50, no. 2, pp. 585–605, 2016.
[10] M. Long, J. Wang, G. Ding, J. Sun, and P. Yu, “Transfer feature learning with joint distribution adaptation,” in Proc. IEEE International Conference on Computer Vision, pp. 2200–2207, 2013.
[11] M. Ghifary, D. Balduzzi, W. B. Kleijn, and M. Zhang, “Scatter component analysis: A unified framework for domain adaptation and domain generalization,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PP, no. 99, pp. 1–1, 2016.
[12] R. A. Fisher, “The use of multiple measurements in taxonomic problems,” Annals of Eugenics, vol. 7, no. 2, pp. 179–188, 1936.
[13] K. Saenko, B. Kulis, M. Fritz, and T. Darrell, “Adapting visual category models to new domains,” in Proceedings of the European Conference on Computer Vision, pp. 213–226, 2010.
[14] J. J. Hull, “A database for handwritten text recognition research,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 16, no. 5, pp. 550–554, 1994.
[15] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[16] T. Sim, S. Baker, and M. Bsat, “The CMU pose, illumination, and expression (PIE) database,” in Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 53–58, 2002.
[17] C. V. Dinh, R. P. W. Duin, I. Piqueras-Salazar, and M. Loog, “FIDOS: A generalized Fisher based feature extraction method for domain shift,” Pattern Recognition, vol. 46, no. 9, pp. 2510–2518, 2013.
[18] L. Duan, D. Xu, and I. W. Tsang, “Domain adaptation from multiple sources: A domain-dependent regularization approach,” IEEE Trans. Neural Netw. Learn. Syst., vol. 23, pp. 504–518, 2012.
[19] M. Long, J. Wang, G. Ding, J. Sun, and P. S. Yu, “Transfer joint matching for unsupervised domain adaptation,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 1410–1417, 2014.
[20] Y. Xu, X. Fang, J. Wu, X. Li, and D. Zhang, “Discriminative transfer subspace learning via low-rank and sparse representation,” IEEE Transactions on Image Processing, vol. 25, no. 2, pp. 850–863, 2016.
[21] J. Tahmoresnezhad and S. Hashemi, “Visual domain adaptation via transfer feature learning,” Knowledge and Information Systems, vol. 50, no. 2, pp. 585–605, 2017.
[22] J. Liu, J. Li, and K. Lu, “Coupled local–global adaptation for multi-source transfer learning,” Neurocomputing, vol. 275, pp. 247–254, 2018.
[23] L. Luo, X. Wang, S. Hu, C. Wang, Y. Tang, and L. Chen, “Close yet distinctive domain adaptation,” arXiv preprint arXiv:1704.04235, 2017.
[24] W. Dai, Q. Yang, G.-R. Xue, and Y. Yu, “Boosting for transfer learning,” in Proceedings of the 24th International Conference on Machine Learning, pp. 193–200, 2007.
[25] Y. Tsuboi, H. Kashima, S. Hido, S. Bickel, and M. Sugiyama, “Direct density ratio estimation for large-scale covariate shift adaptation,” Information and Media Technologies, vol. 4, no. 2, pp. 529–546, 2009.
[26] J. Quionero-Candela, M. Sugiyama, A. Schwaighofer, and N. D. Lawrence, Dataset Shift in Machine Learning, The MIT Press, 2009.
[27] S. J. Pan, X. Ni, J.-T. Sun, Q. Yang, and Z. Chen, “Cross-domain sentiment classification via spectral feature alignment,” in Proceedings of the 19th International Conference on World Wide Web, ACM, pp. 751–760, 2010.
[28] S. J. Pan, I. W. Tsang, J. T. Kwok, and Q. Yang, “Domain adaptation via transfer component analysis,” IEEE Transactions on Neural Networks, vol. 22, no. 2, pp. 199–210, 2011.
[29] X. Shi, Q. Liu, W. Fan, and P. S. Yu, “Transfer across completely different feature spaces via spectral embedding,” IEEE Transactions on Knowledge and Data Engineering, vol. 25, no. 4, pp. 906–918, 2013.
[30] R. Aljundi, R. Emonet, D. Muselet, and M. Sebban, “Landmarks-based kernelized subspace alignment for unsupervised domain adaptation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 56–63, 2015.
[31] M. Gheisari and M. S. Baghshah, “Joint predictive model and representation learning for visual domain adaptation,” Engineering Applications of Artificial Intelligence, vol. 58, pp. 157–170, 2017.
[32] Z. Ding, M. Shao, and Y. Fu, “Deep low-rank coding for transfer learning,” in Proceedings of the 24th International Conference on Artificial Intelligence, AAAI Press, pp. 3453–3459, 2015.
[33] M. Long, J. Wang, G. Ding, J. Sun, and P. Yu, “Transfer feature learning with joint distribution adaptation,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 2200–2207, 2013.
[34] H. Li, T. Jiang, and K. Zhang, “Efficient and robust feature extraction by maximum margin criterion,” IEEE Transactions on Neural Networks, vol. 17, no. 1, pp. 157–165, 2006.
[35] M. Long, J. Wang, G. Ding, S. J. Pan, and P. S. Yu, “Adaptation regularization: A general framework for transfer learning,” IEEE Transactions on Knowledge and Data Engineering, vol. 26, no. 5, pp. 1076–1089, 2014.
[36] Y. Xu, S. J. Pan, H. Xiong, Q. Wu, R. Luo, H. Min, and H. Song, “A unified framework for metric transfer learning,” IEEE Trans. Knowl. Data Eng., vol. 29, no. 6, pp. 1158–1171, 2017.
[37] S. Z. Seyyedsalehi and S. A. Seyyedsalehi, “Improving nonlinear manifold separator model to the face recognition by a single image of per person,” Signal and Data Processing, vol. 12, no. 1, pp. 3–16, 2015.
[38] S. Ahmadkhani and P. Adibi, “Supervised probability component analysis mixture model in a lossless dimensionality reduction framework for face recognition,” Signal and Data Processing, vol. 12, no. 4, pp. 53–65, 2016.

© 2015 All Rights Reserved | Signal and Data Processing