Volume 18, Issue 4 (3-2022)                   JSDP 2022, 18(4): 49-68



Mohammadi Kashani M, Amiri S H. Scalable Image Annotation by Summarizing Training Samples into Labeled Prototypes. JSDP 2022; 18(4): 4
URL: http://jsdp.rcisp.ac.ir/article-1-1046-en.html
Shahid Rajaee Teacher Training University
Abstract:
With the growing number of images, fast search methods and intelligent filtering of images have become essential. To handle images in large datasets, relevant tags are assigned to each image to describe its content. Automatic Image Annotation (AIA) aims to automatically assign a group of keywords to an image based on its visual content. AIA frameworks have two main stages, feature extraction and tag assignment, both of which are important for reaching proper performance. In the first stage of the proposed method, we use deep models to obtain a visual representation of images: we apply different pre-trained Convolutional Neural Network (CNN) architectures to the input image, including VGG16, DenseNet169, and ResNet101. After passing the image through the CNN layers, we take a single feature vector from the penultimate layer, which yields a rich representation of the image's visual content. One advantage of the deep feature extractor is that it replaces multiple feature vectors with a single vector, so there is no need to combine multiple features.

In the second stage, called tag assignment, tags are transferred from training images to a test image. Our approach belongs to the search-based methods, which achieve high performance despite their simple structure, but are time-consuming because the test image must be compared against every training image to find similar images. Despite the effectiveness of automatic image annotation methods, providing a scalable method for large-scale datasets therefore remains challenging. To address this challenge, we propose a novel approach that summarizes the training database (images and their relevant tags) into a small number of prototypes. To this end, we apply a clustering algorithm to the visual descriptors of the training images to extract the visual part of the prototypes. Since the number of clusters is much smaller than the number of images, a good level of summarization is achieved. In the next step, we derive the labels of the prototypes from the labels of the training images: semantic labels are propagated from the training images to the prototypes through a label propagation process on a graph. This graph contains one node for each training image and one node for each prototype, so its node set is the union of training images and prototypes. To extract the edges of the graph, the visual feature of each node is coded over its K nearest neighboring nodes using the Locality-constrained Linear Coding (LLC) algorithm. After constructing this graph, a label propagation algorithm is applied to it to extract the labels of the prototypes. The result is a set of labeled prototypes that can be used to annotate any test image. To assign tags to an input image, we propose an adaptive thresholding method that finds the labels of a new image by linear interpolation from the labels of the learned prototypes.

The proposed method can reduce a training dataset to 22.6% of its original size, which considerably reduces annotation time: compared to state-of-the-art search-based methods such as 2PKNN, the proposed method is at least 4.2 times faster, while annotation performance in terms of precision, recall, and F1 is maintained across different datasets.
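The pipeline described above (penultimate-layer CNN features, clustering into prototypes, an LLC-based graph, label propagation, and interpolation-based tagging) can be sketched compactly. The following minimal Python sketch assumes PyTorch/torchvision and scikit-learn are available; all function names and parameter values (the number of prototypes, K, the LLC regularizer beta, the number of propagation iterations) are illustrative assumptions rather than the authors' implementation, and a fixed top-n cut stands in for the paper's adaptive thresholding.

```python
# Illustrative sketch of the prototype-based annotation pipeline
# (hypothetical names/parameters, not the authors' exact code).
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

# ---- Stage 1: deep feature extraction (penultimate-layer vector) ----
def build_extractor():
    # Pre-trained ResNet-101 with the final classifier removed, so the
    # forward pass returns one 2048-d feature vector per image.
    net = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
    net.fc = torch.nn.Identity()
    return net.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(net, pil_images):
    batch = torch.stack([preprocess(im) for im in pil_images])
    return net(batch).numpy()                      # (n_images, 2048)

# ---- Stage 2a: summarize training features into visual prototypes ----
def learn_prototypes(X_train, n_prototypes=500):   # n_prototypes << n_images
    km = KMeans(n_clusters=n_prototypes, n_init=10).fit(X_train)
    return km.cluster_centers_                     # visual part of prototypes

# ---- LLC coding (Wang et al., 2010): reconstruct each point from its
# ---- k nearest dictionary atoms with an l2-regularized, sum-to-one fit.
def llc_codes(X, dictionary, k=5, beta=1e-4):
    nn = NearestNeighbors(n_neighbors=k).fit(dictionary)
    _, idx = nn.kneighbors(X)
    codes = np.zeros((X.shape[0], dictionary.shape[0]))
    for i, nbrs in enumerate(idx):
        Z = dictionary[nbrs] - X[i]                # shift neighbors to origin
        C = Z @ Z.T + beta * np.eye(k)             # regularized local Gram
        w = np.linalg.solve(C, np.ones(k))
        codes[i, nbrs] = w / w.sum()               # enforce sum-to-one
    return codes

# ---- Stage 2b: propagate labels from images to prototypes on a graph
# ---- whose node set is the union of training images and prototypes.
def label_prototypes(X_train, Y_train, prototypes, k=5, n_iter=50):
    nodes = np.vstack([X_train, prototypes])
    W = llc_codes(nodes, nodes, k=k + 1)           # k+1 so self can be dropped
    np.fill_diagonal(W, 0.0)                       # no self-edges
    W = np.maximum(W, 0.0)                         # keep edges non-negative
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1e-12)  # row-stochastic
    n_img = X_train.shape[0]
    Y = np.vstack([Y_train,
                   np.zeros((prototypes.shape[0], Y_train.shape[1]))])
    for _ in range(n_iter):
        Y = W @ Y                                  # diffuse labels over edges
        Y[:n_img] = Y_train                        # clamp known image labels
    return Y[n_img:]                               # soft labels of prototypes

# ---- Annotation: interpolate a test image's labels from the prototypes.
# ---- (The paper proposes adaptive thresholding; fixed top-n is used here.)
def annotate(x_test, prototypes, proto_labels, k=5, n_tags=5):
    w = llc_codes(x_test[None, :], prototypes, k=k)[0]
    scores = w @ proto_labels                      # linear interpolation
    return np.argsort(scores)[::-1][:n_tags]       # indices of top-n tags
```

Because annotation only codes the test image over the learned prototypes rather than searching the full training set, the cost per test image scales with the number of prototypes; this is the source of the reported speed-up over search-based baselines such as 2PKNN.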
Article number: 4
Full-Text [PDF 1267 kb]
Type of Study: Applied | Subject: Paper
Received: 2019/07/14 | Accepted: 2020/08/18 | Published: 2022/03/21 | ePublished: 2022/03/21

References
[1] V. N. Murthy, E. F. Can, and R. Manmatha, "A hybrid model for automatic image annotation," in Proceedings of the International Conference on Multimedia Retrieval, pp. 355-369, 2014. [DOI:10.1145/2578726.2578774]
[2] S. Feng, R. Manmatha, and V. Lavrenko, "Multiple Bernoulli relevance models for image and video annotation," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2004.
[3] P. Ji, X. Gao, and X. Hu, "Automatic image annotation by combining generative and discriminant models," Neurocomputing, 2016. [DOI:10.1016/j.neucom.2016.09.108]
[4] L. Ballan, T. Uricchio, L. Seidenari, and A. Del Bimbo, "A cross-media model for automatic image annotation," in Proceedings of the International Conference on Multimedia Retrieval, p. 73, 2014. [DOI:10.1145/2578726.2578728]
[5] J. Jeon, V. Lavrenko, and R. Manmatha, "Automatic image annotation and retrieval using cross-media relevance models," in Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 119-126, 2003. [DOI:10.1145/860435.860459]
[6] J. Li and J. Z. Wang, "Automatic linguistic indexing of pictures by a statistical modeling approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, pp. 1075-1088, 2003. [DOI:10.1109/TPAMI.2003.1227984]
[7] A. Makadia and V. Pavlovic, "Baselines for image annotation," International Journal of Computer Vision, pp. 88-105, 2010. [DOI:10.1007/s11263-010-0338-6]
[8] J. Wang, J. Yang, F. Lv, and T. Huang, "Locality-constrained linear coding for image classification," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010. [DOI:10.1109/CVPR.2010.5540018]
[9] M. M. Kashani and S. H. Amiri, "Leveraging deep learning representation for search-based image annotation," in Artificial Intelligence and Signal Processing Conference (AISP), pp. 156-161, 2017. [DOI:10.1109/AISP.2017.8324073]
[10] V. N. Murthy, S. Maji, and R. Manmatha, "Automatic image annotation using deep learning representations," in Proceedings of the 5th ACM International Conference on Multimedia Retrieval, pp. 603-606, 2015. [DOI:10.1145/2671188.2749391]
[11] X. Li, T. Uricchio, L. Ballan, and M. Bertini, "Socializing the semantic gap: A comparative survey on image tag assignment, refinement, and retrieval," ACM Computing Surveys (CSUR), vol. 49, no. 1, p. 14, 2016. [DOI:10.1145/2906152]
[12] Q. Cheng, Q. Zhang, P. Fu, C. Tu, and S. Li, "A survey and analysis on automatic image annotation," Pattern Recognition, pp. 242-259, 2018. [DOI:10.1016/j.patcog.2018.02.017]
[13] D. M. Blei, A. Y. Ng, and M. I. Jordan, "Latent Dirichlet allocation," Journal of Machine Learning Research, vol. 3, pp. 993-1022, 2003.
[14] F. Monay and D. Gatica-Perez, "PLSA-based image auto-annotation: Constraining the latent space," in Proceedings of the 12th Annual ACM International Conference on Multimedia, pp. 348-351, 2004. [DOI:10.1145/1027527.1027608]
[15] A. Llorente, R. Manmatha, and S. Rüger, "Image retrieval using Markov random fields and global image features," in Proceedings of the ACM International Conference on Image and Video Retrieval, pp. 243-250, 2010. [DOI:10.1145/1816041.1816078]
[16] Y. Xiang, X. Zhou, T.-S. Chua, and C.-W. Ngo, "A revisit of generative model for automatic image annotation using Markov random fields," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1153-1160, 2009. [DOI:10.1109/CVPR.2009.5206518]
[17] I. Dimitrovski, D. Kocev, S. Loskovska, and S. Džeroski, "Hierarchical annotation of medical images," Pattern Recognition, vol. 44, no. 10-11, pp. 2436-2449, 2011. [DOI:10.1016/j.patcog.2011.03.026]
[18] J. Wang and J. Hu, "Multi-label image annotation via maximum consistency," in 17th IEEE International Conference on Image Processing (ICIP), pp. 2337-2340, 2010. [DOI:10.1109/ICIP.2010.5649863]
[19] H. Wang, H. Huang, and C. Ding, "Image annotation using bi-relational graph of images and semantic labels," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 793-800, 2011. [DOI:10.1109/CVPR.2011.5995379]
[20] Z. Lin, G. Ding, M. Hu, J. Wang, and X. Ye, "Image tag completion via image-specific and tag-specific linear sparse reconstructions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1618-1625, 2013. [DOI:10.1109/CVPR.2013.212]
[21] L. Wu, R. Jin, and A. K. Jain, "Tag completion for image retrieval," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 3, pp. 716-727, 2013. [DOI:10.1109/TPAMI.2012.124]
[22] Z. Qin, C.-G. Li, H. Zhang, and J. Guo, "Improving tag matrix completion for image annotation and retrieval," in Visual Communications and Image Processing (VCIP), pp. 1-4, 2015. [DOI:10.1109/VCIP.2015.7457871]
[23] X.-Y. Jing, F. Wu, Z. Li, R. Hu, and D. Zhang, "Multi-label dictionary learning for image annotation," IEEE Transactions on Image Processing, vol. 25, no. 6, pp. 2712-2725, 2016. [DOI:10.1109/TIP.2016.2549459]
[24] Y. Hou and Z. Lin, "Image tag completion and refinement by subspace clustering and matrix completion," in Visual Communications and Image Processing (VCIP), pp. 1-4, 2015. [DOI:10.1109/VCIP.2015.7457875]
[25] Z. Lin, G. Ding, M. Hu, Y. Lin, and S. S. Ge, "Image tag completion via dual-view linear sparse reconstructions," Computer Vision and Image Understanding, vol. 124, pp. 42-60, 2014. [DOI:10.1016/j.cviu.2014.03.012]
[26] K. Q. Weinberger and L. K. Saul, "Distance metric learning for large margin nearest neighbor classification," Journal of Machine Learning Research, vol. 10, pp. 207-244, 2009.
[27] E. P. Xing, M. I. Jordan, S. J. Russell, and A. Y. Ng, "Distance metric learning with application to clustering with side-information," in Advances in Neural Information Processing Systems, pp. 521-528, 2003.
[28] S. C. Hoi, W. Liu, M. R. Lyu, and W.-Y. Ma, "Learning distance metrics with contextual constraints for image retrieval," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 2072-2078, 2006.
[29] Y. Verma and C. V. Jawahar, "Image annotation by propagating labels from semantic neighbourhoods," International Journal of Computer Vision, vol. 121, no. 1, pp. 126-148, 2017. [DOI:10.1007/s11263-016-0927-0]
[30] M. Guillaumin, T. Mensink, J. Verbeek, and C. Schmid, "TagProp: Discriminative metric learning in nearest neighbor models for image auto-annotation," in IEEE 12th International Conference on Computer Vision, pp. 309-316, 2009. [DOI:10.1109/ICCV.2009.5459266]
[31] L. Wu, S. C. Hoi, R. Jin, J. Zhu, and N. Yu, "Distance metric learning from uncertain side information with application to automated photo tagging," in Proceedings of the 17th ACM International Conference on Multimedia, pp. 135-144, 2009. [DOI:10.1145/1631272.1631293]
[32] A. Bar-Hillel, T. Hertz, N. Shental, and D. Weinshall, "Learning a Mahalanobis metric from equivalence constraints," Journal of Machine Learning Research, vol. 6, pp. 937-965, 2005.
[33] F. Liu, T. Xiang, T. M. Hospedales, W. Yang, and C. Sun, "Semantic regularisation for recurrent image annotation," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4160-4168, 2017. [DOI:10.1109/CVPR.2017.443]
[34] J. Johnson, L. Ballan, and L. Fei-Fei, "Love thy neighbors: Image annotation by exploiting image metadata," in Proceedings of the IEEE International Conference on Computer Vision, pp. 4624-4632, 2015. [DOI:10.1109/ICCV.2015.525]
[35] H.-F. Yu, P. Jain, P. Kar, and I. Dhillon, "Large-scale multi-label learning with missing labels," in International Conference on Machine Learning, pp. 593-601, 2014.
[36] Y. Verma and C. Jawahar, "Exploring SVM for image annotation in presence of confusing labels," in British Machine Vision Conference (BMVC), pp. 1-25, 2013. [DOI:10.5244/C.27.25]
[37] B. Hariharan, L. Zelnik-Manor, M. Varma, and S. Vishwanathan, "Large scale max-margin multi-label classification with priors," in Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 423-430, 2010.
[38] Y. Li, Y. Song, and J. Luo, "Improving pairwise ranking for multi-label image classification," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3617-3625, 2017. [DOI:10.1109/CVPR.2017.199]
[39] T. Lan and G. Mori, "A max-margin riffled independence model for image tag ranking," in IEEE Conference on Computer Vision and Pattern Recognition, pp. 3103-3110, 2013. [DOI:10.1109/CVPR.2013.399]
[40] Y. Yang, W. Zhang, and Y. Xie, "Image automatic annotation via multi-view deep representation," Journal of Visual Communication and Image Representation, vol. 33, pp. 368-377, 2015. [DOI:10.1016/j.jvcir.2015.10.006]
[41] H. K. Shooroki and M. A. Z. Chahooki, "Selection of effective training instances for scalable automatic image annotation," Multimedia Tools and Applications, vol. 76, no. 7, pp. 9643-9666, 2017. [DOI:10.1007/s11042-016-3572-2]
[42] S. H. Amiri and M. Jamzad, "Leveraging multi-modal fusion for graph-based image annotation," Journal of Visual Communication and Image Representation, vol. 55, pp. 816-828, 2018. [DOI:10.1016/j.jvcir.2018.08.012]
[43] R. Rad and M. Jamzad, "Image annotation using multi-view non-negative matrix factorization with a different number of basis vectors," Journal of Visual Communication and Image Representation, vol. 46, pp. 1-12, 2017. [DOI:10.1016/j.jvcir.2017.03.005]
[44] M. M. Kalayeh, H. Idrees, and M. Shah, "NMF-KNN: Image annotation using weighted multi-view non-negative matrix factorization," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 184-191, 2014. [DOI:10.1109/CVPR.2014.31]
[45] Y. Sun, Q. Liu, J. Tang, and D. Tao, "Learning discriminative dictionary for group sparse representation," IEEE Transactions on Image Processing, vol. 23, no. 9, pp. 3816-3828, 2014. [DOI:10.1109/TIP.2014.2331760]
[46] C. Deng, X. Liu, Y. Mu, and J. Li, "Large-scale multi-task image labeling with adaptive relevance discovery and feature hashing," Signal Processing, vol. 112, pp. 137-145, 2015. [DOI:10.1016/j.sigpro.2014.07.017]
[47] J. Wang and G. Li, "A multi-modal hashing learning framework for automatic image annotation," in IEEE Second International Conference on Data Science in Cyberspace (DSC), pp. 14-21, 2017. [DOI:10.1109/DSC.2017.48]
[48] C. Wang, S. Yan, L. Zhang, and H.-J. Zhang, "Multi-label sparse coding for automatic image annotation," in IEEE Conference on Computer Vision and Pattern Recognition, pp. 1643-1650, 2009. [DOI:10.1109/CVPR.2009.5206866]
[49] Q. Zhang and B. Li, "Dictionary learning in visual computing," Synthesis Lectures on Image, Video, and Multimedia Processing, vol. 8, no. 2, pp. 1-151, 2015. [DOI:10.2200/S00640ED1V01Y201504IVM018]
[50] F. Wang and C. Zhang, "Label propagation through linear neighborhoods," IEEE Transactions on Knowledge and Data Engineering, vol. 20, pp. 55-67, 2008. [DOI:10.1109/TKDE.2007.190672]
[51] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[52] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016. [DOI:10.1109/CVPR.2016.90]
[53] G. Huang and Z. Liu, "Densely connected convolutional networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. [DOI:10.1109/CVPR.2017.243]

Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.