Volume 17, Issue 2 (9-2020)   JSDP 2020, 17(2): 71-84


Nemati Khalil Abad F, Hadizadeh H, Ebrahimi Moghadam A, Khademi Darah M. Just Noticeable Difference Estimation Using Visual Saliency in Images. JSDP 2020; 17(2): 71-84
URL: http://jsdp.rcisp.ac.ir/article-1-899-en.html
Ferdowsi University of Mashhad
Abstract:
Due to physiological and physical limitations of the brain and the eye, the human visual system (HVS) cannot perceive changes in a visual signal whose magnitude is below a certain threshold, the so-called just-noticeable distortion (JND) threshold. Visual attention (VA) provides a mechanism for selecting particular aspects of a visual scene so as to reduce the computational load on the brain. According to current knowledge, VA is believed to be driven by "visual saliency": a region of a visual scene is said to be visually salient if it possesses characteristics that make it stand out from its surroundings and draw our attention to it.

In most existing research on JND threshold estimation, the sensitivity of the HVS is assumed to be uniform across the scene, and the effects of visual attention (driven by visual saliency) are ignored. Several studies have shown that visual sensitivity is higher in salient areas that attract more attention, so the JND thresholds are lower at those points, and vice versa; in other words, visual saliency modulates JND thresholds. Taking the effect of visual saliency on the JND threshold into account is therefore not only reasonable but necessary.

In this paper, we present an improved non-uniform model for estimating the JND thresholds of images that accounts for the mechanism of visual attention and exploits visual saliency, which makes different parts of an image unequally important. The proposed model, which can build on any existing uniform JND model, adjusts the JND threshold of each pixel according to its visual saliency using a nonlinear modulation function. The parameters of the nonlinear function are obtained through an optimization procedure, yielding an improved JND model. Four choices make the proposed model efficient in terms of computational simplicity, accuracy, and applicability: a nonlinear modulation function with minimal computational complexity; a base JND model selected for its simplicity and accuracy; a computational visual-saliency model that accurately identifies salient areas; and an efficient cost function solved by choosing an appropriate objective image quality assessment (IQA) metric.

To evaluate the proposed model, a set of objective and subjective experiments was performed on 10 images selected from the MIT database. In the subjective experiments, a two-alternative forced-choice (2AFC) method was used to compare perceived image quality; in the objective experiments, the SSIM and IW-SSIM metrics were used. The experimental results demonstrate that the proposed model is significantly superior to existing models in the subjective experiments and, on average, outperforms the compared models in the objective experiments. An analysis of computational complexity further shows that the proposed model is faster than the compared models.
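The abstract does not reproduce the paper's actual modulation function or its optimized parameters, so the following Python sketch only illustrates the general idea of saliency-modulated JND. The function name `modulate_jnd`, the power-law form, and the parameters `a`, `b`, and `gamma` are illustrative assumptions, not the authors' model.

```python
import numpy as np

def modulate_jnd(jnd_base, saliency, a=0.7, b=1.3, gamma=2.0):
    """Illustrative saliency-modulated JND map (hypothetical parameters).

    Scales a base (uniform-sensitivity) JND map pixel-wise by a nonlinear
    function of normalized saliency: highly salient pixels (s -> 1) get a
    smaller multiplier (down to `a`), lowering their thresholds, while
    non-salient pixels (s -> 0) get a larger one (up to `b`).
    """
    # Normalize the saliency map to [0, 1].
    s = (saliency - saliency.min()) / (np.ptp(saliency) + 1e-12)
    # Nonlinear (power-law) modulation weight per pixel.
    w = b - (b - a) * s ** gamma
    return jnd_base * w
```

In the paper, the base JND map can come from any existing uniform JND model, and the parameters of the nonlinear function are tuned by optimization rather than fixed by hand as above.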
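A protocol widely used in the JND literature (e.g., [3], [10]) for comparing JND models, though not spelled out in this abstract, is to inject noise of JND magnitude with a random sign at every pixel and then measure the quality of the contaminated image with an objective metric such as SSIM: a better JND model hides more noise at the same measured quality. A minimal sketch, assuming 8-bit grayscale images and scikit-image's SSIM implementation:

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def inject_jnd_noise(image, jnd, rng=None):
    """Add +/- JND-magnitude noise to every pixel (random sign)."""
    rng = np.random.default_rng() if rng is None else rng
    signs = rng.choice([-1.0, 1.0], size=image.shape)
    noisy = image.astype(np.float64) + signs * jnd
    return np.clip(noisy, 0, 255).astype(np.uint8)

# Usage: a higher SSIM score at equal injected-noise energy suggests a
# better JND model, since the distortion it predicts is less visible.
# score = ssim(image, inject_jnd_noise(image, jnd_map), data_range=255)
```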
Full-Text [PDF 4230 kb]
Type of Study: Research | Subject: Paper
Received: 2018/09/13 | Accepted: 2019/09/02 | Published: 2020/09/14 | ePublished: 2020/09/14

References
[1] A. B. Watson, Digital Images and Human Vision. The MIT Press, 1993.
[2] F. A. A. Kingdom, Psychophysics: A Practical Introduction. Academic Press, 2009.
[3] X. K. Yang, W. S. Lin, Z. K. Lu, E. P. Ong, and S. S. Yao, "Just noticeable distortion model and its applications in video coding," Signal Process.: Image Commun., vol. 20, no. 7, pp. 662-680, 2005. [DOI:10.1016/j.image.2005.04.001]
[4] X. Yang, W. Lin, Z. Lu, E. Ong, and S. Yao, "Motion-compensated residue pre-processing in video coding based on just-noticeable-distortion profile," IEEE Trans. Circuits Syst. Video Technol., vol. 15, no. 6, pp. 742-752, 2005. [DOI:10.1109/TCSVT.2005.848313]
[5] H. R. Wu, A. R. Reibman, W. Lin, F. Pereira, and S. S. Hemami, "Perceptual visual signal compression and transmission," Proceedings of the IEEE, vol. 101, no. 9, pp. 2025-2043, 2013. [DOI:10.1109/JPROC.2013.2262911]
[6] C. H. Chou and K. C. Liu, "A perceptually tuned watermarking scheme for color images," IEEE Trans. Image Process., vol. 19, no. 11, pp. 2966-2982, 2010. [DOI:10.1109/TIP.2010.2052261]
[7] W. Lin and C. J. Kuo, "Perceptual visual quality metrics: A survey," J. Visual Communication and Image Representation, vol. 22, no. 4, pp. 297-312, 2011. [DOI:10.1016/j.jvcir.2011.01.005]
[8] A. Liu, W. Lin, M. Paul, C. Deng, and F. Zhang, "Just noticeable difference for images with decomposition model for separating edge and textured regions," IEEE Trans. Circuits Syst. Video Technol., vol. 20, no. 11, pp. 1648-1652, 2010. [DOI:10.1109/TCSVT.2010.2087432]
[9] X. Zhang, W. Lin, and P. Xue, "Just-noticeable difference estimation with pixels in images," J. Vis. Commun. Image Represent., vol. 19, no. 1, pp. 30-41, 2008. [DOI:10.1016/j.jvcir.2007.06.001]
[10] C. H. Chou and Y. C. Li, "A perceptually tuned subband image coder based on the measure of just-noticeable distortion profile," IEEE Trans. Circuits Syst. Video Technol., vol. 5, no. 6, pp. 467-476, 1995. [DOI:10.1109/76.475889]
[11] Z. Wei and K. Ngan, "Spatio-temporal just noticeable distortion profile for grey scale image/video in DCT domain," IEEE Trans. Circuits Syst. Video Technol., vol. 19, no. 3, pp. 337-346, 2009. [DOI:10.1109/TCSVT.2009.2013518]
[12] J. Wu, W. Lin, G. Shi, X. Wang, and F. Li, "Pattern masking estimation in image with structural uncertainty," IEEE Trans. Image Process., vol. 22, no. 12, pp. 4892-4904, 2013. [DOI:10.1109/TIP.2013.2279934]
[13] J. Wu, L. Li, W. Dong, G. Shi, W. Lin, and C. J. Kuo, "Enhanced just noticeable difference model for images with pattern complexity," IEEE Trans. Image Process., vol. 26, no. 6, pp. 2682-2693, 2017. [DOI:10.1109/TIP.2017.2685682]
[14] M. Banitalebi-Dehkordi, A. Ebrahimi-Moghadam, M. Khademi, and H. Hadizadeh, "Compressed-sampling-based image saliency detection in the wavelet domain," JSDP, vol. 16, no. 4, pp. 59-72, 2020.
[15] L. Itti, G. Rees, and J. K. Tsotsos, Neurobiology of Attention. Academic Press, 2005.
[16] A. Borji and L. Itti, "State-of-the-art in visual attention modeling," IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 1, pp. 185-207, 2013. [DOI:10.1109/TPAMI.2012.89]
[17] L. Itti, J. Braun, and C. Koch, "Modeling the modulatory effect of attention on human spatial vision," Advances in Neural Information Processing Systems (NIPS), MA, USA: MIT Press, vol. 14, pp. 1247-1254, 2002.
[18] Z. Lu, W. Lin, X. Yang, E. Ong, and S. Yao, "Modeling visual attention's modulatory aftereffects on visual sensitivity and quality evaluation," IEEE Trans. Image Process., vol. 14, no. 11, pp. 1928-1942, 2005. [DOI:10.1109/TIP.2005.854478]
[19] Y. Niu, M. Kyan, L. Ma, A. Beghdadi, and S. Krishnan, "Visual saliency's modulatory effect on just noticeable distortion profile and its application in image watermarking," Signal Process.: Image Commun., vol. 28, no. 8, pp. 917-928, 2013. [DOI:10.1016/j.image.2012.07.009]
[20] H. Hadizadeh, "A saliency-modulated just-noticeable-distortion model with non-linear saliency modulation functions," Pattern Recognition Letters, vol. 84, pp. 49-55, 2016. [DOI:10.1016/j.patrec.2016.08.011]
[21] H. Hadizadeh, "Energy-efficient images," IEEE Trans. Image Process., vol. 26, no. 6, pp. 2882-2891, 2017. [DOI:10.1109/TIP.2017.2690523]
[22] H. Hadizadeh, A. Rajati, and I. V. Bajić, "Saliency-guided just noticeable distortion estimation using the normalized Laplacian pyramid," IEEE Signal Processing Letters, vol. 24, 2017. [DOI:10.1109/LSP.2017.2717946]
[23] J. Wu, L. Li, W. Dong, G. Shi, W. Lin, and C. J. Kuo, "Enhanced just noticeable difference model for images with pattern complexity," IEEE Trans. Image Process., vol. 26, no. 6, pp. 2682-2693, 2017. [DOI:10.1109/TIP.2017.2685682]
[24] M. Cornia, L. Baraldi, G. Serra, and R. Cucchiara, "Predicting human eye fixations via an LSTM-based saliency attentive model," 2017. [Online]. Available: https://arxiv.org/abs/1611.09571
[25] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600-612, 2004. [DOI:10.1109/TIP.2003.819861]
[26] L. Zhang, L. Zhang, X. Mou, and D. Zhang, "FSIM: A feature similarity index for image quality assessment," IEEE Trans. Image Process., vol. 20, no. 8, pp. 2378-2386, 2011. [DOI:10.1109/TIP.2011.2109730]
[27] L. Zhang, Y. Shen, and H. Li, "VSI: A visual saliency-induced index for perceptual image quality assessment," IEEE Trans. Image Process., vol. 23, no. 10, pp. 4270-4281, 2014. [DOI:10.1109/TIP.2014.2346028]
[28] Z. Wang and Q. Li, "Information content weighting for perceptual image quality assessment," IEEE Trans. Image Process., vol. 20, no. 5, pp. 1185-1198, May 2011. [DOI:10.1109/TIP.2010.2092435]
[29] L. Zhang, Z. Gu, and H. Li, "SDSP: A novel saliency detection method by combining simple priors," Proc. IEEE Int. Conf. Image Process., pp. 171-175, Sep. 2013. [DOI:10.1109/ICIP.2013.6738036]
[30] A. Borji, M.-M. Cheng, H. Jiang, and J. Li, "Salient object detection: A benchmark," IEEE Trans. Image Process., vol. 24, no. 12, pp. 5706-5722, 2015. [DOI:10.1109/TIP.2015.2487833]
[31] T. Judd, K. Ehinger, F. Durand, and A. Torralba, "Learning to predict where humans look," Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2106-2113, 2009. [DOI:10.1109/ICCV.2009.5459462]
[32] http://saliency.mit.edu/results_cat2000.html
[33] M. M. Taylor and C. D. Creelman, "PEST: Efficient estimates on probability functions," J. Acoust. Soc. Am., vol. 41, pp. 782-787, 1967. [DOI:10.1121/1.1910407]
[34] M. Uzair and R. D. Dony, "Estimating just-noticeable distortion for images/videos in pixel domain," IET Image Processing, vol. 11, no. 8, pp. 559-567, 2017. [DOI:10.1049/iet-ipr.2016.1120]
[35] C. Wang, X. Han, W. Wan, J. Li, J. Sun, and M. Xu, "Visual saliency based just noticeable difference estimation in DWT domain," Information, vol. 9, no. 7, p. 178, 2018. [DOI:10.3390/info9070178]

Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.