Volume 19, Issue 1 (5-2022), JSDP 2022, 19(1): 111-124




Yazd University
Abstract:
The processing of point clouds is one of the growing areas in machine vision. With the advent of inexpensive depth sensors, there has been great interest in using point clouds to detect three-dimensional objects. In general, 3D object recognition methods are divided into two classes: local and global feature-based methods. In global feature-based methods, the entire shape of the model is described, while in local methods the geometric properties of the local area around a point are used to obtain the characteristics of that point. Unlike global methods, local methods do not require any segmentation and are more robust to clutter and occlusion. Local feature-based methods extract geometric features from local surfaces around specific points named keypoints, and the geometric features of a keypoint are encoded into a feature descriptor. How to describe the neighborhood around a keypoint is the main challenge of these methods. Commonly used local feature-based methods are often sensitive to noise, varying mesh resolution, and rigid transformations. To overcome such disadvantages, in this paper a new local feature descriptor based on the Mercator projection is proposed. The Mercator projection is one of the most popular 3D-to-2D projections; it preserves true distance, direction, and relative longitude and latitude between any two points in a point cloud. For evaluation, the proposed method is compared with several state-of-the-art descriptors. Its superiority over the other methods is shown using the Root Mean Square Error (RMSE), the Recall versus 1-Precision Curve (RPC), the correct registration rate, and the rotation and translation errors of registration; the results show that the method has good descriptive power and is robust to noise and varying mesh resolution.

Introduction
In this paper, we propose a new local descriptor that provides robust and precise geometric features. The geometric features are extracted using the Mercator projection of the neighborhood sphere. Our contributions are as follows: (1) the proposed descriptor learns directly from point clouds; (2) with the proposed method there is only one representation for each point, so the problem of multiple representations of a point is addressed; (3) it can accurately describe the geometric properties around a point. In addition, the Mercator projection has properties that make it well suited to representing point-cloud data: (4) it is a conformal projection, so it preserves true distances, directions, and relative longitudes and latitudes; and (5) it preserves the geometry of small elements, meaning the shapes of small regions are kept.

The proposed method
Given a query point p, a sphere of radius r centered at p determines the neighboring points. The Mercator projection is then used to map the sphere onto a plane, taking into account the local reference frame (LRF) previously suggested by Tombari et al. (2010b). The Mercator projection is a cylindrical projection proposed by G. Mercator in 1569, in which the surface of a sphere is mapped onto a plane; it preserves true distances, directions, and relative longitudes and latitudes. The Mercator projection of each point is given by the following two equations:
x = λ   (1)
y = ln(tan(π/4 + φ/2))   (2)
where λ is the longitude and φ is the latitude of a point on the sphere, and (x, y) is the corresponding point on the Cartesian map. To extract images as input to the Siamese network, we need ranges for the resulting x and y. The variable x lies in the interval [−π, π], but the range of y differs for the Mercator projection of each keypoint. Therefore, the minimum and maximum of y over all neighboring points are taken as the range of y, and a 30 × 30 histogram is computed: the Mercator projections of all neighbors are calculated and the number of points falling in each bin is counted. The histogram is then normalized by dividing each bin by the total number of neighboring points, which makes the descriptor more robust to noise and varying mesh resolution.
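To make the descriptor construction concrete, here is a minimal Python/NumPy sketch of the projection-and-histogram step described above. It assumes the neighbors have already been expressed in the keypoint's LRF (keypoint at the origin); the function name mercator_descriptor and its defaults are illustrative, not the authors' implementation.

```python
import numpy as np

def mercator_descriptor(neighbors_lrf, bins=30, eps=1e-9):
    """Sketch: Mercator-projection histogram for one keypoint.

    neighbors_lrf: (N, 3) array of neighbor coordinates already expressed in
    the keypoint's local reference frame (keypoint at the origin).
    Returns a flattened, normalized bins x bins histogram.
    """
    x, y, z = neighbors_lrf[:, 0], neighbors_lrf[:, 1], neighbors_lrf[:, 2]
    r = np.linalg.norm(neighbors_lrf, axis=1) + eps

    # Spherical angles of each neighbor: longitude lambda, latitude phi.
    lam = np.arctan2(y, x)                          # in [-pi, pi]
    phi = np.arcsin(np.clip(z / r, -1.0, 1.0))      # in [-pi/2, pi/2]

    # Mercator projection: u = lambda, v = ln(tan(pi/4 + phi/2)).
    # Latitudes are clipped slightly away from the poles so v stays finite.
    phi = np.clip(phi, -np.pi / 2 + 1e-3, np.pi / 2 - 1e-3)
    u = lam
    v = np.log(np.tan(np.pi / 4 + phi / 2))

    # 2D histogram: u has the fixed range [-pi, pi]; the range of v is taken
    # from the minimum and maximum over this keypoint's neighbors.
    hist, _, _ = np.histogram2d(u, v, bins=bins,
                                range=[[-np.pi, np.pi], [v.min(), v.max()]])

    # Normalize by the number of neighbors for robustness to varying mesh resolution.
    return (hist / len(neighbors_lrf)).ravel()
```

The resulting 30 × 30 normalized histogram can be treated as a small image, which is the kind of input the Siamese network mentioned above consumes.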

Results and discussion
The performance of the proposed method is evaluated on the Bologna dataset (Tombari et al., 2010c) and the John Burkardt dataset in terms of RMSE, RPC, the correct registration rate, and rotation and translation errors. The proposed method outperforms the other methods in terms of RPC, and the results show that it is robust to noise, rigid transformation, and varying mesh resolution.
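For reference, the sketch below illustrates one way a recall versus 1-precision curve (RPC) can be computed from descriptor matches. The nearest-neighbor distance-ratio matching and the ground_truth_pairs input are assumptions made for illustration; they are not necessarily the exact evaluation protocol used in the paper.

```python
import numpy as np

def rpc_curve(scene_desc, model_desc, ground_truth_pairs, thresholds):
    """Sketch: recall vs. 1-precision curve (RPC) from descriptor matching.

    scene_desc: (Ns, D) scene descriptors; model_desc: (Nm, D) model descriptors.
    ground_truth_pairs: set of (scene_index, model_index) true correspondences.
    thresholds: iterable of nearest-neighbor distance-ratio thresholds.
    """
    # Pairwise Euclidean distances between scene and model descriptors.
    d = np.linalg.norm(scene_desc[:, None, :] - model_desc[None, :, :], axis=2)

    # Two closest model descriptors for every scene descriptor.
    nn = np.argsort(d, axis=1)[:, :2]
    rows = np.arange(len(scene_desc))
    ratio = d[rows, nn[:, 0]] / np.maximum(d[rows, nn[:, 1]], 1e-12)

    curve = []
    for t in thresholds:
        accepted = np.where(ratio < t)[0]          # matches kept at this threshold
        correct = sum((int(i), int(nn[i, 0])) in ground_truth_pairs for i in accepted)
        recall = correct / max(len(ground_truth_pairs), 1)
        precision = correct / max(len(accepted), 1)
        curve.append((1.0 - precision, recall))
    return curve
```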

Article number: 9
Type of Study: Applicable | Subject: Paper
Received: 2020/08/9 | Accepted: 2022/01/15 | Published: 2022/06/22 | ePublished: 2022/06/22

References
[1] A. Aldoma et al., "CAD-model recognition and 6DOF pose estimation using 3D cues," in 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), IEEE, 2011, pp. 585-592. [DOI:10.1109/ICCVW.2011.6130296]
[2] N. Bayramoglu and A. A. Alatan, "Shape index SIFT: Range image recognition using local features," in 2010 20th International Conference on Pattern Recognition, 2010, pp. 352-355. [DOI:10.1109/ICPR.2010.95]
[3] P. J. Besl and N. D. McKay, "Method for registration of 3-D shapes," in Sensor Fusion IV: Control Paradigms and Data Structures, vol. 1611, International Society for Optics and Photonics, 1992, pp. 586-606.
[4] S. Bu, L. Wang, P. Han, Z. Liu, and K. Li, "3D shape recognition and retrieval based on multi-modality deep learning," Neurocomputing, vol. 259, pp. 183-193, 2017. [DOI:10.1016/j.neucom.2016.06.088]
[5] E. L. Eisenstein, The Printing Revolution in Early Modern Europe. Cambridge University Press, 2005. [DOI:10.1017/CBO9780511819230]
[6] D. Fehr, W. J. Beksi, D. Zermas, and N. Papanikolopoulos, "Covariance based point cloud descriptors for object detection and recognition," Computer Vision and Image Understanding, vol. 142, pp. 80-93, 2016. [DOI:10.1016/j.cviu.2015.06.008]
[7] M. A. Fischler and R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, no. 6, pp. 381-395, 1981. [DOI:10.1145/358669.358692]
[8] A. Frome, D. Huber, R. Kolluri, T. Bülow, and J. Malik, "Recognizing objects in range data using regional point descriptors," in European Conference on Computer Vision, Springer, 2004, pp. 224-237. [DOI:10.1007/978-3-540-24672-5_18]
[9] G. Georgakis, S. Karanam, Z. Wu, J. Ernst, and J. Košecká, "End-to-end learning of keypoint detector and descriptor for pose invariant 3D matching," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 1965-1973. [DOI:10.1109/CVPR.2018.00210]
[10] Y. Guo, F. Sohel, M. Bennamoun, M. Lu, and J. Wan, "Rotational projection statistics for 3D local surface description and object recognition," International Journal of Computer Vision, vol. 105, no. 1, pp. 63-86, 2013. [DOI:10.1007/s11263-013-0627-y]
[11] Y. Guo, F. Sohel, M. Bennamoun, M. Lu, and J. Wan, "An accurate and robust range image registration algorithm for 3D object modeling," IEEE Transactions on Multimedia, vol. 16, no. 5, pp. 1377-1390, 2014. [DOI:10.1109/TMM.2014.2316145]
[12] Y. Guo, F. Sohel, M. Bennamoun, M. Lu, and J. Wan, "3D object recognition in cluttered scenes with local surface features: A survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 11, pp. 2270-2287, 2014. [DOI:10.1109/TPAMI.2014.2316828] [PMID]
[13] A. E. Johnson and M. Hebert, "Using spin images for efficient object recognition in cluttered 3D scenes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 5, pp. 433-449, 1999. [DOI:10.1109/34.765655]
[14] S. H. Kasaei, A. M. Tomé, L. S. Lopes, and M. Oliveira, "GOOD: A global orthographic object descriptor for 3D object recognition and manipulation," Pattern Recognition Letters, vol. 83, pp. 312-320, 2016. [DOI:10.1016/j.patrec.2016.07.006]
[15] O. Kechagias-Stamatis and N. Aouf, "Histogram of distances for local surface description," in 2016 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2016, pp. 2487-2493. [DOI:10.1109/ICRA.2016.7487402]
[16] R. Lu, F. Zhu, Q. Wu, and Y. Kong, "LSAH: a fast and efficient local surface feature for point cloud registration," in Ninth International Conference on Graphic and Image Processing (ICGIP 2017), vol. 10615, International Society for Optics and Photonics, 2018, p. 106151G. [DOI:10.1117/12.2303809]
[17] Z.-C. Marton, D. Pangercic, N. Blodow, J. Kleinehellefort, and M. Beetz, "General 3D modelling of novel objects from a single view," in 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2010, pp. 3700-3705. [DOI:10.1109/IROS.2010.5650434]
[18] Z.-C. Marton, D. Pangercic, N. Blodow, and M. Beetz, "Combined 2D-3D categorization and classification for multimodal perception systems," The International Journal of Robotics Research, vol. 30, no. 11, pp. 1378-1402, 2011. [DOI:10.1177/0278364911415897]
[19] Z. C. Marton, R. B. Rusu, and M. Beetz, "On fast surface reconstruction methods for large and noisy point clouds," in 2009 IEEE International Conference on Robotics and Automation, 2009, pp. 3218-3223. [DOI:10.1109/ROBOT.2009.5152628]
[20] S. Quan, J. Ma, F. Hu, B. Fang, and T. Ma, "Local voxelized structure for 3D binary feature representation and robust registration of point clouds from low-cost sensors," Information Sciences, vol. 444, pp. 153-171, 2018. [DOI:10.1016/j.ins.2018.02.070]
[21] J. C. Rangel, J. Martinez-Gomez, C. Romero-González, I. Garcia-Varea, and M. Cazorla, "Semi-supervised 3D object recognition through CNN labeling," Applied Soft Computing, vol. 65, pp. 603-613, 2018. [DOI:10.1016/j.asoc.2018.02.005]
[22] M. Rezaei, M. Rezaeian, V. Derhami, F. Sohel, and M. Bennamoun, "Deep learning-based 3D local feature descriptor from Mercator projections," Computer Aided Geometric Design, vol. 74, p. 101771, 2019. [DOI:10.1016/j.cagd.2019.101771]
[23] R. B. Rusu, N. Blodow, Z. C. Marton, and M. Beetz, "Aligning point cloud views using persistent feature histograms," in 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2008, pp. 3384-3391. [DOI:10.1109/IROS.2008.4650967]
[24] R. B. Rusu, N. Blodow, and M. Beetz, "Fast point feature histograms (FPFH) for 3D registration," in 2009 IEEE International Conference on Robotics and Automation, IEEE, 2009, pp. 3212-3217. [DOI:10.1109/ROBOT.2009.5152473]
[25] R. B. Rusu, G. Bradski, R. Thibaux, and J. Hsu, "Fast 3D recognition and pose using the viewpoint feature histogram," in 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2010, pp. 2155-2162. [DOI:10.1109/IROS.2010.5651280]
[26] D. Salomon, Transformations and Projections in Computer Graphics. Springer Science & Business Media, 2007.
[27] F. Tombari, S. Salti, and L. Di Stefano, "Unique shape context for 3D data description," in Proceedings of the ACM Workshop on 3D Object Retrieval, 2010, pp. 57-62. [DOI:10.1145/1877808.1877821]
[28] F. Tombari, S. Salti, and L. Di Stefano, "Unique signatures of histograms for local surface description," in European Conference on Computer Vision, Springer, 2010, pp. 356-369. [DOI:10.1007/978-3-642-15558-1_26]
[29] W. Wohlkinger and M. Vincze, "Ensemble of shape functions for 3D object classification," in 2011 IEEE International Conference on Robotics and Biomimetics, IEEE, 2011, pp. 2987-2992. [DOI:10.1109/ROBIO.2011.6181760]
[30] J. Yang, Z. Cao, and Q. Zhang, "A fast and robust local descriptor for 3D point cloud registration," Information Sciences, vol. 346, pp. 163-179, 2016. [DOI:10.1016/j.ins.2016.01.095]
[31] J. Yang, Q. Zhang, and Z. Cao, "Multi-attribute statistics histograms for accurate and robust pairwise registration of range images," Neurocomputing, vol. 251, pp. 54-67, 2017. [DOI:10.1016/j.neucom.2017.04.015]
[32] J. Yang, Q. Zhang, Y. Xiao, and Z. Cao, "TOLDI: An effective and robust approach for 3D local shape description," Pattern Recognition, vol. 65, pp. 175-187, 2017. [DOI:10.1016/j.patcog.2016.11.019]
[33] D. Zai et al., "Pairwise registration of TLS point clouds using covariance descriptors and a non-cooperative game," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 134, pp. 15-29, 2017. [DOI:10.1016/j.isprsjprs.2017.10.001]
[34] "What are Point Clouds," Tech27.

Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.