Volume 14, Issue 3 (12-2017)                   JSDP 2017, 14(3): 143-160


Paknezhad M, Rezaeian M. Indoor Planar Modeling Using RGB-D Images. JSDP 2017; 14(3): 143-160
URL: http://jsdp.rcisp.ac.ir/article-1-490-en.html
Yazd University
Abstract:

In robotic applications, and especially in 3D map generation of indoor environments, analyzing RGB-D images has become a key problem. Mapping is one of the most important problems in building autonomous mobile robots, which are used in mine excavation, rescue missions in collapsed buildings, and even planetary exploration. Indoor mapping is likewise useful in search-and-rescue missions: with recent advances, mobile robots are deployed in hazardous settings such as radioactive areas or collapsing buildings, and having a map of the environment beforehand can boost the efficiency and effectiveness of the mission.

To digitize the environment, several 3D scans are needed, and these scans must be merged in a global coordinate system to create a correct, consistent model. This process is called registration. If the robot carrying the 3D scanner can localize itself accurately, registration can be performed directly from the robot's pose; however, because robot sensors are imprecise, self-localization is error-prone. Therefore, the geometric structure of overlapping 3D scans is used instead. To register the different point sets, the Iterative Closest Point (ICP) algorithm is used. ICP is the most common approach to aligning the point clouds of two consecutive image frames and follows a point-to-point scheme.

RGB and depth images captured by a Kinect are used in this study. To reduce the number of data points and build the 3D map faster, the depth images are converted to point clouds and then segmented according to the planes in the image. For this purpose, the RGB images are segmented with a region-growing algorithm that initially over-segments the image. The algorithm uses a stack data structure and the Euclidean distance in the Lab color space, which describes how closely two colors resemble each other. The goal is to assign a segment label to every pixel: an unlabeled pixel is merged into a neighboring segment if the Euclidean distance between its color and the segment's mean color is within a threshold, and choosing a small threshold yields the desired over-segmentation. A plane is then fitted to each segment, so that every segment is represented by a plane, and segments are merged based on the inner product of their normal vectors and the plane-fitting error. After merging, planes are fitted to the new segments again and a given number of points are generated on each plane. The ICP algorithm is run on these generated points to obtain the rotation and translation matrices. Because generating points on the planes yields far fewer points, the data are reduced and the algorithm runs faster. The results show that the proposed method increases the speed by 55 and 91 percent on average for consecutive and non-consecutive frames, respectively.
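To make the segmentation step concrete, the following is a minimal sketch of stack-based region growing on a Lab image, where a pixel joins a segment if the Euclidean distance between its Lab color and the segment's running mean color is below a threshold (a small threshold over-segments the image, as described above). The function name `region_grow_lab`, the 4-connected neighborhood, the running-mean update, and the default threshold are illustrative assumptions rather than details from the paper; converting RGB to Lab (e.g., with `skimage.color.rgb2lab`) is left to the caller.

```python
import numpy as np


def region_grow_lab(lab, threshold=8.0):
    """Stack-based region growing on a Lab image (H, W, 3 float array).

    A pixel joins a segment if the Euclidean distance between its Lab
    color and the segment's running mean color is below `threshold`.
    A small threshold produces an over-segmentation.
    Returns an integer label map of shape (H, W).
    """
    h, w, _ = lab.shape
    labels = np.full((h, w), -1, dtype=np.int32)
    current = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] != -1:
                continue
            # Start a new segment from this unlabeled seed pixel.
            stack = [(sy, sx)]
            labels[sy, sx] = current
            mean = lab[sy, sx].astype(np.float64)
            count = 1
            while stack:
                y, x = stack.pop()
                # Visit the 4-connected neighbors of the popped pixel.
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1:
                        if np.linalg.norm(lab[ny, nx] - mean) < threshold:
                            labels[ny, nx] = current
                            # Update the segment's running mean color.
                            count += 1
                            mean += (lab[ny, nx] - mean) / count
                            stack.append((ny, nx))
            current += 1
    return labels
```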
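The later steps (representing each segment by a plane, merging segments whose normals agree and whose joint fitting error is small, sampling a reduced set of points on each plane, and aligning frames with point-to-point ICP) can be sketched as below. This is a minimal illustration assuming NumPy and SciPy; the helper names (`fit_plane`, `should_merge`, `sample_plane`, `icp`), the SVD-based plane fit, the square sampling patch, and all threshold and iteration values are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np
from scipy.spatial import cKDTree


def fit_plane(points):
    """Least-squares plane fit: returns (centroid, unit normal).
    The normal is the right singular vector of the centred points
    with the smallest singular value."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt[-1]


def should_merge(pts_a, pts_b, angle_deg=10.0, max_rms=0.01):
    """Merge two segments only if their plane normals are nearly parallel
    and a single plane fitted to their union has a small RMS error.
    Both thresholds are illustrative placeholders."""
    _, n_a = fit_plane(pts_a)
    _, n_b = fit_plane(pts_b)
    if abs(np.dot(n_a, n_b)) < np.cos(np.radians(angle_deg)):
        return False
    merged = np.vstack([pts_a, pts_b])
    c, n = fit_plane(merged)
    return np.sqrt(np.mean(((merged - c) @ n) ** 2)) < max_rms


def sample_plane(centroid, normal, extent=1.0, n=200, rng=None):
    """Generate `n` points on the fitted plane inside a square patch,
    replacing the dense segment points with a much smaller set."""
    rng = np.random.default_rng() if rng is None else rng
    u = np.cross(normal, [1.0, 0.0, 0.0])          # first in-plane axis
    if np.linalg.norm(u) < 1e-6:                   # normal nearly parallel to x
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)                        # second in-plane axis
    coeffs = rng.uniform(-extent, extent, size=(n, 2))
    return centroid + coeffs[:, :1] * u + coeffs[:, 1:] * v


def icp(source, target, iters=30):
    """Point-to-point ICP: returns (R, t) with target ~ source @ R.T + t."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)                   # closest-point matches
        matched = target[idx]
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)      # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:              # guard against reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_m - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step     # accumulate the transform
    return R, t
```

In this sketch, each segment's dense points are replaced by the output of `sample_plane` before calling `icp`, which is the point-reduction idea that accounts for the reported speed-up.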
 

Full-Text [PDF 18951 kb]
Type of Study: Applied | Subject: Paper
Received: 2016/02/16 | Accepted: 2017/03/05 | Published: 2018/01/29 | ePublished: 2018/01/29


Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

© 2015 All Rights Reserved | Signal and Data Processing