Edge-preserving Filtering and Fuzzy Image Enhancement in Depth Images Captured by Realsense Cameras in Robotic Applications
TADIC, V., ODRY, A., BURKUS, E., KECSKES, I., KIRALY, Z., ODRY, P.
Author keywords
filtering algorithms, fuzzy logic, image enhancement, robots, stereo vision
About this article
Date of Publication: 2020-08-31
Volume 20, Issue 3, Year 2020, On page(s): 83 - 92
ISSN: 1582-7445, e-ISSN: 1844-7600
Digital Object Identifier: 10.4316/AECE.2020.03010
Web of Science Accession Number: 000564453800010
SCOPUS ID: 85090336328
Abstract
This paper presents the use of depth cameras in robotic applications and the effects of post-processing on the captured depth images. The performance of the depth cameras and of the post-processing image enhancement is evaluated with the aim of improving depth-based object detection. First, the edge-preserving exponential moving average (EMA) filter and the fuzzy contrast enhancement procedure are briefly introduced. Then, the use of depth cameras with post-processing methods is demonstrated on the example of painting robots. The stereo depth camera is essential in this robotic application, since it constitutes the initial step in a series of robotic operations whose goal is to detect and extract obstacles on walls that are not intended to be painted.
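The abstract names two post-processing steps: an edge-preserving EMA filter and a fuzzy contrast enhancement. The core idea of each can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the parameters `alpha`, `delta`, `crossover`, and `power` are assumed example values, and the single left-to-right row pass stands in for whatever scan order the paper actually uses.

```python
import numpy as np

def ema_edge_preserving(depth, alpha=0.5, delta=20):
    """One left-to-right pass of an edge-preserving exponential moving
    average over each row of a depth image. A pixel whose jump from the
    running average exceeds `delta` is treated as a depth edge: the
    average is reset there instead of smoothing, so object boundaries
    survive while flat regions are denoised."""
    out = depth.astype(float).copy()
    for r in range(out.shape[0]):
        prev = out[r, 0]
        for c in range(1, out.shape[1]):
            cur = out[r, c]
            if abs(cur - prev) <= delta:
                # flat region: blend the new sample into the running average
                prev = alpha * cur + (1 - alpha) * prev
                out[r, c] = prev
            else:
                # depth edge: keep the pixel as-is and restart the average
                prev = cur
    return out

def fuzzy_contrast(img, crossover=0.5, power=2):
    """Simple fuzzy intensification contrast enhancement: map 8-bit
    intensities to [0, 1] membership values, push memberships away from
    the crossover point (darker below it, brighter above it), and map
    back to 8-bit. Raising `power` strengthens the effect."""
    mu = img.astype(float) / 255.0
    lo = mu <= crossover
    out = np.empty_like(mu)
    out[lo] = crossover * (mu[lo] / crossover) ** power
    out[~lo] = 1 - (1 - crossover) * ((1 - mu[~lo]) / (1 - crossover)) ** power
    return (out * 255).astype(np.uint8)
```

On a noisy flat row the EMA pass pulls samples toward the running average, while a large depth discontinuity (e.g. a wall-to-obstacle boundary) passes through untouched; the fuzzy step then stretches the contrast of the enhanced image around the crossover intensity.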
Faculty of Electrical Engineering and Computer Science
Stefan cel Mare University of Suceava, Romania