FACTS & FIGURES

JCR Impact Factor: 0.700
JCR 5-Year IF: 0.700
SCOPUS CiteScore: 2.0
Issues per year: 3
Current issue: Feb 2025
Next issue: Jun 2025
Avg review time: 87 days
Avg acceptance to publication: 60 days
APC: 300 EUR


PUBLISHER

Stefan cel Mare University of Suceava
Faculty of Electrical Engineering and Computer Science
13, Universitatii Street
Suceava - 720229
ROMANIA

Print ISSN: 1582-7445
Online ISSN: 1844-7600
WorldCat: 643243560
doi: 10.4316/AECE


TRAFFIC STATS

3,648,403 unique visits
1,366,862 downloads
Since November 1, 2009





MOST RECENT ISSUES

Volume 25 (2025)
    » Issue 1 / 2025

Volume 24 (2024)
    » Issue 4 / 2024
    » Issue 3 / 2024
    » Issue 2 / 2024
    » Issue 1 / 2024

Volume 23 (2023)
    » Issue 4 / 2023
    » Issue 3 / 2023
    » Issue 2 / 2023
    » Issue 1 / 2023

Volume 22 (2022)
    » Issue 4 / 2022
    » Issue 3 / 2022
    » Issue 2 / 2022
    » Issue 1 / 2022

Volume 21 (2021)
    » Issue 4 / 2021
    » Issue 3 / 2021
    » Issue 2 / 2021
    » Issue 1 / 2021

View all issues








LATEST NEWS

2025-May-01
Starting in 2025, the Journal will be published three times a year, with issues appearing in February, June, and October.

2024-Jun-20
Clarivate Analytics published the InCites Journal Citation Reports for 2023. The InCites JCR Impact Factor of Advances in Electrical and Computer Engineering is 0.700 (0.700 without Journal self-cites), and the InCites JCR 5-Year Impact Factor is 0.600.

2023-Jun-28
Clarivate Analytics published the InCites Journal Citation Reports for 2022. The InCites JCR Impact Factor of Advances in Electrical and Computer Engineering is 0.800 (0.700 without Journal self-cites), and the InCites JCR 5-Year Impact Factor is 1.000.

2023-Jun-05
SCOPUS published the CiteScore for 2022, computed using an improved methodology: the citations received in 2019-2022 are summed and divided by the number of papers published in the same time frame. The CiteScore of Advances in Electrical and Computer Engineering for 2022 is 2.0. For "General Computer Science" we rank #134/233 and for "Electrical and Electronic Engineering" we rank #478/738.
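Written out, the computation described above amounts to the following (a restatement of the stated methodology, not SCOPUS's full specification):

\[
\mathrm{CiteScore}_{2022} = \frac{\text{citations received in 2019-2022 by papers published in 2019-2022}}{\text{number of papers published in 2019-2022}} = 2.0
\]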

2022-Jun-28
Clarivate Analytics published the InCites Journal Citation Reports for 2021. The InCites JCR Impact Factor of Advances in Electrical and Computer Engineering is 0.825 (0.722 without Journal self-cites), and the InCites JCR 5-Year Impact Factor is 0.752.

Read More »



Semantic Segmentation and Reconstruction of Indoor Scene Point Clouds

HAO, W., WEI, H., WANG, Y.
 

Download PDF (4,600 KB) | Citation | Downloads: 1,087 | Views: 1,448

Author keywords
point clouds, semantic segmentation, indoor scene reconstruction, slicing-projection method, template matching

References keywords
point(28), vision(15), clouds(14), semantic(13), reconstruction(13), recognition(13), indoor(13), segmentation(12), cloud(12), pattern(11)

About this article
Date of Publication: 2024-08-31
Volume 24, Issue 3, Year 2024, On page(s): 3 - 12
ISSN: 1582-7445, e-ISSN: 1844-7600
Digital Object Identifier: 10.4316/AECE.2024.03001
Web of Science Accession Number: 001306111400001
SCOPUS ID: 85203023424

Abstract
Automatic 3D reconstruction of indoor scenes remains a challenging task due to the incomplete and noisy nature of scanned data. We propose a semantic-guided method for reconstructing indoor scenes based on semantic segmentation of point clouds. First, a Multi-Feature Adaptive Aggregation Network is designed for semantic segmentation, assigning a semantic label to each point. Then, a novel slicing-projection method is proposed to segment and reconstruct the walls. Next, a hierarchical Euclidean clustering is proposed to separate the remaining points into individual objects. Finally, each object is replaced with the most similar CAD model from the database, using the Rotational Projection Statistics (RoPS) descriptor and the iterative closest point (ICP) algorithm. The selected template models are then deformed and transformed to fit the objects in the scene. Experimental results demonstrate that the proposed method achieves high-quality reconstruction even for defective scanned point clouds.
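To make the pipeline concrete, below is a minimal sketch of the two generic downstream stages, object separation by Euclidean-style clustering and template fitting with ICP, using the open-source Open3D library. This is not the authors' implementation: the paper uses a custom hierarchical Euclidean clustering and RoPS descriptors for model retrieval, whereas this sketch substitutes DBSCAN for the clustering, omits descriptor-based retrieval, and assumes a rough initial alignment for ICP. File names and parameters are illustrative.

```python
# Hedged sketch of the object-separation and template-fitting stages with
# Open3D. NOT the paper's method: DBSCAN stands in for the paper's
# hierarchical Euclidean clustering, and descriptor-based retrieval (RoPS)
# is omitted, so ICP is assumed to start from a rough initial alignment.
import numpy as np
import open3d as o3d

def separate_objects(pcd, eps=0.05, min_points=50):
    """Split a scene cloud (walls already removed) into object clusters."""
    labels = np.array(pcd.cluster_dbscan(eps=eps, min_points=min_points))
    return [pcd.select_by_index(np.where(labels == k)[0])
            for k in range(labels.max() + 1)]          # label -1 is noise

def fit_template(template, obj, threshold=0.02):
    """Refine a template pose against an observed object with point-to-point ICP."""
    result = o3d.pipelines.registration.registration_icp(
        template, obj, threshold, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation, result.fitness

# Illustrative usage; "scene_objects.ply" and "chair_template.ply" are
# hypothetical files.
scene = o3d.io.read_point_cloud("scene_objects.ply")
template = o3d.io.read_point_cloud("chair_template.ply")
for obj in separate_objects(scene):
    T, fitness = fit_template(template, obj)
    print(f"fitness={fitness:.3f}")   # higher fitness = better overlap
```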


References | Cited By

[1] Y. Yang et al., "Automatic 3D indoor scene modeling from single panorama," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018, 3926-3934.
[CrossRef] [Web of Science Times Cited 34] [SCOPUS Times Cited 45]


[2] C. Sun et al., "Indoor panorama planar 3D reconstruction via divide and conquer," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021, 11338-11347.
[CrossRef]


[3] L. Huan, X. Zheng, J. Gong, "GeoRec: Geometry-enhanced semantic 3D reconstruction of RGB-D indoor scenes," ISPRS Journal of Photogrammetry and Remote Sensing. 2022, 186, 301-314.
[CrossRef] [Web of Science Times Cited 19] [SCOPUS Times Cited 19]


[4] G. Chen et al., "Scene recognition with prototype-agnostic scene layout," IEEE Transactions on Image Processing. 2020, 29, 5877-5888.
[CrossRef] [Web of Science Times Cited 49] [SCOPUS Times Cited 56]


[5] M. Bassier, M. Vergauwen, "Unsupervised reconstruction of Building Information Modeling wall objects from point cloud data," Automation in Construction. 2020, 120, 103338.
[CrossRef] [Web of Science Times Cited 72] [SCOPUS Times Cited 86]


[6] M. Kim et al., "Automated extraction of geometric primitives with solid lines from unstructured point clouds for creating digital building models," Automation in Construction. 2023, 145, 104642.
[CrossRef] [Web of Science Times Cited 16] [SCOPUS Times Cited 16]


[7] Y. Guo et al., "Rotational projection statistics for 3D local surface description and object recognition," International Journal of Computer Vision. 2013, 105, 63-86.
[CrossRef] [Web of Science Times Cited 505] [SCOPUS Times Cited 613]


[8] G. Pintore et al., "State-of-the-art in automatic 3D reconstruction of structured indoor environments," Computer Graphics Forum. 2020, 39(2), 667-699.
[CrossRef] [Web of Science Times Cited 88] [SCOPUS Times Cited 103]


[9] S. Oesau, F. Lafarge, P. Alliez, "Indoor scene reconstruction using feature sensitive primitive extraction and graph-cut," ISPRS Journal of Photogrammetry and Remote Sensing. 2014, 90, 68-82.
[CrossRef] [Web of Science Times Cited 155] [SCOPUS Times Cited 200]


[10] F. Yang et al., "Automatic indoor reconstruction from point clouds in multi-room environments with curved walls," Sensors. 2019, 19(17), 3798.
[CrossRef] [Web of Science Times Cited 41] [SCOPUS Times Cited 47]


[11] C. Fotsing et al., "Volumetric wall detection in unorganized indoor point clouds using continuous segments in 2D grids," Automation in Construction. 2022, 141, 104462.
[CrossRef] [Web of Science Times Cited 16] [SCOPUS Times Cited 16]


[12] Y. Cui et al., "Automatic 3-D reconstruction of indoor environment with mobile laser scanning point clouds," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2019, 12(8), 3117-3130.
[CrossRef] [SCOPUS Times Cited 1]


[13] H. Fang, C. Pan, H. Huang, "Structure-aware indoor scene reconstruction via two levels of abstraction," ISPRS Journal of Photogrammetry and Remote Sensing. 2021, 178, 155-170.
[CrossRef] [Web of Science Times Cited 14] [SCOPUS Times Cited 16]


[14] J. Chen, Z. Kira, and Y. K. Cho, "Deep learning approach to point cloud scene understanding for automated scan to 3D reconstruction," Journal of Computing in Civil Engineering. 2019, 33(4), 04019027.
[CrossRef] [Web of Science Times Cited 112] [SCOPUS Times Cited 137]


[15] C. R. Qi et al., "PointNet: Deep learning on point sets for 3D classification and segmentation," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017.
[CrossRef] [Web of Science Times Cited 4003] [SCOPUS Times Cited 11391]


[16] H. Kim, C. Kim, "3D as-built modeling from incomplete point clouds using connectivity relations," Automation in Construction. 2021, 130, 103855.
[CrossRef] [Web of Science Times Cited 20] [SCOPUS Times Cited 24]


[17] Y. Wang et al., "Dynamic graph CNN for learning on point clouds," ACM Transactions on Graphics (TOG). 2019, 38(5), 1-12.
[CrossRef] [Web of Science Times Cited 4178] [SCOPUS Times Cited 5026]


[18] M. Ai, Z. Li, J. Shan, "Topologically consistent reconstruction for complex indoor structures from point clouds," Remote Sensing. 2021, 13(19), 3844.
[CrossRef] [Web of Science Times Cited 8] [SCOPUS Times Cited 10]


[19] C. R. Qi et al., "PointNet++: Deep hierarchical feature learning on point sets in a metric space," Advances in Neural Information Processing Systems. 2017, 30.
[CrossRef]


[20] T. Wang et al., "Semantics-and-Primitives-Guided Indoor 3D Reconstruction from Point Clouds," Remote Sensing. 2022, 14(19), 4820.
[CrossRef] [Web of Science Times Cited 10] [SCOPUS Times Cited 12]


[21] S. Tang et al., "BIM generation from 3D point clouds by combining 3D deep learning and improved morphological approach," Automation in Construction. 2022, 141, 104422.
[CrossRef] [Web of Science Times Cited 56] [SCOPUS Times Cited 62]


[22] J. Wei et al., "Automatic extraction and reconstruction of a 3D wireframe of an indoor scene from semantic point clouds," International Journal of Digital Earth. 2023, 16(1), 3239-3267.
[CrossRef] [Web of Science Times Cited 7] [SCOPUS Times Cited 8]


[23] M. Jiang et al., "PointSIFT: A SIFT-like network module for 3D point cloud semantic segmentation," arXiv preprint. 2018.
[CrossRef]


[24] Q. Hu et al., "RandLA-Net: Efficient semantic segmentation of large-scale point clouds," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020, 11108-11117.
[CrossRef]


[25] S. Fan et al., "SCF-Net: Learning spatial contextual features for large-scale point cloud segmentation," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021, 14504-14513.
[CrossRef] [Web of Science Times Cited 200] [SCOPUS Times Cited 239]


[26] S. Qiu, S. Anwar, N. Barnes, "Semantic segmentation for real point cloud scenes via bilateral augmentation and adaptive fusion," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021, 1757-1767.
[CrossRef]


[27] W. Hao, Y. Wang, W. Liang, "Slice-based building facade reconstruction from 3D point clouds," International Journal of Remote Sensing. 2018, 39(20), 6587-6606.
[CrossRef] [Web of Science Times Cited 12] [SCOPUS Times Cited 15]


[28] Z. Wu et al., "3D ShapeNets: A deep representation for volumetric shapes," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015, 1912-1920.
[CrossRef] [SCOPUS Times Cited 5226]


[29] I. Armeni et al., "3D semantic parsing of large-scale indoor spaces," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016, 1534-1543.
[CrossRef] [Web of Science Times Cited 971] [SCOPUS Times Cited 1631]


[30] T. Hackel et al., "Semantic3D.net: A new large-scale point cloud classification benchmark," ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences. 2017.
[CrossRef] [SCOPUS Times Cited 571]


[31] L. Landrieu, M. Simonovsky, "Large-scale point cloud semantic segmentation with superpoint graphs," Conference on Computer Vision and Pattern Recognition, CVPR. 2018, 4558-4567.
[CrossRef] [Web of Science Times Cited 979] [SCOPUS Times Cited 1203]


[32] H. Zhao et al., "PointWeb: Enhancing local neighborhood features for point cloud processing," Conference on Computer Vision and Pattern Recognition, CVPR. 2019, 5565-5573.
[CrossRef] [Web of Science Times Cited 604] [SCOPUS Times Cited 724]


[33] T. He et al., "Learning and memorizing representative prototypes for 3D point cloud semantic and instance segmentation," European Conference on Computer Vision, ECCV. 2020, 564-580.
[CrossRef] [SCOPUS Times Cited 29]


[34] N. Luo et al., "KVGCN: A KNN searching and VLAD combined graph convolutional network for point cloud segmentation," Remote Sensing. 2021, 13(5), 1003.
[CrossRef] [Web of Science Times Cited 12] [SCOPUS Times Cited 14]


[35] J. Liu et al., "Self-prediction for joint instance and semantic segmentation of point clouds," European Conference on Computer Vision, ECCV. 2020, 187-204.
[CrossRef]


[36] S. Qiu, S. Anwar, N. Barnes, "Semantic segmentation for real point cloud scenes via bilateral augmentation and adaptive fusion," Conference on Computer Vision and Pattern Recognition, CVPR. 2021, 1757-1767.
[CrossRef]


[37] Y. Ma et al., "Global context reasoning for semantic segmentation of 3D point clouds," Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2020, 2931-2940.
[CrossRef] [Web of Science Times Cited 73] [SCOPUS Times Cited 79]


[38] H. Liu et al., "Semantic context encoding for accurate 3D point cloud segmentation," IEEE Transactions on Multimedia. 2020, 23, 2045-2055.
[CrossRef] [Web of Science Times Cited 54] [SCOPUS Times Cited 64]




References Weight

Web of Science® Citations for all references: 12,308 TCR
SCOPUS® Citations for all references: 27,683 TCR

Web of Science® Average Citations per reference: 316 ACR
SCOPUS® Average Citations per reference: 710 ACR

TCR = Total Citations for References / ACR = Average Citations per Reference
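As a toy illustration of these definitions (hypothetical per-reference counts; the site's exact counting and rounding rules may differ):

```python
# Toy illustration of TCR and ACR as defined above, using hypothetical
# per-reference Web of Science citation counts.
wos_cited = [34, 19, 49, 72, 16]      # hypothetical counts for 5 references
tcr = sum(wos_cited)                  # Total Citations for References
acr = round(tcr / len(wos_cited))     # Average Citations per Reference
print(tcr, acr)                       # -> 190 38
```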

In 2010 we introduced, for the first time in scientific publishing, the term "References Weight" as a quantitative indication of the quality ... Read more

Citations for references updated on 2025-06-01 03:21 in 279 seconds.




Note1: Web of Science® is a registered trademark of Clarivate Analytics.
Note2: SCOPUS® is a registered trademark of Elsevier B.V.
Disclaimer: All queries to the respective databases were made by using the DOI record of every reference (where available). Due to technical problems beyond our control, the information is not always accurate. Please use the CrossRef link to visit the respective publisher site.

Copyright ©2001-2025
Faculty of Electrical Engineering and Computer Science
Stefan cel Mare University of Suceava, Romania


All rights reserved: Advances in Electrical and Computer Engineering is a registered trademark of the Stefan cel Mare University of Suceava. No part of this publication may be reproduced, stored in a retrieval system, photocopied, recorded or archived, without the written permission from the Editor. When authors submit their papers for publication, they agree that the copyright for their article be transferred to the Faculty of Electrical Engineering and Computer Science, Stefan cel Mare University of Suceava, Romania, if and only if the articles are accepted for publication. The copyright covers the exclusive rights to reproduce and distribute the article, including reprints and translations.

Permission for other use: The copyright owner's consent does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific written permission must be obtained from the Editor for such copying. Direct linking to files hosted on this website is strictly prohibited.

Disclaimer: Whilst every effort is made by the publishers and editorial board to see that no inaccurate or misleading data, opinions or statements appear in this journal, they wish to make it clear that all information and opinions formulated in the articles, as well as linguistic accuracy, are the sole responsibility of the author.



