Advances in Electrical and Computer Engineering (AECE)

FACTS & FIGURES

JCR Impact Factor: 0.700
JCR 5-Year IF: 0.700
SCOPUS CiteScore: 1.8
Issues per year: 4
Current issue: May 2024
Next issue: Aug 2024
Avg review time: 56 days
Avg accept to publ: 60 days
APC: 300 EUR


PUBLISHER

Stefan cel Mare
University of Suceava
Faculty of Electrical Engineering and
Computer Science
13, Universitatii Street
Suceava - 720229
ROMANIA

Print ISSN: 1582-7445
Online ISSN: 1844-7600
WorldCat: 643243560
doi: 10.4316/AECE


TRAFFIC STATS

2,699,457 unique visits
1,066,313 downloads
Since November 1, 2009




MOST RECENT ISSUES

 Volume 24 (2024)
 
     »   Issue 2 / 2024
 
     »   Issue 1 / 2024
 
 
 Volume 23 (2023)
 
     »   Issue 4 / 2023
 
     »   Issue 3 / 2023
 
     »   Issue 2 / 2023
 
     »   Issue 1 / 2023
 
 
 Volume 22 (2022)
 
     »   Issue 4 / 2022
 
     »   Issue 3 / 2022
 
     »   Issue 2 / 2022
 
     »   Issue 1 / 2022
 
 
 Volume 21 (2021)
 
     »   Issue 4 / 2021
 
     »   Issue 3 / 2021
 
     »   Issue 2 / 2021
 
     »   Issue 1 / 2021
 
 
  View all issues  


FEATURED ARTICLE

Application of the Voltage Control Technique and MPPT of Stand-alone PV System with Storage, HIVZIEFENDIC, J., VUIC, L., LALE, S., SARIC, M.
Issue 1/2022







LATEST NEWS

2024-Jun-20
Clarivate Analytics published the InCites Journal Citation Reports for 2023. The InCites JCR Impact Factor of Advances in Electrical and Computer Engineering is 0.700 (0.700 without Journal self-cites), and the InCites JCR 5-Year Impact Factor is 0.600.

2023-Jun-28
Clarivate Analytics published the InCites Journal Citation Reports for 2022. The InCites JCR Impact Factor of Advances in Electrical and Computer Engineering is 0.800 (0.700 without Journal self-cites), and the InCites JCR 5-Year Impact Factor is 1.000.

2023-Jun-05
SCOPUS published the CiteScore for 2022, computed by using an improved methodology, counting the citations received in 2019-2022 and dividing the sum by the number of papers published in the same time frame. The CiteScore of Advances in Electrical and Computer Engineering for 2022 is 2.0. For "General Computer Science" we rank #134/233 and for "Electrical and Electronic Engineering" we rank #478/738.

2022-Jun-28
Clarivate Analytics published the InCites Journal Citation Reports for 2021. The InCites JCR Impact Factor of Advances in Electrical and Computer Engineering is 0.825 (0.722 without Journal self-cites), and the InCites JCR 5-Year Impact Factor is 0.752.

2022-Jun-16
SCOPUS published the CiteScore for 2021, computed by using an improved methodology, counting the citations received in 2018-2021 and dividing the sum by the number of papers published in the same time frame. The CiteScore of Advances in Electrical and Computer Engineering for 2021 is 2.5, the same as for 2020 but better than all our previous results.
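
The CiteScore computation described in the news items above is simple arithmetic: citations received over a four-year window divided by the documents published in the same window. The short Python sketch below illustrates it with made-up placeholder numbers, not AECE's actual citation or paper counts.

# Illustrative CiteScore arithmetic (hypothetical numbers, not AECE's actual counts).
citations_in_window = 250   # e.g. citations received in 2018-2021 (hypothetical)
papers_in_window = 100      # documents published in 2018-2021 (hypothetical)
citescore = citations_in_window / papers_in_window
print(round(citescore, 1))  # prints 2.5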

Read More »


    
 

  2/2022 - 3

Attention-Based Joint Semantic-Instance Segmentation of 3D Point Clouds

HAO, W., WANG, H., LIANG, W., ZHAO, M., XIAO, Z.
 
View the paper record and citations in Google Scholar
See the authors' profiles in SCOPUS, IEEE Xplore, and Web of Science

Download PDF (3,331 KB) | Citation | Downloads: 673 | Views: 1,481

Author keywords
computer graphics, object segmentation, feature extraction, pattern recognition, machine learning

References keywords
point(24), segmentation(22), pattern(19), instance(17), vision(15), semantic(15), recognition(15), cvpr(14), clouds(11), learning(8)

About this article
Date of Publication: 2022-05-31
Volume 22, Issue 2, Year 2022, On page(s): 19 - 28
ISSN: 1582-7445, e-ISSN: 1844-7600
Digital Object Identifier: 10.4316/AECE.2022.02003
Web of Science Accession Number: 000810486800003
SCOPUS ID: 85131726998

Abstract
In this paper, we propose an attention-based instance and semantic segmentation joint approach, termed ABJNet, for addressing the instance and semantic segmentation of 3D point clouds simultaneously. First, a point feature enrichment (PFE) module is used to enrich the segmentation network's input data by indicating the relative importance of each point's neighbors. Then, a more efficient attention pooling operation is designed to establish a novel module for extracting point cloud features. Finally, an efficient attention-based joint segmentation module (ABJS) is proposed for combining semantic features and instance features in order to improve both segmentation tasks. We evaluate the proposed attention-based joint semantic-instance segmentation neural network (ABJNet) on a variety of indoor scene datasets, including S3DIS and ScanNet V2. Experimental results demonstrate that our method outperforms the state-of-the-art method in 3D instance segmentation and significantly outperforms it in 3D semantic segmentation.
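
For readers who want a concrete picture of the attention pooling mentioned in the abstract, the short Python (PyTorch) sketch below shows one minimal way to weight and aggregate the features of each point's neighbors. It is only an illustration of the general mechanism under our own assumptions; it is not the authors' ABJNet, PFE, or ABJS implementation, and all class and variable names are hypothetical.

import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    # Minimal attention pooling over K neighbor features per point
    # (illustrative only; not the modules described in the paper).
    def __init__(self, feat_dim):
        super().__init__()
        # Small learned scorer; softmax over the K neighbors yields the weights.
        self.score = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, 1))

    def forward(self, neighbor_feats):
        # neighbor_feats: (batch, num_points, K, feat_dim)
        attn = torch.softmax(self.score(neighbor_feats), dim=2)  # (batch, num_points, K, 1)
        return (attn * neighbor_feats).sum(dim=2)                # (batch, num_points, feat_dim)

# Example: pool 16 neighbor features of dimension 64 for 1024 points.
pool = AttentionPool(feat_dim=64)
x = torch.randn(2, 1024, 16, 64)
print(pool(x).shape)  # torch.Size([2, 1024, 64])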


References | Cited By

[1] J. Wu, J. Jiao, Q. Yang, Z. Zha, X. Chen, "Ground-aware point cloud semantic segmentation for autonomous driving," Proceedings of the 27th ACM International Conference on Multimedia. 2019, pp.971-979.
[CrossRef] [Web of Science Times Cited 17] [SCOPUS Times Cited 19]


[2] Y. Nie, J. Hou, X. Han and M. Nießner, "RfD-Net: Point scene understanding by semantic instance reconstruction," IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 4606-4616.
[CrossRef] [Web of Science Times Cited 31] [SCOPUS Times Cited 33]


[3] X. Wang, S. Liu, X. Shen, C. Shen, J. Jia, "Associatively segmenting instances and semantics in point clouds," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019, pp.4096-4105.
[CrossRef] [Web of Science Times Cited 149] [SCOPUS Times Cited 200]


[4] C. R. Qi, L. Yi, H. Su, L. J. Guibas, "PointNet++: Deep hierarchical feature learning on point sets in a metric space," International Conference on Neural Information Processing Systems, 2017, pp. 5105-5114.
[CrossRef]


[5] S. Qiu, S. Anwar, N. Barnes, "Semantic segmentation for real point cloud scenes via bilateral augmentation and adaptive fusion," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021, pp.1757-1767.
[CrossRef] [Web of Science Times Cited 124] [SCOPUS Times Cited 159]


[6] S. Fan, Q. Dong, F. Zhu, Y. Lv, P. Ye, F.Y. Wang, "SCF-Net: Learning spatial contextual features for large-scale point cloud segmentation," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021, pp. 14499-14508.
[CrossRef] [Web of Science Times Cited 110] [SCOPUS Times Cited 151]


[7] Y. Su, W. Liu, Z. Yuan, et al., "DLA-Net: Learning dual local attention features for semantic segmentation of large-scale building facade point clouds," Pattern Recognit. 123: 108372, 2022.
[CrossRef] [Web of Science Times Cited 19] [SCOPUS Times Cited 24]


[8] J. Hou, A. Dai, M. Nießner, "3D-SIS: 3D semantic instance segmentation of RGB-D scans," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019, pp. 4421-4430.
[CrossRef] [Web of Science Times Cited 218] [SCOPUS Times Cited 313]


[9] L. Yi, W. Zhao, H. Wang, M. Sung, L. Guibas, "GSPN: Generative shape proposal network for 3D instance segmentation in point cloud," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019, pp. 3942-3951.
[CrossRef] [Web of Science Times Cited 152] [SCOPUS Times Cited 215]


[10] B. Yang, J. Wang, R. Clark, Q. Hu, S. Wang, A. Markham, "Learning object bounding boxes for 3D instance segmentation on point clouds," Advances in Neural Information Processing Systems, 2019, vol. 32.
[CrossRef]


[11] F. Zhang, C. Guan, J. Fang, S. Bai, R. Yang, P. Torr, V. Prisacariu, "Instance segmentation of Lidar point clouds," IEEE International Conference on Robotics and Automation. 2020, pp.9448-9455.
[CrossRef] [Web of Science Times Cited 25] [SCOPUS Times Cited 43]


[12] W. Wang, R. Yu, Q. Huang, U. Neumann, "SGPN: Similarity group proposal network for 3D point cloud instance segmentation," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2020, pp.2569-2578.
[CrossRef] [Web of Science Times Cited 336] [SCOPUS Times Cited 413]


[13] C. R. Qi, H. Su, K. Mo, L. J. Guibas, "PointNet: Deep learning on point sets for 3D classification and segmentation," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017, pp.77-85.
[CrossRef] [Web of Science Times Cited 2383] [SCOPUS Times Cited 8942]


[14] C. Liu, Y. Furukawa, "MASC: Multi-scale affinity with sparse convolution for 3D instance segmentation," arXiv preprint arXiv:1902.04478, 2019.

[15] B. Graham, M. Engelcke, L. Van Der Maaten, "3D semantic segmentation with submanifold sparse convolutional networks," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018, pp.9224-9232.
[CrossRef] [Web of Science Times Cited 840] [SCOPUS Times Cited 1058]


[16] Z. Liang, M. Yang, H. Li, C. Wang, "3D instance embedding learning with a structure-aware loss function for point cloud segmentation," IEEE Robotics and Automation Letters. vol.5, no.3, pp.4915-4922, 2020.
[CrossRef] [Web of Science Times Cited 18] [SCOPUS Times Cited 25]


[17] D. Comaniciu, P. Meer, "Mean shift: A robust approach toward feature space analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.24, no.5, pp.603-619, 2002.
[CrossRef] [Web of Science Times Cited 7504] [SCOPUS Times Cited 10019]


[18] L. Jiang, H. Zhao, S. Shi, S. Liu, C. W. Fu, J. Jia, "Pointgroup: Dual-set point grouping for 3D instance segmentation," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020, pp. 4866-4875.
[CrossRef] [Web of Science Times Cited 199] [SCOPUS Times Cited 240]


[19] T. He, C. Shen, A. van den Hengel, "Dynamic Convolution for 3D point cloud instance segmentation," arXiv preprint arXiv:2107.08392, 2021.
[CrossRef]


[20] Y. Guo, H. Wang, Q. Hu, H. Liu, L. Liu, M. Bennamoun, "Deep learning for 3D point clouds: A survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 12, pp. 4338-4364, 2020.
[CrossRef] [Web of Science Times Cited 1002] [SCOPUS Times Cited 966]


[21] Q. H. Pham, T. Nguyen, B. S. Hua, G. Roig, S. K. Yeung, "JSIS3D: Joint semantic-instance segmentation of 3D point clouds with multi-task pointwise networks and multi-value conditional random fields," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019, pp.8827-8836.
[CrossRef] [Web of Science Times Cited 126] [SCOPUS Times Cited 173]


[22] L. Zhao, W. Tao, "JSNet: Joint instance and semantic segmentation of 3D point clouds," Proceedings of the AAAI Conference on Artificial Intelligence. vol. 34, no. 7, pp. 12951-12958, 2020.
[CrossRef]


[23] G. Wu, Z. Pan, P. Jiang, C. Tu, "Bi-Directional attention for joint instance and semantic segmentation in point clouds," Proceedings of the Asian Conference on Computer Vision, 2020, pp. 1-17.
[CrossRef]


[24] F. Chen, F. Wu, G. Gao, Y. Ji, J. Xu, G. Jiang, X. Jing, "JSPNet: Learning joint semantic & instance segmentation of point clouds via feature self-similarity and cross-task probability," Pattern Recognit. vol. 122, no. 108250, 2022.
[CrossRef] [Web of Science Times Cited 18] [SCOPUS Times Cited 19]


[25] C. Chen, L. Z. Fragonara, A. Tsourdos, "GAPNet: Graph attention based point neural network for exploiting local feature of point cloud," arXiv preprint arXiv:1905.08705, 2019.
[CrossRef]


[26] Q. Hu, B. Yang, L. Xie, S. Rosa, Y. Guo, Z. Wang, N. Trigoni, A. Markham, "RandLA-Net: Efficient semantic segmentation of large-scale point clouds," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 11105-11114.
[CrossRef] [SCOPUS Times Cited 1148]


[27] I. Armeni, O. Sener, A. R. Zamir, H. Jiang, I. Brilakis, M. Fischer, S. Savarese, "3D semantic parsing of large-scale indoor spaces," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016, pp. 1534-1543.
[CrossRef] [Web of Science Times Cited 694] [SCOPUS Times Cited 1250]


[28] A. Dai, A. X. Chang, M. Savva, M. Halber, T. Funkhouser, M. Nießner, "ScanNet: Richly-annotated 3D reconstructions of indoor scenes," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017, pp. 2432-2443.
[CrossRef] [Web of Science Times Cited 1706] [SCOPUS Times Cited 1895]


[29] L. Du, J. Tan, X. Xue, L. Chen, "3DCFS: Fast and robust joint 3D semantic-instance segmentation via coupled feature selection," IEEE International Conference on Robotics and Automation. 2020, pp. 6868-6875.
[CrossRef] [Web of Science Times Cited 8] [SCOPUS Times Cited 13]


[30] M. Zhong, G. Zeng, "Joint Semantic-Instance Segmentation of 3D point clouds: Instance separation and semantic fusion," 25th International Conference on Pattern Recognition. 2021, pp. 6616-6623.
[CrossRef] [Web of Science Times Cited 1] [SCOPUS Times Cited 2]




References Weight

Web of Science® Citations for all references: 15,680 TCR
SCOPUS® Citations for all references: 27,320 TCR

Web of Science® Average Citations per reference: 506 ACR
SCOPUS® Average Citations per reference: 881 ACR

TCR = Total Citations for References / ACR = Average Citations per Reference
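
As a hedged sketch of how such figures can be derived, TCR sums the per-reference citation counts and ACR divides that sum by the number of references counted. The Python lines below use a small hypothetical sample of counts; the page's exact inclusion rules are not stated here, so this is not expected to reproduce the totals shown above.

# Hypothetical per-reference citation counts (a small sample, not the full list above).
wos_citations = [17, 31, 149, 124, 110]
tcr = sum(wos_citations)               # Total Citations for References
acr = round(tcr / len(wos_citations))  # Average Citations per Reference
print(tcr, acr)                        # prints 431 86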

In 2010 we introduced - for the first time in scientific publishing - the term "References Weight" as a quantitative indication of the quality ... Read more

Citations for references updated on 2024-07-22 01:55 in 199 seconds.




Note1: Web of Science® is a registered trademark of Clarivate Analytics.
Note2: SCOPUS® is a registered trademark of Elsevier B.V.
Disclaimer: All queries to the respective databases were made by using the DOI record of every reference (where available). Due to technical problems beyond our control, the information is not always accurate. Please use the CrossRef link to visit the respective publisher site.

Copyright ©2001-2024
Faculty of Electrical Engineering and Computer Science
Stefan cel Mare University of Suceava, Romania


All rights reserved: Advances in Electrical and Computer Engineering is a registered trademark of the Stefan cel Mare University of Suceava. No part of this publication may be reproduced, stored in a retrieval system, photocopied, recorded or archived, without the written permission from the Editor. When authors submit their papers for publication, they agree that the copyright for their article be transferred to the Faculty of Electrical Engineering and Computer Science, Stefan cel Mare University of Suceava, Romania, if and only if the articles are accepted for publication. The copyright covers the exclusive rights to reproduce and distribute the article, including reprints and translations.

Permission for other use: The copyright owner's consent does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific written permission must be obtained from the Editor for such copying. Direct linking to files hosted on this website is strictly prohibited.

Disclaimer: Whilst every effort is made by the publishers and editorial board to see that no inaccurate or misleading data, opinions or statements appear in this journal, they wish to make it clear that all information and opinions formulated in the articles, as well as linguistic accuracy, are the sole responsibility of the author.



