Abstract
To address the low efficiency, inadequate real-time performance, and poor boundary segmentation accuracy of existing large-scale point cloud semantic segmentation methods, an efficient sparse feature aggregation method for 3D point cloud semantic segmentation is proposed. The method represents the input point cloud with a conical grid and uses an efficient sparse feature aggregation module to learn contextual semantic features, which reduces the computational cost and improves the memory efficiency of feature extraction. In addition, a boundary loss function based on the uniformity of semantic labels within local neighborhoods is designed to resolve blurred object boundaries. Experiments show that the method reaches a mean intersection over union (mIoU) of 66.9% on SemanticKITTI and 74.1% on nuScenes, outperforming the VCL algorithm by 3.3 and 3.6 percentage points, respectively. Its inference speed on the SemanticKITTI validation set is 19.2 Hz, well above the dataset's point cloud acquisition frequency of 10 Hz, thus meeting real-time requirements. The proposed method extracts sparse semantic features efficiently and segments object boundaries accurately.
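For readers who want a concrete picture of the boundary term described above, the abstract states that the boundary loss is built on whether semantic labels within a local neighborhood are uniform. A minimal NumPy/SciPy sketch of that idea follows; the function name boundary_mask, the neighbour count k, and the loss weighting mentioned afterwards are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.spatial import cKDTree

def boundary_mask(points, labels, k=8):
    """Flag points whose k-nearest-neighbour labels are not all identical.

    Illustrative sketch only (not the paper's implementation): a point whose
    neighbourhood contains more than one semantic label is treated as lying
    near an object boundary, which a boundary-aware loss could up-weight.

    points: (N, 3) array of xyz coordinates
    labels: (N,) array of ground-truth class IDs
    returns: boolean (N,) mask, True for candidate boundary points
    """
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)   # k+1 because the nearest hit is the point itself
    neigh_labels = labels[idx[:, 1:]]      # drop the self-match, shape (N, k)
    # Interior points have a uniform neighbourhood (all labels agree);
    # any disagreement marks the point as a boundary candidate.
    return np.any(neigh_labels != labels[:, None], axis=1)

A boundary-aware objective could then, for example, scale each point's cross-entropy term by 1 + alpha * mask so that points near object boundaries contribute more during training; the paper's actual loss formulation and hyperparameters are not reproduced here.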
Authors
HU Likun; WANG Xiaoyong; HUANG Runhui (School of Electrical Engineering, Guangxi University, Nanning 530004, China)
Source
Journal of Guangxi University (Natural Science Edition)
PKU Core Journal (北大核心)
2025, No. 3, pp. 558-569 (12 pages)
Funding
National Natural Science Foundation of China (61863002)
Guangxi Key Research and Development Program (Guike AB21220039)
Keywords
sparse feature aggregation
boundary loss
semantic segmentation
point cloud
About the author
Corresponding author: HU Likun (b. 1977), male, from Xiangyang, Hubei; professor, Ph.D. E-mail: hlk3email@163.com