Point cloud compression is critical for deploying 3D representations of the physical world in applications such as 3D immersive telepresence, autonomous driving, and cultural heritage preservation. However, point cloud data are distributed irregularly and discontinuously in the spatial and temporal domains, where redundant unoccupied voxels and weak correlations in 3D space make efficient compression a challenging problem. In this paper, we propose a spatio-temporal context-guided algorithm for lossless point cloud geometry compression. The proposed scheme starts by dividing the point cloud into sliced layers of unit thickness along the longest axis. It then introduces a prediction method, applicable where both intra-frame and inter-frame point clouds are available, that determines correspondences between adjacent layers and estimates the shortest traversal path with a travelling salesman algorithm. Finally, the small prediction residuals are efficiently compressed with optimal context-guided and adaptive fast-mode arithmetic coding techniques. Experiments show that the proposed method achieves low-bit-rate lossless compression of point cloud geometry and is suitable for 3D point clouds from various types of scenes.
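To make the layer-wise prediction concrete, here is a minimal NumPy sketch of the slicing and inter-layer residual computation described above. It is an illustrative reconstruction, not the authors' implementation: the function names are ours, a brute-force nearest-neighbour match stands in for the travelling-salesman shortest-path ordering, and the context-guided arithmetic coder is not reproduced.

```python
import numpy as np

def slice_layers(points):
    """Split an (N, 3) integer point cloud into unit-thickness layers
    along the axis with the largest extent, as in the scheme above."""
    extents = points.max(axis=0) - points.min(axis=0)
    axis = int(np.argmax(extents))
    keys = points[:, axis]
    return axis, {int(k): points[keys == k] for k in np.unique(keys)}

def layer_residuals(prev_layer, curr_layer, axis):
    """Predict each point of the current layer from its nearest neighbour
    in the previous layer (projected onto the slicing plane) and return
    the residuals that an entropy coder would then compress."""
    plane = [i for i in range(3) if i != axis]
    prev2d = prev_layer[:, plane].astype(float)
    curr2d = curr_layer[:, plane].astype(float)
    # Brute-force nearest neighbour; a k-d tree or the paper's TSP-based
    # shortest-path ordering would replace this at scale.
    dists = np.linalg.norm(curr2d[:, None, :] - prev2d[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    return curr2d - prev2d[nearest]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.integers(0, 64, size=(2000, 3))   # toy voxelised point cloud
    axis, layers = slice_layers(pts)
    keys = sorted(layers)
    for a, b in zip(keys, keys[1:]):
        res = layer_residuals(layers[a], layers[b], axis)
        print(f"layer {b:2d}: mean |residual| = {np.abs(res).mean():5.2f}")
```

The printout illustrates the key property the scheme relies on: residuals between adjacent layers stay small, which is what makes them cheap to entropy-code.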
Light detection and ranging (LiDAR) has contributed immensely to forest mapping and 3D tree modelling. From the data-acquisition perspective, integrating LiDAR data from different platforms enriches forest information at the tree and plot levels. This research develops a general framework that integrates ground-based and UAV LiDAR (ULS) data to better estimate tree parameters based on quantitative structure modelling (QSM). This is accomplished in three sequential steps. First, the ground-based and ULS LiDAR data are co-registered based on the local density peaks of the clustered canopy. Next, redundancy and noise are removed to fuse the ground-based and ULS LiDAR data. Finally, tree modelling and biophysical parameter retrieval are performed with QSM. Experiments were conducted on backpack-, handheld-, and UAV-based multi-platform mobile LiDAR data of a subtropical forest containing poplar and dawn redwood species. Overall, ground-based/ULS LiDAR data fusion outperforms ground-based LiDAR alone for tree parameter estimation when compared against field data. The fusion-derived tree height, tree volume, and crown volume improved by up to 9.01%, 5.28%, and 18.61% in terms of rRMSE, respectively. By contrast, diameter at breast height (DBH) benefits least from fusion, with rRMSE remaining approximately the same, because stems are already well sampled by the ground-based data. The improvement from fusion is most pronounced in dense forests; in low-stand-density forests, ground-based LiDAR alone can estimate tree parameters adequately, and the gain from fusion is not significant.

Funding: supported by the National Natural Science Foundation of China (Project No. 42171361), the Research Grants Council of the Hong Kong Special Administrative Region, China, under Project PolyU 25211819, and the Hong Kong Polytechnic University under Projects 1-ZE8E and 1-ZVN6.
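The contrast between height and DBH in these results can be illustrated with a small sketch. The paper's retrieval is QSM-based; the NumPy example below instead uses two textbook estimators (vertical extent for height, a Kasa least-squares circle fit on a breast-height stem slice for DBH) on a synthetic stem, purely to show which part of the cloud each parameter depends on: height needs the crown top that ULS samples well, while DBH needs only the stem slice that ground data already cover. All names and the synthetic data are ours.

```python
import numpy as np

def tree_height(points):
    """Height as the vertical extent of a ground-normalised (N, 3) cloud."""
    return points[:, 2].max() - points[:, 2].min()

def dbh_kasa(points, breast_height=1.3, slice_thickness=0.1):
    """Fit a circle to the stem slice at breast height (Kasa least squares)
    and return its diameter."""
    z0 = points[:, 2].min() + breast_height
    m = np.abs(points[:, 2] - z0) < slice_thickness / 2
    x, y = points[m, 0], points[m, 1]
    # Circle (x-cx)^2 + (y-cy)^2 = r^2 rearranged into a linear system:
    # 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) = x^2 + y^2
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return 2 * np.sqrt(c + cx**2 + cy**2)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic stem: 15 cm radius cylinder, 10 m tall, 5 mm lateral noise.
    z = rng.uniform(0, 10, 4000)
    theta = rng.uniform(0, 2 * np.pi, 4000)
    stem = np.column_stack([0.15 * np.cos(theta), 0.15 * np.sin(theta), z])
    stem[:, :2] += rng.normal(0, 0.005, (4000, 2))
    print(f"height = {tree_height(stem):.2f} m, DBH = {dbh_kasa(stem):.3f} m")
```

Note how the height estimate degrades if the upper part of the cloud is missing (as it often is for ground-based scans under a dense canopy), whereas the DBH fit only uses points near 1.3 m and is unaffected.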
In autonomous driving perception systems, vision sensors and LiDAR are the key sources of information. In current 3D object detection tasks, however, most point-cloud-only networks outperform networks that fuse images with LiDAR point clouds. Existing studies attribute this to the viewpoint misalignment between image and LiDAR data and to the difficulty of matching heterogeneous features, so that single-stage fusion algorithms cannot fully fuse the two modalities. To address this, this paper proposes a new multi-level, multi-modal fusion method for 3D object detection. First, in the early-fusion stage, the points inside the frustum formed by each 2D detection box are painted with locally ordered RGB (Red Green Blue) colour encodings. The encoded point cloud is then fed into a channel-expanded PointPillars detection network augmented with self-attention-based context awareness. In the late-fusion stage, the 2D and 3D candidate boxes are encoded into two sets of sparse tensors before non-maximum suppression, and a camera-LiDAR object-candidate fusion network produces the final 3D detection results. Experiments on the KITTI dataset show that the proposed fusion method yields a significant performance gain over the point-cloud-only baseline, improving mean mAP by 6.24%.
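As an illustration of the early-fusion (point-painting) stage, the sketch below attaches RGB values to the LiDAR points that project inside a 2D detection box, using a KITTI-style 3x4 projection matrix. This is a hedged reconstruction: the function and variable names are ours, the "locally ordered" colour encoding is simplified to a direct per-pixel RGB lookup, and the channel-expanded PointPillars detector and the late candidate-fusion network are not reproduced.

```python
import numpy as np

def paint_points(points, image, boxes, P):
    """Append per-point RGB for points projecting into any 2D box.

    points : (N, 3) LiDAR xyz
    image  : (H, W, 3) RGB image
    boxes  : list of (x1, y1, x2, y2) 2D detections in pixels
    P      : (3, 4) LiDAR-to-image projection matrix
    Returns an (N, 6) painted cloud; points outside every box get RGB = 0.
    """
    homo = np.hstack([points, np.ones((len(points), 1))])
    uvw = homo @ P.T
    uv = uvw[:, :2] / uvw[:, 2:3]              # perspective divide
    rgb = np.zeros((len(points), 3))
    h, w = image.shape[:2]
    for x1, y1, x2, y2 in boxes:
        inside = ((uv[:, 0] >= x1) & (uv[:, 0] < x2) &
                  (uv[:, 1] >= y1) & (uv[:, 1] < y2) &
                  (uvw[:, 2] > 0))             # in front of the camera
        u = np.clip(uv[inside, 0].astype(int), 0, w - 1)
        v = np.clip(uv[inside, 1].astype(int), 0, h - 1)
        rgb[inside] = image[v, u] / 255.0
    return np.hstack([points, rgb])

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # LiDAR frame: x forward, y left, z up; points a few metres ahead.
    pts = np.column_stack([rng.uniform(4, 6, 200),
                           rng.uniform(-1, 1, 200),
                           rng.uniform(-1, 1, 200)])
    img = rng.integers(0, 256, (375, 1242, 3), dtype=np.uint8)
    R = np.array([[0., -1., 0.], [0., 0., -1.], [1., 0., 0.]])  # LiDAR -> camera axes
    K = np.array([[700., 0., 620.], [0., 700., 187.], [0., 0., 1.]])
    P = K @ np.hstack([R, np.zeros((3, 1))])
    painted = paint_points(pts, img, [(500, 100, 740, 280)], P)
    print("points painted:", int((painted[:, 3:].sum(axis=1) > 0).sum()))
```

The painted 6-channel points are what a channel-expanded pillar-based detector would consume in place of raw xyz input.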