Fund: supported by the National Natural Science Foundation of China (62271255, 61871218), the Fundamental Research Funds for the Central Universities (3082019NC2019002), the Aeronautical Science Foundation (ASFC-201920007002), and the Program of Remote Sensing Intelligent Monitoring and Emergency Services for Regional Security Elements.
Abstract: To extract richer feature information of ship targets from sea clutter and to address the high-dimensional data problem, a method termed multi-scale fusion kernel sparse preserving projection (MSFKSPP), based on the maximum margin criterion (MMC), is proposed for recognizing the class of ship targets from their high-resolution range profiles (HRRPs). Multi-scale fusion is introduced to capture the local, detailed information in small-scale features and the global, contour information in large-scale features, which helps extract edge information from sea clutter and further improves target recognition accuracy. The proposed method maximally preserves the multi-scale fusion sparsity of the data and maximizes class separability in the reduced-dimensional space via the reproducing kernel Hilbert space. Experimental results on measured radar data show that the proposed method effectively extracts ship-target features from sea clutter, further reduces the feature dimensionality, and improves target recognition performance.
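The abstract above combines two classical ingredients: a sparsity-preserving scatter built from sparse reconstruction weights, and an MMC-style (between-class minus within-class) scatter. The following is a minimal linear sketch of that combination on toy data; it is NOT the authors' MSFKSPP (multi-scale fusion and the kernel mapping are omitted), and all variable names and regularization values are illustrative assumptions.

```python
# Simplified sketch: sparsity-preserving projection with an MMC-style
# class-scatter term. NOT the authors' MSFKSPP -- multi-scale fusion
# and the reproducing-kernel mapping are omitted; values are toy data.
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
# Toy 2-class data: 20 samples of 10-dim "range profiles"
X = np.vstack([rng.normal(0, 1, (10, 10)),
               rng.normal(3, 1, (10, 10))])
y = np.array([0] * 10 + [1] * 10)
n, d = X.shape

# 1) Sparse reconstruction weights: code each sample over the others
S = np.zeros((n, n))
for i in range(n):
    idx = [j for j in range(n) if j != i]
    lasso = Lasso(alpha=0.05, max_iter=5000)
    lasso.fit(X[idx].T, X[i])           # columns = the other samples
    S[i, idx] = lasso.coef_

# 2) Sparsity-preserving scatter: X^T (I - S)^T (I - S) X
M = np.eye(n) - S - S.T + S.T @ S
Sp = X.T @ M @ X

# 3) MMC scatters: between-class (Sb) and within-class (Sw)
mu = X.mean(axis=0)
Sb = np.zeros((d, d)); Sw = np.zeros((d, d))
for c in (0, 1):
    Xc = X[y == c]; mc = Xc.mean(axis=0)
    Sb += len(Xc) * np.outer(mc - mu, mc - mu)
    Sw += (Xc - mc).T @ (Xc - mc)

# 4) Maximize separability while preserving the sparse structure:
#    generalized eigenproblem, keep the 2 leading directions
vals, vecs = eigh(Sb - Sw, Sp + 1e-6 * np.eye(d))
W = vecs[:, -2:]                        # projection matrix, d x 2
Z = X @ W                               # reduced 2-D features
print(Z.shape)                          # (20, 2)
```

The generalized eigenproblem trades off the two objectives in one solve; the small diagonal ridge on `Sp` keeps it positive definite.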
Abstract: To address the degraded vehicle-tracking accuracy in occlusion scenarios, a Convolutional Kernel Optimization for Occluded Vehicle Tracking (CKO-OVT) algorithm is proposed. CKO-OVT adaptively selects, via a convolutional kernel optimization strategy, the convolution operators most sensitive to the vehicle target for feature extraction; a discriminative Siamese network then evaluates the tracking result and re-localizes the target when tracking fails, further improving robustness and accuracy. For the experiments, an Occluded Vehicle Tracking (OVT) dataset was built, and CKO-OVT was compared on the Object Tracking Benchmark (OTB) dataset, the public TColor-128 dataset, and the self-built OVT dataset against nine mainstream algorithms: Efficient Convolution Operators for Tracking (ECO), its lightweight version using HOG and CN features (ECOHC), the Kernelized Correlation Filters tracker (KCF), the Discriminative Scale Space Tracker (DSST), the Circulant Structure Kernel tracker (CSK), Hierarchical Convolutional Features for Visual Tracking (HCFT), Robust Visual Tracking via Hierarchical Convolutional Features (HCFTstar), Fully-Convolutional Siamese Networks for Object Tracking (SiameseFC), and Distractor-Aware Siamese Networks for Object Tracking (DaSiam). The results show that CKO-OVT improves distance precision by 2.2% and overlap success by 1.8% on OTB, by 0.4% and 0.9% on TColor-128, and by 1.7% and 1.2% on OVT. Through adaptive kernel selection and the discriminative Siamese network, CKO-OVT significantly improves the robustness and accuracy of vehicle tracking under occlusion, outperforming mainstream trackers in distance precision and overlap success and providing an effective solution for vehicle tracking in intelligent transportation and autonomous driving.
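One way to picture the kernel-selection idea from the abstract above is to rank feature channels (one per convolution kernel) by how strongly they respond inside the target box relative to the background, and keep only the top-k. The sketch below is a simplified illustration under that assumption, not the paper's exact selection criterion; all shapes and the scoring ratio are hypothetical.

```python
# Illustrative sketch of kernel/channel selection: score each
# convolutional feature channel by target-vs-background response and
# keep the top-k. A simplification of the CKO-OVT idea as summarized
# in the abstract, not the paper's exact criterion.
import numpy as np

rng = np.random.default_rng(1)
C, H, W = 16, 32, 32
feats = rng.random((C, H, W))           # C feature maps (one per kernel)
x0, y0, x1, y1 = 10, 10, 22, 22         # target bounding box

mask = np.zeros((H, W), dtype=bool)
mask[y0:y1, x0:x1] = True

def sensitivity(fmap, mask):
    """Mean response inside the target box over mean background response."""
    return fmap[mask].mean() / (fmap[~mask].mean() + 1e-8)

scores = np.array([sensitivity(f, mask) for f in feats])
top_k = 4
selected = np.argsort(scores)[::-1][:top_k]   # indices of the best channels
print(sorted(selected.tolist()))
```

Restricting feature extraction to the selected channels keeps the descriptors that discriminate the (possibly occluded) vehicle from its surroundings.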
Abstract: To address the difficulty of acquiring hoisting-operation datasets and of monitoring the key objects in hoisting operations (the load and the hook), a virtual-real combined approach to dataset construction is proposed: virtual hoisting scenes are built in SketchUp to generate virtual hoisting images, while real hoisting images and screenshots of on-site operation videos are collected from the web, and the real-scene and virtual-scene images together form a virtual-real combined dataset. Arbitrary Kernel Convolution (AKConv) and Concentrated-Comprehensive Convolution with GhostBottleneck (C3Ghost) are introduced to improve the object detection model YOLOv5 (You Only Look Once version 5); the improved model exceeds the original by 2.6 percentage points in precision and by 9.1 frames/s in inference speed, while its storage footprint shrinks by 1.9 MB. A visual operation interface is built and integrated with the optimized model into a real-time hoisting-operation monitoring system that recognizes the safety states of the load and the hook, issues risk warnings, and enables timely risk control.
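The dataset-construction step described above amounts to pooling virtual renders and real images into one labeled collection and splitting it for training. A minimal sketch of that bookkeeping, with entirely hypothetical file names and split ratio (no actual images are read):

```python
# Minimal sketch of assembling a mixed virtual/real dataset and
# splitting it for training, per the virtual-real combination idea in
# the abstract. File names and the 80/20 split are hypothetical.
import random

virtual = [f"virtual/scene_{i:03d}.jpg" for i in range(60)]   # SketchUp renders
real = [f"real/site_{i:03d}.jpg" for i in range(40)]          # web/on-site images

# Tag each sample with its source so the mix can be audited later
samples = [(p, "virtual") for p in virtual] + [(p, "real") for p in real]
random.seed(42)
random.shuffle(samples)

split = int(0.8 * len(samples))
train, val = samples[:split], samples[split:]
print(len(train), len(val))             # 80 20
```

Keeping the source tag on every sample makes it easy to check that both domains appear in the training and validation splits.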
Abstract: To address the insufficient specialization of existing deep learning network architectures for recognizing dim small infrared targets, an improved Yolov8-based recognition algorithm for dim small infrared targets (Yolov8n based on UniRepLK Block and Triplet Attention, UT-Yolov8) is proposed. The algorithm introduces a triplet attention mechanism into the detection heads at the output of the feature fusion network, adds a new small-target detection layer and detection head inside the feature fusion network, and incorporates large-kernel convolution into the spatial pooling pyramid of the feature extraction network, tailoring the improvements to the imaging characteristics of dim small infrared targets. Validated on real infrared image data, UT-Yolov8 effectively improves the network's recognition accuracy for dim small infrared targets while maintaining a high detection speed, reaching a mean average precision mAP@0.5 of 95.9%.
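The mAP@0.5 figure quoted above rests on a simple test: a detection counts as correct when its box overlaps the ground truth with intersection-over-union (IoU) of at least 0.5. A small self-contained sketch of that IoU check, with boxes as (x0, y0, x1, y1) tuples and made-up coordinates:

```python
# IoU test underlying the mAP@0.5 metric: a prediction matches the
# ground truth only if their intersection-over-union is >= 0.5.
# Boxes are (x0, y0, x1, y1); example coordinates are made up.
def iou(a, b):
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

gt = (10, 10, 20, 20)
pred = (12, 12, 22, 22)
print(round(iou(gt, pred), 3))          # 0.471 -> misses the 0.5 threshold
```

For dim small targets this threshold is punishing: a few pixels of offset on a tiny box can drop IoU below 0.5, which is why small-target detection layers matter.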