In order to extract richer feature information of ship targets from sea clutter and to address the high-dimensional data problem, a method termed multi-scale fusion kernel sparse preserving projection (MSFKSPP) based on the maximum margin criterion (MMC) is proposed for recognizing the class of ship targets using the high-resolution range profile (HRRP). Multi-scale fusion is introduced to capture local and detailed information in small-scale features and global contour information in large-scale features, which helps extract edge information from sea clutter and further improves target recognition accuracy. The proposed method maximally preserves the multi-scale fusion sparse structure of the data and maximizes class separability in the reduced-dimensional space through the reproducing kernel Hilbert space. Experimental results on measured radar data show that the proposed method can effectively extract the features of ship targets from sea clutter, further reduce the feature dimensionality, and improve target recognition performance.
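As a rough illustration of the multi-scale fusion idea described above, the sketch below smooths one HRRP at several window widths so that small-scale features keep local detail and large-scale features keep the global contour, then concatenates them into a single fused feature vector. The function name, scale set, and box-filter smoothing are illustrative assumptions rather than the authors' implementation; the subsequent kernel sparse preserving projection and MMC step are not shown.

```python
# Hypothetical sketch of the multi-scale fusion step: one HRRP vector is
# smoothed at several window widths and the results are concatenated.
import numpy as np

def multi_scale_fusion(hrrp: np.ndarray, scales=(1, 4, 16)) -> np.ndarray:
    """Concatenate moving-average smoothings of one HRRP at several scales."""
    fused = []
    for w in scales:
        kernel = np.ones(w) / w                       # box filter of width w
        smoothed = np.convolve(hrrp, kernel, mode="same")
        fused.append(smoothed)
    return np.concatenate(fused)                      # multi-scale fused feature

# Example: a 256-cell range profile becomes a 768-dimensional fused feature,
# which would then be projected by the kernel sparse preserving projection.
x = np.abs(np.random.randn(256))
print(multi_scale_fusion(x).shape)                    # (768,)
```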
In this study, an underwater image enhancement method based on a multi-scale adversarial network was proposed to solve the problems of detail blur and color distortion in underwater images. Firstly, the local features of each layer were enhanced into global features by the proposed residual dense block, which ensured that the generated images retain more details. Secondly, a multi-scale structure was adopted to extract multi-scale semantic features of the original images. Finally, the features obtained from the dual channels were fused by an adaptive fusion module to further optimize the features. The discriminator network adopted the structure of the Markov discriminator. In addition, by constructing mean squared error, structural similarity, and perceived color loss functions, the generated image was kept consistent with the reference image in structure, color, and content. The experimental results showed that the proposed algorithm achieved a good deblurring effect on underwater images and effectively alleviated the underwater color bias problem. In both subjective and objective evaluation indexes, the results of the proposed algorithm are better than those of the comparison algorithms.
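The abstract names three loss terms (mean squared error, structural similarity, and perceived color) without giving their exact formulations, so the composition below is only a minimal, assumed sketch: a simplified global SSIM and a per-channel mean-color term stand in for the paper's versions, and the weights are placeholders.

```python
# Minimal sketch of a composite loss combining MSE, a simplified global SSIM,
# and a per-channel mean-color term; assumes images shaped (N, C, H, W).
import torch

def enhancement_loss(generated: torch.Tensor, reference: torch.Tensor,
                     w_mse=1.0, w_ssim=1.0, w_color=1.0) -> torch.Tensor:
    # Mean squared error keeps the generated image close to the reference.
    mse = torch.mean((generated - reference) ** 2)

    # Simplified global SSIM (single window covering the whole batch).
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    mu_g, mu_r = generated.mean(), reference.mean()
    var_g, var_r = generated.var(), reference.var()
    cov = ((generated - mu_g) * (reference - mu_r)).mean()
    ssim = (((2 * mu_g * mu_r + c1) * (2 * cov + c2)) /
            ((mu_g ** 2 + mu_r ** 2 + c1) * (var_g + var_r + c2)))

    # Perceived color term: match the per-channel mean color.
    color = torch.mean(torch.abs(generated.mean(dim=(2, 3)) -
                                 reference.mean(dim=(2, 3))))

    return w_mse * mse + w_ssim * (1.0 - ssim) + w_color * color
```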
To address the problem that vision algorithms are easily disturbed by target overlap and occlusion when detecting dense small targets in aerial images, a Transformer detection head based on high-order spatial feature extraction, HSF-TPH (Transformer prediction head with high-order spatial feature extraction), is proposed, where high-order spatial features are high-level representations of information such as target shape and position. In the proposed detection head, the second-order interaction in the self-attention mechanism is extended to third order to generate high-order spatial features, extracting more discriminative spatial relations and highlighting the spatial semantic information of each small target. Meanwhile, to alleviate the compression of small-target information caused by excessive downsampling in the backbone network, a high-resolution feature map generation mechanism is designed to increase the input feature resolution of the head network and improve the performance of HSF-TPH in detecting dense small targets. A new loss function, USIoU, is also designed to reduce the algorithm's sensitivity to positional deviation. Experiments on the VisDrone2019 dataset show that the proposed algorithm improves mAP50 by more than 10 percentage points on the detection of human targets, the category with the smallest area and highest density.
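To make the "second-order to third-order" extension concrete, the hypothetical module below adds one extra gating branch on top of the usual pairwise query-key interaction; all projection names and the gating form are assumptions for illustration, not the HSF-TPH design itself.

```python
# Illustrative sketch: a pairwise (second-order) attention score is modulated
# by one more learned branch, giving a third-order spatial interaction.
import torch
import torch.nn as nn

class ThirdOrderSpatialInteraction(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.proj_q = nn.Linear(dim, dim)
        self.proj_k = nn.Linear(dim, dim)
        self.proj_g = nn.Linear(dim, dim)   # extra branch for the third order
        self.proj_v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) flattened spatial features
        q, k, g, v = self.proj_q(x), self.proj_k(x), self.proj_g(x), self.proj_v(x)
        pairwise = q @ k.transpose(-2, -1) * self.scale          # second-order term
        gating = torch.sigmoid(g @ k.transpose(-2, -1) * self.scale)
        attn = torch.softmax(pairwise * gating, dim=-1)          # third-order term
        return attn @ v
```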
To address the low resolution, limited feature information, and low recognition accuracy of infrared small-target images, an infrared small-target detection model with embedded spatial location information and multi-view feature extraction (ESLIMFE) is proposed. First, since the feature map resolution decreases as the network deepens and detail information is lost, a spatial location information fusion (SLIF) attention mechanism is embedded in the backbone network to compensate for the feature information of small targets. Second, a multi-view feature extraction (MVFE) module is proposed by combining the C3 module with dynamic snake convolution, enhancing the feature representation of small targets by extracting the same feature from different views. A large selection kernel (LSK) module is adopted, which extracts multi-scale information of small targets with convolution kernels of different sizes to improve the localization of infrared small targets. Finally, an attention-based intrascale feature interaction (AIFI) module is introduced to enhance the interaction between features. Experiments on an infrared small-target dataset of aerial targets show that the model achieves a detection accuracy of 90.5% at mAP75 and 74.5% at mAP50-95, demonstrating that the proposed model can accurately detect infrared small targets.
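The idea of extracting multi-scale context with convolution kernels of different sizes can be sketched roughly as below: two parallel convolutions with different receptive fields are fused by a learned per-pixel selection weight. This is an illustrative simplification under assumed kernel sizes, not the LSK, SLIF, MVFE, or AIFI modules used in the paper.

```python
# Simplified selective-kernel sketch: small and large receptive fields are
# combined with per-pixel weights predicted from their concatenation.
import torch
import torch.nn as nn

class SimpleSelectiveKernel(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.small = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.large = nn.Conv2d(channels, channels, kernel_size=7, padding=3)
        # 1x1 convolution producing a per-pixel selection weight for each branch
        self.select = nn.Conv2d(2 * channels, 2, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s, l = self.small(x), self.large(x)
        weights = torch.softmax(self.select(torch.cat([s, l], dim=1)), dim=1)
        return weights[:, 0:1] * s + weights[:, 1:2] * l
```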
Funding: supported by the National Natural Science Foundation of China (62271255, 61871218), the Fundamental Research Funds for the Central Universities (3082019NC2019002), the Aeronautical Science Foundation (ASFC-201920007002), and the Program of Remote Sensing Intelligent Monitoring and Emergency Services for Regional Security Elements.