Funding: supported by the National Natural Science Foundation of China (60673097, 60702062), the National High Technology Research and Development Program of China (863 Program) (2008AA01Z125, 2007AA12Z136), the National Research Foundation for the Doctoral Program of Higher Education of China (20060701007), and the Program for Cheung Kong Scholars and Innovative Research Team in University (IRT 0645).
Abstract: To effectively preserve the sharp features and details of a synthetic aperture radar (SAR) image during despeckling, a despeckling algorithm with edge detection in the nonsubsampled second generation bandelet transform (NSBT) domain is proposed. First, the Canny operator is used to detect and remove the edges of the SAR image. Then the NSBT, which offers an optimal approximation to image edges, is combined with a hard thresholding rule to approximate the details while despeckling the edge-removed image. Finally, the removed edges are added back to the reconstructed image. Because the edges are detected and protected and the NSBT is used, the proposed algorithm achieves state-of-the-art results, despeckling the image while preserving its edges and details. Experimental results show that both the subjective visual quality and the main objective performance indexes of the proposed algorithm outperform those of Bayesian wavelet shrinkage with edge detection and Bayesian least squares-Gaussian scale mixture (BLS-GSM).
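The abstract describes a three-stage pipeline: detect and set aside edges, despeckle the edge-removed image by hard thresholding in a multiscale transform domain, then restore the edges. The following is a minimal sketch of that pipeline, not the paper's implementation: it uses OpenCV's Canny detector and a wavelet hard threshold (PyWavelets) as a stand-in for the NSBT, which has no off-the-shelf implementation, and the Canny thresholds, the median-filter edge removal, and the threshold scale k are illustrative assumptions.

```python
# Sketch of the edge-protected despeckling pipeline (assumptions noted above).
import cv2
import numpy as np
import pywt

def despeckle_with_edge_protection(sar, canny_lo=50, canny_hi=150, k=3.0):
    # Assumes an 8-bit-range amplitude image; values are illustrative.
    sar = sar.astype(np.float32)

    # 1) Detect edges and set the edge pixels aside.
    edge_mask = cv2.Canny(sar.astype(np.uint8), canny_lo, canny_hi) > 0
    edges = np.where(edge_mask, sar, 0.0)
    # Replace edge pixels with a local median to obtain the edge-removed image.
    edge_removed = np.where(edge_mask, cv2.medianBlur(sar, 3), sar)

    # 2) Despeckle the edge-removed image with hard thresholding in a
    #    multiscale transform domain (wavelet here, NSBT in the paper).
    coeffs = pywt.wavedec2(edge_removed, 'db4', level=3)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # MAD noise estimate
    thr = k * sigma
    coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode='hard') for c in detail)
        for detail in coeffs[1:]
    ]
    restored = pywt.waverec2(coeffs, 'db4')[:sar.shape[0], :sar.shape[1]]

    # 3) Put the protected edge pixels back into the reconstructed image.
    return np.where(edge_mask, edges, restored)
```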
Funding: supported by the Natural Science Foundation of China, Grant No. 62103052.
Abstract: Drone swarm systems, equipped with photoelectric imaging and intelligent target perception, are essential for reconnaissance and strike missions in complex and high-risk environments. They excel in information sharing, anti-jamming capability, and combat performance, making them critical for future warfare. However, the varied perspectives of collaborative combat scenarios pose challenges to object detection, hindering traditional detection algorithms and reducing accuracy. Limited angle-prior data and sparse samples further complicate detection. This paper presents the Multi-View Collaborative Detection System, which tackles the challenges of multi-view object detection in collaborative combat scenarios. The system is designed to enhance multi-view image generation and detection algorithms, thereby improving the accuracy and efficiency of object detection across varying perspectives. First, an observation model for three-dimensional targets based on line-of-sight angle transformation is constructed, and a multi-view image generation algorithm based on the Pix2Pix network is designed. For object detection, YOLOX is adopted, and a deep feature extraction network, BA-RepCSPDarknet, is developed to address the challenges of small target scale and feature extraction. Additionally, a feature fusion network, NS-PAFPN, is developed to mitigate the loss of deep feature map information in UAV images. A visual attention module (BAM) handles appearance differences across viewing angles, while a feature mapping module (DFM) prevents fine-grained feature loss. These advancements lead to BA-YOLOX, a multi-view object detection network suitable for drone platforms that enhances accuracy and effectively detects small objects.
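The system described above couples a view-generation stage with a detection stage: a Pix2Pix-style generator synthesizes the target's appearance at a new line-of-sight angle, and a detector then runs on the generated view. Below is a minimal PyTorch sketch of that two-stage flow; the ViewGenerator is a toy stand-in for the Pix2Pix-based generator, the dummy detection head stands in for BA-YOLOX (whose BA-RepCSPDarknet, NS-PAFPN, BAM, and DFM modules are not public), and the (azimuth, elevation) conditioning is an illustrative assumption since the paper's angle encoding is not given.

```python
# Sketch of the generate-then-detect flow (stand-in modules, see lead-in).
import torch
import torch.nn as nn

class ViewGenerator(nn.Module):
    """Toy stand-in for the Pix2Pix-based multi-view image generator."""
    def __init__(self):
        super().__init__()
        # Condition on the source image plus a 2-channel map that encodes
        # the desired line-of-sight (azimuth, elevation) angles.
        self.net = nn.Sequential(
            nn.Conv2d(3 + 2, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, img, azimuth, elevation):
        b, _, h, w = img.shape
        angle_map = torch.stack(
            [torch.full((h, w), azimuth), torch.full((h, w), elevation)]
        ).expand(b, -1, -1, -1)
        return self.net(torch.cat([img, angle_map], dim=1))

def detect_from_new_view(detector, generator, img, azimuth, elevation):
    """Synthesize the requested view, then run the detector on it."""
    with torch.no_grad():
        synthetic_view = generator(img, azimuth, elevation)
        return detector(synthetic_view)

# Usage with a placeholder detection head (the paper uses BA-YOLOX here):
generator = ViewGenerator()
detector = nn.Conv2d(3, 85, 1)   # dummy per-pixel head: 80 classes + box + obj
preds = detect_from_new_view(detector, generator,
                             torch.rand(1, 3, 256, 256),
                             azimuth=0.5, elevation=0.1)
```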