Funding: Supported by the Henan Province Key Research and Development Project (231111211300), the Central Government of Henan Province Guides Local Science and Technology Development Funds (Z20231811005), the Henan Province Key Research and Development Project (231111110100), the Henan Provincial Outstanding Foreign Scientist Studio (GZS2024006), and the Henan Provincial Joint Fund for Scientific and Technological Research and Development Plan (Application and Overcoming Technical Barriers) (242103810028).
Abstract: The fusion of infrared and visible images should emphasize the salient targets in the infrared image while preserving the textural details of the visible image. To meet these requirements, an autoencoder-based method for infrared and visible image fusion is proposed. The encoder, designed according to the optimization objective, consists of a base encoder and a detail encoder, which extract low-frequency and high-frequency information from the image, respectively. Because this extraction may leave some information uncaptured, a compensation encoder is proposed to supplement the missing information. Multi-scale decomposition is also employed to extract image features more comprehensively. The decoder combines the low-frequency, high-frequency, and supplementary information to obtain multi-scale features. Subsequently, an attention strategy and a fusion module are introduced to perform multi-scale fusion for image reconstruction. Experimental results on three datasets show that the fused images generated by this network effectively retain salient targets while being more consistent with human visual perception.
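A minimal sketch of the three-branch encoder/decoder idea described above is given below. The layer widths, kernel sizes, and plain concatenation-based decoding are illustrative assumptions, not the authors' architecture.

```python
# Minimal PyTorch sketch of a base/detail/compensation encoder and a decoder
# that fuses their outputs; channel counts are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class ThreeBranchAutoencoder(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.base = conv_block(1, ch)     # low-frequency branch
        self.detail = conv_block(1, ch)   # high-frequency branch
        self.comp = conv_block(1, ch)     # compensation branch for missed information
        self.decoder = nn.Sequential(conv_block(3 * ch, ch), nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, x):
        feats = torch.cat([self.base(x), self.detail(x), self.comp(x)], dim=1)
        return self.decoder(feats)

# Toy forward pass on a random grayscale image.
model = ThreeBranchAutoencoder()
y = model(torch.rand(1, 1, 64, 64))
print(y.shape)  # torch.Size([1, 1, 64, 64])
```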
Abstract: Low-light image enhancement has been one of the most active research areas in computer vision in recent years. During low-light image enhancement, loss of image detail and an increase in noise inevitably occur, degrading the quality of the enhanced images. To alleviate this problem, a low-light image enhancement model based on Retinex theory, called RetinexNet, was proposed in this study. The model is composed of an image decomposition module and a brightness enhancement module. In the decomposition module, a convolutional block attention module (CBAM) was incorporated to enhance the feature representation capacity of the network, focusing on crucial features and suppressing irrelevant ones. A multi-feature fusion denoising module was designed within the brightness enhancement module, circumventing the loss of features during downsampling. The proposed model outperforms existing algorithms in terms of PSNR and SSIM on the publicly available LOL and MIT-Adobe FiveK datasets, and gives superior results in terms of NIQE on the publicly available LIME dataset.
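As background for the Retinex-based decomposition, the sketch below illustrates the classical single-scale Retinex idea (illumination from a Gaussian blur, reflectance as the ratio image). It is not the learned RetinexNet decomposition; the sigma and gamma values are assumptions.

```python
# Single-scale Retinex-style decomposition: illumination from a Gaussian blur,
# reflectance as the ratio image. Purely illustrative of I = R * L.
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_decompose(img, sigma=30.0, eps=1e-6):
    """img: float array in [0, 1]; returns (reflectance, illumination)."""
    illumination = gaussian_filter(img, sigma=sigma)
    reflectance = img / (illumination + eps)
    return reflectance, illumination

def enhance(img, gamma=0.5):
    r, l = retinex_decompose(img)
    return np.clip(r * np.power(l, gamma), 0.0, 1.0)  # brighten by gamma-correcting L

low_light = np.random.rand(128, 128) * 0.2   # stand-in for a dark image
print(enhance(low_light).mean() > low_light.mean())  # True: output is brighter
```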
Funding: Supported by the National Natural Science Foundation of China (62073036, 62076031) and the Beijing Natural Science Foundation (4242049).
文摘In challenging situations,such as low illumination,rain,and background clutter,the stability of the thermal infrared(TIR)spectrum can help red,green,blue(RGB)visible spectrum to improve tracking performance.However,the high-level image information and the modality-specific features have not been sufficiently studied.The proposed correlation filter uses the fused saliency content map to improve filter training and extracts different features of modalities.The fused content map is intro-duced into the spatial regularization term of correlation filter to highlight the training samples in the content region.Furthermore,the fused content map can avoid the incompleteness of the con-tent region caused by challenging situations.Additionally,differ-ent features are extracted according to the modality characteris-tics and are fused by the designed response-level fusion stra-tegy.The alternating direction method of multipliers(ADMM)algorithm is used to solve the tracker training efficiently.Experi-ments on the large-scale benchmark datasets show the effec-tiveness of the proposed tracker compared to the state-of-the-art traditional trackers and the deep learning based trackers.
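A minimal sketch of response-level fusion between the two modalities is shown below: each correlation response map is weighted by its peak-to-sidelobe ratio (PSR). The PSR-based weighting rule is an assumption for illustration, not the paper's exact strategy.

```python
# Response-level fusion sketch: combine RGB and TIR correlation response maps
# with weights derived from their peak-to-sidelobe ratios (PSR).
import numpy as np

def psr(resp, exclude=5):
    """Peak-to-sidelobe ratio of a correlation response map."""
    peak = resp.max()
    r, c = np.unravel_index(resp.argmax(), resp.shape)
    mask = np.ones_like(resp, dtype=bool)
    mask[max(0, r - exclude):r + exclude + 1, max(0, c - exclude):c + exclude + 1] = False
    side = resp[mask]
    return (peak - side.mean()) / (side.std() + 1e-6)

def fuse_responses(resp_rgb, resp_tir):
    w_rgb, w_tir = psr(resp_rgb), psr(resp_tir)
    return (w_rgb * resp_rgb + w_tir * resp_tir) / (w_rgb + w_tir + 1e-6)

rgb = np.random.rand(50, 50); tir = np.random.rand(50, 50)
fused = fuse_responses(rgb, tir)
print(np.unravel_index(fused.argmax(), fused.shape))  # fused peak location
```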
Funding: Supported by the National Natural Science Foundation of China (60802084).
Abstract: A new method for image fusion based on the Contourlet transform and cycle spinning is proposed. The Contourlet transform is a flexible multiresolution, local, and directional image expansion that also provides a sparse representation for two-dimensional piecewise-smooth signals resembling images. Because the Contourlet transform lacks translation invariance, the conventional Contourlet-based image fusion algorithm introduces many artifacts. According to the theory of cycle spinning applied to image denoising, an invariance transform can efficiently reduce these artifacts through a series of shifted processing steps. Therefore, cycle spinning is introduced to develop a translation-invariant Contourlet fusion algorithm. This method can effectively eliminate the Gibbs-like phenomenon, extract the characteristics of the original images, and preserve more important information. Experimental results show the simplicity and effectiveness of the method and its advantages over conventional approaches.
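Cycle spinning itself is transform-agnostic; the sketch below wraps a placeholder fusion step with circular shifts and averages the unshifted results. The `fuse_once` stand-in and the shift set are assumptions; a Contourlet-domain fusion would replace it.

```python
# Generic cycle-spinning wrapper: apply a shift-sensitive fusion operator on
# several circular shifts of the inputs and average the unshifted results.
import numpy as np

def fuse_once(a, b):
    return 0.5 * (a + b)  # stand-in for a transform-domain fusion step

def cycle_spin_fuse(img_a, img_b, shifts=((0, 0), (2, 0), (0, 2), (2, 2))):
    acc = np.zeros_like(img_a, dtype=float)
    for dy, dx in shifts:
        fa = np.roll(img_a, (dy, dx), axis=(0, 1))
        fb = np.roll(img_b, (dy, dx), axis=(0, 1))
        fused = fuse_once(fa, fb)
        acc += np.roll(fused, (-dy, -dx), axis=(0, 1))  # undo the shift
    return acc / len(shifts)

a = np.random.rand(64, 64); b = np.random.rand(64, 64)
print(cycle_spin_fuse(a, b).shape)
```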
Funding: Supported by the National Natural Science Foundation of China (61472324, 61671383) and the Shaanxi Key Industry Innovation Chain Project (2018ZDCXL-G-12-2, 2019ZDLGY14-02-02).
Abstract: In the last few years, guided image fusion algorithms have become increasingly popular. However, current algorithms cannot eliminate halo artifacts. We propose an image fusion algorithm based on a fast weighted guided filter. Firstly, the source images are separated into a series of high- and low-frequency components. Secondly, three visual features of the source image are extracted to construct a decision graph model. Thirdly, a fast weighted guided filter is proposed to optimize the result obtained in the previous step and to reduce the time complexity by considering the correlation among neighboring pixels. Finally, the image obtained in the previous step is combined with the weight map to realize the fusion. The proposed algorithm is applied to multi-focus, visible-infrared, and multi-modal images, and the results show that it effectively suppresses the halo artifacts of the merged images with higher efficiency and outperforms traditional methods in both subjective visual quality and objective evaluation.
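For reference, the sketch below implements the basic (unweighted) guided filter of He et al. from box filters, as typically used to refine a fusion decision map; the weighted and fast variants in the paper modify this core computation. Radius and epsilon are assumed values.

```python
# Basic guided filter built from box filters, used here to smooth a rough
# fusion decision map while respecting edges of the guidance image.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    size = 2 * radius + 1
    mean_i = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_ip = uniform_filter(guide * src, size)
    corr_ii = uniform_filter(guide * guide, size)
    var_i = corr_ii - mean_i * mean_i
    cov_ip = corr_ip - mean_i * mean_p
    a = cov_ip / (var_i + eps)
    b = mean_p - a * mean_i
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

guide = np.random.rand(100, 100)
weight_map = (np.random.rand(100, 100) > 0.5).astype(float)  # rough decision map
smooth_weights = guided_filter(guide, weight_map)            # edge-aware refinement
print(smooth_weights.min(), smooth_weights.max())
```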
Abstract: On the basis of analyzing the characteristics of low-light-level (LLL) and ultraviolet images and the information content of a dual-channel color night vision system, an LLL and ultraviolet color night vision technique is put forward. The methods of gray-scale modulation, frequency-domain fusion, and special component fusion are tried, and improved LLL and ultraviolet pseudo-color image fusion algorithms are presented. These new algorithms include piecewise gray-scale modulation, image difference extraction, component separation based on the night-skylight reflection characteristics of objects, and color space mapping that embodies the spectral response of the image sensor and natural vision. Good results are obtained.
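To make the channel-mapping idea concrete, the sketch below maps the LLL band and the LLL/UV difference into a false-color image. The specific mapping is an illustrative assumption, not the paper's calibrated scheme.

```python
# Pseudo-color fusion sketch: LLL drives brightness-like channels and the
# LLL/UV difference is emphasized in a separate channel.
import numpy as np

def pseudo_color(lll, uv):
    diff = np.clip(lll - uv, 0.0, 1.0)               # simple difference picking-up
    rgb = np.stack([lll, 0.5 * (lll + uv), diff], axis=-1)
    return np.clip(rgb, 0.0, 1.0)

lll = np.random.rand(120, 160)
uv = np.random.rand(120, 160)
print(pseudo_color(lll, uv).shape)  # (120, 160, 3)
```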
Funding: Supported partly by the National Basic Research Program of China (2005CB724303), the National Natural Science Foundation of China (60671062), and the Shanghai Leading Academic Discipline Project (B112).
Abstract: A novel feature fusion method is proposed for the edge detection of color images. In addition to the typical features used in edge detection, color contrast similarity and orientation consistency are also selected as features. The four features are combined into a single parameter to detect the edges of color images. Experimental results show that the method can suppress noisy edges and facilitate the detection of weak edges. It performs better than conventional methods in noisy environments.
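The sketch below illustrates combining features into one edge score: gradient magnitude plus a local color-contrast term, linearly weighted. The feature choices and weights are assumptions standing in for the four features used in the paper.

```python
# Edge score from fused features: gradient magnitude combined with a local
# color-contrast term; the weights are illustrative assumptions.
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def edge_map(rgb, w_grad=0.7, w_contrast=0.3):
    gray = rgb.mean(axis=-1)
    grad = np.hypot(sobel(gray, axis=0), sobel(gray, axis=1))
    # local color contrast: per-channel deviation from the neighborhood mean
    contrast = sum(np.abs(rgb[..., c] - uniform_filter(rgb[..., c], 5)) for c in range(3))
    norm = lambda x: (x - x.min()) / (np.ptp(x) + 1e-6)
    return w_grad * norm(grad) + w_contrast * norm(contrast)

img = np.random.rand(80, 80, 3)
print(edge_map(img).shape)
```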
Funding: Supported by the National Natural Science Foundation of China (61572063, 61401308), the Fundamental Research Funds for the Central Universities (2016YJS039), the Natural Science Foundation of Hebei Province (F2016201142, F2016201187), the Natural Social Foundation of Hebei Province (HB15TQ015), the Science Research Project of Hebei Province (QN2016085, ZC2016040), and the Natural Science Foundation of Hebei University (2014-303).
Abstract: Fusion methods based on multi-scale transforms have become the mainstream of pixel-level image fusion. However, most of these methods cannot fully exploit the spatial-domain information of the source images, which leads to image degradation. This paper presents a fusion framework based on block matching and a 3D (BM3D) multi-scale transform. The algorithm first divides each image into blocks and groups these 2D image blocks into 3D arrays by their similarity. It then applies a 3D transform, consisting of a 2D multi-scale transform and a 1D transform, to convert the arrays into transform coefficients, and the obtained low- and high-frequency coefficients are fused by different fusion rules. The final fused image is obtained from the series of fused 3D block groups after the inverse transform, using an aggregation process. In the experimental part, we comparatively analyze several existing algorithms and the use of different transforms, e.g., the non-subsampled Contourlet transform (NSCT) and the non-subsampled Shearlet transform (NSST), in the 3D transform step. Experimental results show that the proposed fusion framework not only improves the subjective visual effect but also achieves better objective evaluation criteria than state-of-the-art methods.
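The block-grouping step can be sketched as follows: the K patches most similar to a reference patch (by L2 distance) are stacked into a 3D group, the structure that the 2D + 1D transform then operates on. Block size, stride, and K are assumed values.

```python
# Block-matching sketch: collect the K most similar patches to a reference patch
# into a 3D group, the structure that BM3D-style fusion transforms.
import numpy as np

def group_similar_blocks(img, ref_yx, block=8, stride=4, k=8):
    ry, rx = ref_yx
    ref = img[ry:ry + block, rx:rx + block]
    candidates = []
    for y in range(0, img.shape[0] - block + 1, stride):
        for x in range(0, img.shape[1] - block + 1, stride):
            patch = img[y:y + block, x:x + block]
            candidates.append((np.sum((patch - ref) ** 2), patch))
    candidates.sort(key=lambda t: t[0])
    return np.stack([p for _, p in candidates[:k]], axis=0)  # shape (k, block, block)

img = np.random.rand(64, 64)
print(group_similar_blocks(img, (16, 16)).shape)  # (8, 8, 8)
```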
Abstract: A multisensor image fusion algorithm based on the two-dimensional nonseparable wavelet frame (NWF) transform is described. The source multisensor images are first decomposed by the NWF transform. Then, the NWF transform coefficients of the source images are combined into composite NWF transform coefficients. The inverse NWF transform is performed on the composite coefficients to obtain an intermediate fused image. Finally, intensity adjustment is applied to the intermediate fused image to maintain the dynamic intensity range. Experimental results using real data show that the proposed algorithm works well in multisensor image fusion.
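The coefficient-combination step is illustrated below with a separable discrete wavelet transform (PyWavelets) as a stand-in for the nonseparable wavelet frame: approximation bands are averaged and detail coefficients are fused by maximum magnitude. Wavelet name and level are assumptions.

```python
# Wavelet-domain fusion sketch: average the approximation bands, take the
# max-magnitude detail coefficients, then reconstruct.
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="db2", level=3):
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [0.5 * (ca[0] + cb[0])]                       # approximation: average
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append((pick(ha, hb), pick(va, vb), pick(da, db)))  # details: max-abs
    return pywt.waverec2(fused, wavelet)

a = np.random.rand(128, 128); b = np.random.rand(128, 128)
print(dwt_fuse(a, b).shape)
```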
Funding: This project was supported by the National "863" High Technology Research and Development Program of China (2001AA135091), the National Science Foundation of China, the Shanghai Key Scientific Project (02DZ15001), the China Ph.D. Discipline Special Foundation (20020248029), and the China Aviation Science Foundation (02D57003).
Abstract: Image fusion should consider prior knowledge of the source images to be fused, such as the characteristics of the images and the goal of the fusion; that is to say, knowledge about the input data and the application plays a crucial role. This paper is concerned with multiresolution (MR) image fusion. Considering the characteristics of the sensors (SAR, FLIR, etc.) and the goal of fusion, which is to obtain one image possessing both the contour features and the target region features, it seems more meaningful to combine features rather than pixels. A multisensor image fusion scheme based on K-means clustering and the steerable pyramid is presented. K-means clustering is used to segment out objects in FLIR images. The steerable pyramid is a multiresolution analysis method that is well suited to extracting contour information at different scales. Comparisons are made with relevant existing techniques in the literature. The paper concludes with some examples that illustrate the efficiency of the proposed scheme.
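The segmentation step can be sketched with a tiny one-dimensional k-means over FLIR pixel intensities, taking the brightest cluster as the target region. The two-cluster setting and the threshold-style use of the result are illustrative assumptions.

```python
# Tiny 1D k-means on FLIR pixel intensities to segment out hot target regions.
import numpy as np

def kmeans_1d(values, k=2, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

flir = np.random.rand(64, 64)
labels, centers = kmeans_1d(flir.ravel())
target_mask = (labels == np.argmax(centers)).reshape(flir.shape)  # brightest cluster
print(target_mask.mean())  # fraction of pixels assigned to the "target" cluster
```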
Funding: Supported by the National Natural Science Foundation of China (61304097), the Projects of Major International (Regional) Joint Research Program of the NSFC (61120106010), and the Foundation for Innovation Research Groups of the National Natural Science Foundation of China (61321002).
Abstract: A hierarchical particle filter (HPF) framework based on multi-feature fusion is proposed. The proposed HPF effectively uses different feature information to avoid the tracking failures that arise from relying on a single feature in a complicated environment. In this approach, the Harris algorithm is introduced to detect the corner points of the object, and a corner matching algorithm based on singular value decomposition is used to compute the first-order weights and concentrate the particles in the high-likelihood area. Then the local binary pattern (LBP) operator is used to build the observation model of the target based on color and texture features, from which the second-order weights of the particles and the accurate location of the target are obtained. Moreover, a backstepping controller is proposed to complete the whole tracking system. Simulations and experiments are carried out, and the results show that the HPF algorithm with the backstepping controller achieves stable and accurate tracking with good robustness in complex environments.
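The two-stage weighting can be sketched as follows: coarse first-order weights from a corner-match score concentrate the particles, then second-order weights from an appearance likelihood refine the estimate. Both score functions below are random stand-ins for the corner-matching and color/LBP models.

```python
# Hierarchical particle weighting sketch with stand-in score functions.
import numpy as np

rng = np.random.default_rng(1)
particles = rng.uniform(0, 100, size=(200, 2))          # candidate target positions

corner_score = lambda p: np.exp(-np.sum((p - 50.0) ** 2, axis=1) / 500.0)      # stage 1
appearance_score = lambda p: np.exp(-np.sum((p - 48.0) ** 2, axis=1) / 200.0)  # stage 2

w1 = corner_score(particles); w1 /= w1.sum()
# Resample around high first-order-weight particles (concentrate in the likely area).
idx = rng.choice(len(particles), size=len(particles), p=w1)
particles = particles[idx] + rng.normal(0, 1.0, size=particles.shape)

w2 = appearance_score(particles); w2 /= w2.sum()
estimate = (particles * w2[:, None]).sum(axis=0)
print(estimate)  # weighted mean position after both stages
```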
Abstract: The speed and quality of image fusion always constrain each other, and real-time image fusion is one of the problems that urgently needs to be studied and solved. The windowing processing technology for image fusion proposed in this paper can solve this problem to a certain extent. The windowing rules are put forward, and the applicable scope of windowing fusion and the method for calculating the maximum windowing area are determined. The results of windowing fusion are then analyzed, verified, and compared to confirm the feasibility of this technology.
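The basic trade-off can be illustrated as below: only a window of the frame is fused each cycle while the rest is copied from one source, so the fused area (and thus quality) is traded for speed. The average-fusion rule inside the window is an assumption.

```python
# Windowed fusion sketch: fuse only a region of interest, pass the rest through.
import numpy as np

def windowed_fuse(img_a, img_b, window):
    """window = (y0, y1, x0, x1); average-fuse inside it, copy img_a elsewhere."""
    y0, y1, x0, x1 = window
    out = img_a.copy()
    out[y0:y1, x0:x1] = 0.5 * (img_a[y0:y1, x0:x1] + img_b[y0:y1, x0:x1])
    return out

a = np.random.rand(240, 320); b = np.random.rand(240, 320)
print(windowed_fuse(a, b, (80, 160, 100, 220)).shape)
```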
Funding: Supported by the National Natural Science Foundation of China (61671383) and the Shaanxi Key Industry Innovation Chain Project (2018ZDCXL-G-12-2, 2019ZDLGY14-02-02, 2019ZDLGY14-02-03).
Abstract: Image fusion based on sparse representation (SR) has become the primary research direction of transform-domain methods. However, SR-based image fusion algorithms suffer from high computational complexity and neglect the local features of an image, resulting in limited detail retention and high sensitivity to registration misalignment. To overcome these shortcomings and the noise introduced during fusion, this paper proposes a new signal decomposition model, a multi-source image fusion algorithm based on gradient-regularized convolutional sparse representation (CSR). The main innovation of this work is to use a sparse optimization function to perform a two-scale decomposition of each source image to obtain its high-frequency and low-frequency components. The sparse coefficients are obtained by the gradient-regularized CSR model, and the maximum of the sparse coefficients is taken to obtain the optimal high-frequency component of the fused image. The best low-frequency component is obtained by using a fusion strategy based on the extreme or the average value. The final fused image is obtained by adding the two optimal components. Experimental results demonstrate that this method greatly improves the ability to retain image details and reduces sensitivity to image registration.
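A minimal sketch of the two-scale decomposition and the high/low-frequency fusion rules is given below; a Gaussian filter stands in for the sparse-optimization decomposition, and the gradient-regularized CSR coding of the high-frequency layer is not reproduced.

```python
# Two-scale fusion rules sketch: highs fused by max-absolute-value, lows by averaging.
import numpy as np
from scipy.ndimage import gaussian_filter

def two_scale_fuse(img_a, img_b, sigma=5.0):
    low_a, low_b = gaussian_filter(img_a, sigma), gaussian_filter(img_b, sigma)
    high_a, high_b = img_a - low_a, img_b - low_b
    fused_low = 0.5 * (low_a + low_b)                                        # average rule
    fused_high = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)  # max rule
    return fused_low + fused_high

a = np.random.rand(128, 128); b = np.random.rand(128, 128)
print(two_scale_fuse(a, b).shape)
```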
Abstract: In this paper, based on a bidirectional parallel multi-branch feature pyramid network (BPMFPN), a novel one-stage object detector called BPMFPN Det is proposed for the real-time detection of ground multi-scale targets by swarm unmanned aerial vehicles (UAVs). First, bidirectional parallel multi-branch convolution modules are used to construct the feature pyramid, enhancing the feature expression abilities of feature layers at different scales. Next, the feature pyramid is integrated into the single-stage object detection framework to ensure real-time performance. To validate the effectiveness of the proposed algorithm, experiments are conducted on four datasets. On the PASCAL VOC dataset, the proposed algorithm achieves a mean average precision (mAP) of 85.4 on the VOC 2007 test set. On the detection in optical remote sensing (DIOR) dataset, the proposed algorithm achieves 73.9 mAP. On the vehicle detection in aerial imagery (VEDAI) dataset, the detection accuracy for small land vehicle (slv) targets reaches 97.4 mAP. On the unmanned aerial vehicle detection and tracking (UAVDT) dataset, the proposed BPMFPN Det achieves an mAP of 48.75. Compared with previous state-of-the-art methods, the results obtained by the proposed algorithm are more competitive. The experimental results demonstrate that the proposed algorithm can effectively solve the problem of real-time detection of ground multi-scale targets in aerial images from swarm UAVs.
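The bidirectional cross-scale fusion idea can be sketched as a top-down pass that upsamples and adds coarser features followed by a bottom-up pass that downsamples and adds finer ones. Channel counts and plain-addition fusion are illustrative assumptions, not the BPMFPN modules themselves.

```python
# Bidirectional pyramid fusion sketch over three feature levels.
import torch
import torch.nn.functional as F

def bidirectional_fuse(feats):
    """feats: list of tensors [P3, P4, P5], fine to coarse, same channel count."""
    # Top-down: propagate coarse semantics to finer levels.
    td = list(feats)
    for i in range(len(td) - 2, -1, -1):
        td[i] = td[i] + F.interpolate(td[i + 1], size=td[i].shape[-2:], mode="nearest")
    # Bottom-up: propagate fine details back to coarser levels.
    out = list(td)
    for i in range(1, len(out)):
        out[i] = out[i] + F.max_pool2d(out[i - 1], kernel_size=2)
    return out

p3 = torch.rand(1, 64, 32, 32); p4 = torch.rand(1, 64, 16, 16); p5 = torch.rand(1, 64, 8, 8)
for f in bidirectional_fuse([p3, p4, p5]):
    print(f.shape)
```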