
Dual-scale decomposition and saliency analysis based infrared and visible image fusion (cited by 17)
Abstract

Objective: Image fusion technology is of great significance for image recognition and comprehension. Infrared and visible image fusion is widely applied in computer vision, target detection, video surveillance, military applications, and many other areas. Existing fusion methods still suffer from weakened targets, unclear background details, blurred edges, and low efficiency caused by high algorithmic complexity. Compared with most multi-scale methods, which require more than two decomposition levels, dual-scale methods reduce complexity and can already obtain satisfactory results at the first decomposition level by exploiting the large difference in information between the two scales. However, insufficient extraction of salient features and neglect of noise may still lead to poor fusion results. To make full use of the useful features of the source images, this paper combines dual-scale decomposition with visual-saliency-based fusion weights and proposes a dual-scale image fusion method based on saliency analysis and spatial consistency for high-quality fusion of infrared and visible images.

Method: Visual saliency is used to integrate the important and valuable information of the source images into the fused image, and spatial consistency is fully considered to suppress the influence of noise on the fusion result. First, a mean filter separates the low-frequency and high-frequency information of each source image: the base image containing the low-frequency information is the filter output, and the detail image containing the high-frequency information is obtained by subtracting the base image from the source image. Next, because the human visual system is differently sensitive to base and detail information, a simple weighted-average rule (the arithmetic mean) fuses the base images; this preserves the common features of the source images and reduces redundant information in the fused base image. For the detail images, fusion weights based on visual saliency guide the weighting: the saliency information is extracted from the difference between the mean-filter and median-filter outputs, and the saliency map of each source image is obtained by Gaussian filtering of this difference. The initial weight map is then constructed from the visual saliency. Furthermore, following the principle of spatial consistency, the initial weight map is optimized by guided filtering to reduce noise and keep the weight boundaries aligned with object boundaries, and the detail images are fused under the guidance of the resulting final weight map. In this way the target, background details, and edge information are enhanced while noise is suppressed. Finally, dual-scale reconstruction of the fused base and detail images yields the final fused image.

Result: Given the different characteristics of traditional and deep-learning methods, two groups of gray images from TNO and other public datasets were selected for comparison experiments, and the proposed method was evaluated both subjectively and objectively against other methods on the MATLAB R2018a platform. In the results, key prominent areas are marked with white boxes to support the subjective analysis of the differences among the fused images. Subjectively, the proposed method effectively extracts and fuses the important information of the source images and produces fused images of high quality with clear, natural visual effects. Objectively, on the first group of experimental images the method achieves the best average precision in average gradient, edge intensity, spatial frequency, feature mutual information, and cross-entropy: 3.9907, 41.7937, 10.5366, 0.4460, and 1.4897, respectively. On the second group, compared with a deep-learning method, the proposed method shows clear advantages: the mean values of entropy, average gradient, edge intensity, spatial frequency, feature mutual information, and cross-entropy improve by 6.87%, 91.28%, 91.45%, 85.10%, 0.18%, and 45.45%, respectively.

Conclusion: Because of the complexity of salient-feature extraction and the uncertainty of noise in the fusion process, extensive experiments show that some existing fusion methods are inevitably limited and cannot meet high-quality image-processing requirements. By contrast, the proposed method, which combines dual-scale decomposition with visual-saliency-based fusion weights, achieves good results: the enhancement of target, background-detail, and edge information is particularly significant, and the method is robust to noise. High-quality fusion of multiple groups of images is achieved quickly and effectively, opening the possibility of real-time infrared and visible image fusion. The method also compares favorably with a fusion method based on a deep-learning framework, and it is general enough to be extended to the fusion of other multi-source and multi-modal images.
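The pipeline described in the abstract can be sketched end to end in Python with NumPy/SciPy. This is a minimal illustration rather than the authors' implementation: the window sizes, the binary arg-max rule for building the initial weight maps, and the choice of each source image as its own guide in the guided filter are assumptions made here for concreteness.

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter, gaussian_filter

def guided_filter(I, p, r, eps):
    """Edge-preserving smoothing of weight map p, guided by image I."""
    mean_I = uniform_filter(I, r)
    mean_p = uniform_filter(p, r)
    cov_Ip = uniform_filter(I * p, r) - mean_I * mean_p
    var_I = uniform_filter(I * I, r) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)            # per-window linear coefficients
    b = mean_p - a * mean_I
    return uniform_filter(a, r) * I + uniform_filter(b, r)

def fuse(ir, vis, base_win=31, sal_win=35, sigma=5.0, r=7, eps=0.3):
    ir, vis = ir.astype(float), vis.astype(float)

    # 1) Dual-scale decomposition with a mean filter:
    #    base = mean-filter output, detail = source - base.
    base_ir, base_vis = uniform_filter(ir, base_win), uniform_filter(vis, base_win)
    det_ir, det_vis = ir - base_ir, vis - base_vis

    # 2) Saliency: Gaussian-smoothed difference between the
    #    mean-filter and median-filter outputs.
    def saliency(img):
        diff = np.abs(uniform_filter(img, sal_win) - median_filter(img, size=sal_win))
        return gaussian_filter(diff, sigma)
    s_ir, s_vis = saliency(ir), saliency(vis)

    # 3) Initial binary weight maps (assumed arg-max rule):
    #    1 where that source is the more salient of the two.
    p_ir = (s_ir >= s_vis).astype(float)
    p_vis = 1.0 - p_ir

    # 4) Spatial consistency: refine the initial weights with a guided
    #    filter, then renormalize so the weights sum to one per pixel.
    w_ir = guided_filter(ir, p_ir, r, eps)
    w_vis = guided_filter(vis, p_vis, r, eps)
    total = w_ir + w_vis + 1e-12
    w_ir, w_vis = w_ir / total, w_vis / total

    # 5) Arithmetic mean for the base layer, weighted sum for the
    #    detail layer, then dual-scale reconstruction.
    fused_base = 0.5 * (base_ir + base_vis)
    fused_detail = w_ir * det_ir + w_vis * det_vis
    return fused_base + fused_detail
```

The guided-filter step is what enforces spatial consistency: the refined weights follow the intensity edges of the guide image, so the detail-layer blending switches between sources at object boundaries rather than cutting through them.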
Authors: Huo Xing, Zou Yun, Chen Ying, Tan Jieqing (School of Mathematics, Hefei University of Technology, Hefei 230009, China)

Source: Journal of Image and Graphics (《中国图象图形学报》), CSCD / Peking University Core Journal, 2021, No. 12: 2813-2825 (13 pages)

Funding: National Natural Science Foundation of China (61872407); Ministry of Science and Technology international cooperation project (2014DFE10220)

Keywords: infrared image; visible image; saliency analysis; spatial consistency; dual-scale decomposition; image fusion

About the authors: Huo Xing (b. 1979), female, professor; research interests: image processing and computer graphics (huoxing@hfut.edu.cn). Corresponding author: Tan Jieqing, male, professor; research interests: applied numerical approximation, computer-aided geometric design and graphics, digital image processing (jieqingtan@hfut.edu.cn). Zou Yun, female, master's student, image processing (2783215249@qq.com). Chen Ying, female, master's student, image processing (cccying520@163.com).
