As an important multi-scale geometric analysis tool of the post-wavelet era, the Shearlet transform offers good anisotropy and directional sensitivity, and it provides a near-optimal sparse representation of multidimensional signals such as images. The non-subsampled Shearlet transform (NSST) retains the properties of the Shearlet transform while also being shift-invariant, which plays an important role in processing images rich in texture and detail. This paper first analyzes the probability density distribution of the coefficients within the directional subbands of an image's NSST, establishing the sparse statistical behavior of the coefficients and the effectiveness of the Cauchy distribution in fitting the intra-subband coefficients. It then analyzes the joint probability distribution of coefficients across NSST directional subbands, identifying the persistence and inheritance properties between subband coefficients and establishing a tree-structured correspondence of coefficients across NSST subbands. On this basis, a hidden Markov tree model in the NSST domain (C-NSST-HMT) is proposed; the model fits the NSST coefficients with a Cauchy distribution and better captures the correlations of coefficients within subbands of the same scale and across subbands of different scales after the NSST. An image denoising algorithm based on the proposed C-NSST-HMT model is further developed. For noisy images with noise variances of 30 and 40, its denoised PSNR (Peak Signal to Noise Ratio) is 1.995 dB and 1.193 dB higher, respectively, than that of the NSCT-HMT method. In particular, for images with rich texture and detail, the algorithm effectively preserves the geometric information of the image while denoising.
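A minimal sketch of the intra-subband statistics step described above: fit a Cauchy distribution to the coefficients of one directional subband and compare its average log-likelihood with a Gaussian fit. Since no NSST decomposition library is assumed here, a synthetic heavy-tailed array stands in for a real NSST subband; the variable names and the Student-t stand-in are illustrative, not the authors' data.

```python
# Sketch: does a Cauchy model fit heavy-tailed subband coefficients better
# than a Gaussian? The "subband" below is synthetic heavy-tailed data used
# as a stand-in for a real NSST directional subband (an assumption).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in for one high-frequency directional subband (zero-mean, heavy-tailed).
coeffs = stats.t.rvs(df=2.5, scale=5.0, size=50_000, random_state=rng)

# Fit a Cauchy model (location, scale) and a Gaussian model (mean, std).
cauchy_loc, cauchy_scale = stats.cauchy.fit(coeffs)
gauss_mu, gauss_sigma = stats.norm.fit(coeffs)

# Compare average log-likelihoods: the heavier-tailed model should win on such data.
ll_cauchy = stats.cauchy.logpdf(coeffs, cauchy_loc, cauchy_scale).mean()
ll_gauss = stats.norm.logpdf(coeffs, gauss_mu, gauss_sigma).mean()

print(f"Cauchy fit:   loc={cauchy_loc:.3f}, scale={cauchy_scale:.3f}, avg log-lik={ll_cauchy:.3f}")
print(f"Gaussian fit: mu={gauss_mu:.3f}, sigma={gauss_sigma:.3f}, avg log-lik={ll_gauss:.3f}")
```

On real NSST coefficients the same comparison would be run per subband; the abstract reports that the Cauchy fit is the more effective one, which motivates its use inside the C-NSST-HMT model.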
Traditional image denoising methods struggle to preserve detail features while removing speckle noise from sonar images. To address this problem, a denoising method in the non-subsampled Shearlet domain based on density clustering and grayscale transformation is proposed. The noisy image is decomposed into high-frequency and low-frequency coefficients with the non-subsampled Shearlet transform. Based on the distribution characteristics of speckle noise in sonar images, the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm is applied to the high-frequency coefficients to separate the noise while retaining detail information; a grayscale transformation is applied to the low-frequency coefficients to enhance image contrast. The processed high-frequency and low-frequency coefficients are then reconstructed with the inverse non-subsampled Shearlet transform to complete the denoising. Experimental results show that the proposed method clearly improves metrics such as mean squared error, peak signal-to-noise ratio, and structural similarity, and that the visual quality and edge preservation of the denoised images are substantially improved. As the noise variance increases, the advantage of the method becomes more pronounced, making it suitable for denoising sonar images with high-density noise.
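A minimal sketch of the high-frequency step described above, under simplifying assumptions: the significant coefficients of one subband are selected by a robust threshold, their spatial positions are clustered with DBSCAN, coefficients whose positions form dense clusters (edges and structures) are kept, and isolated ones are zeroed as speckle. The threshold rule, eps and min_samples values are illustrative choices, and a synthetic array stands in for a real NSST high-frequency subband.

```python
# Sketch: separate structured detail from scattered speckle in one
# high-frequency subband using DBSCAN on coefficient positions.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
subband = 0.2 * rng.standard_normal((128, 128))   # stand-in for an NSST high-frequency subband
subband[64, 20:110] += 3.0                         # a horizontal edge: contiguous large coefficients
spikes = rng.choice(subband.size, size=80, replace=False)
subband.ravel()[spikes] += rng.choice([-3.0, 3.0], size=80)  # scattered speckle spikes

# Significant coefficients and their (row, col) positions.
thresh = 3 * np.median(np.abs(subband)) / 0.6745   # MAD-based robust noise-scale estimate
rows, cols = np.nonzero(np.abs(subband) > thresh)
positions = np.stack([rows, cols], axis=1).astype(float)

# Spatially dense groups of significant coefficients are kept as detail;
# DBSCAN labels isolated positions with -1, and those are zeroed as speckle.
labels = DBSCAN(eps=2.0, min_samples=4).fit_predict(positions)
denoised = subband.copy()
denoised[rows[labels == -1], cols[labels == -1]] = 0.0
print(f"significant coefficients: {len(rows)}, removed as speckle: {int(np.sum(labels == -1))}")
```

In the full method this processing is applied to every high-frequency subband, the low-frequency coefficients receive a grayscale (contrast) transformation, and the inverse NSST reconstructs the denoised image.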
Fusion methods based on multi-scale transforms have become the mainstream of pixel-level image fusion. However, most of these methods cannot fully exploit the spatial-domain information of the source images, which leads to degradation of the fused image. This paper presents a fusion framework based on block-matching and 3D (BM3D) multi-scale transform. The algorithm first divides each image into blocks and groups these 2D image blocks into 3D arrays by their similarity. A 3D transform consisting of a 2D multi-scale transform and a 1D transform then maps the arrays to transform coefficients, and the resulting low- and high-frequency coefficients are fused with different fusion rules. The final fused image is obtained from the fused 3D block groups after the inverse transform through an aggregation process. In the experimental part, we comparatively analyze existing algorithms and the use of different transforms, e.g. the non-subsampled Contourlet transform (NSCT) and the non-subsampled Shearlet transform (NSST), in the 3D transform step. Experimental results show that the proposed fusion framework not only improves the subjective visual effect but also achieves better objective evaluation scores than state-of-the-art methods. A simplified sketch of this grouping-transform-fusion pipeline is given after the funding note below.
Supported by the National Natural Science Foundation of China (61572063, 61401308), the Fundamental Research Funds for the Central Universities (2016YJS039), the Natural Science Foundation of Hebei Province (F2016201142, F2016201187), the Natural Social Foundation of Hebei Province (HB15TQ015), the Science Research Project of Hebei Province (QN2016085, ZC2016040), and the Natural Science Foundation of Hebei University (2014-303).
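A minimal sketch of the fusion framework, with stated simplifications: the 2D multi-scale transform (NSCT/NSST in the paper) is replaced by a 2D DCT so the example stays self-contained, block matching on one source defines the group geometry for both, and a single max-absolute rule is used for all coefficients instead of separate low/high-frequency rules. Function names and parameters are illustrative, not the authors' implementation.

```python
# Sketch: block matching -> 3D transform -> coefficient fusion -> inverse
# transform -> aggregation, for two registered grayscale images.
import numpy as np
from scipy.fft import dctn, idctn

BLOCK, STEP, WINDOW, GROUP = 8, 4, 16, 8   # block size, stride, search radius, group size

def group_similar_blocks(img, r, c):
    """Collect the GROUP blocks inside a search window most similar to the block at (r, c)."""
    ref = img[r:r + BLOCK, c:c + BLOCK]
    candidates = []
    for i in range(max(0, r - WINDOW), min(img.shape[0] - BLOCK, r + WINDOW) + 1, STEP):
        for j in range(max(0, c - WINDOW), min(img.shape[1] - BLOCK, c + WINDOW) + 1, STEP):
            blk = img[i:i + BLOCK, j:j + BLOCK]
            candidates.append((np.sum((blk - ref) ** 2), i, j))
    candidates.sort(key=lambda t: t[0])
    coords = [(i, j) for _, i, j in candidates[:GROUP]]
    return np.stack([img[i:i + BLOCK, j:j + BLOCK] for i, j in coords]), coords

def fuse(img_a, img_b):
    """Fuse two registered grayscale images of equal size."""
    acc = np.zeros_like(img_a, dtype=float)
    weight = np.zeros_like(img_a, dtype=float)
    for r in range(0, img_a.shape[0] - BLOCK + 1, STEP):
        for c in range(0, img_a.shape[1] - BLOCK + 1, STEP):
            # Block matching on source A defines the 3D group for both sources.
            stack_a, coords = group_similar_blocks(img_a, r, c)
            stack_b = np.stack([img_b[i:i + BLOCK, j:j + BLOCK] for i, j in coords])
            # 3D transform: 2D DCT on each block, then 1D DCT across the group.
            coef_a = dctn(dctn(stack_a, axes=(1, 2), norm="ortho"), axes=(0,), norm="ortho")
            coef_b = dctn(dctn(stack_b, axes=(1, 2), norm="ortho"), axes=(0,), norm="ortho")
            # Fusion rule (simplified): keep the coefficient with the larger magnitude.
            fused = np.where(np.abs(coef_a) >= np.abs(coef_b), coef_a, coef_b)
            # Inverse 3D transform, then aggregate the fused blocks back into the image.
            blocks = idctn(idctn(fused, axes=(0,), norm="ortho"), axes=(1, 2), norm="ortho")
            for blk, (i, j) in zip(blocks, coords):
                acc[i:i + BLOCK, j:j + BLOCK] += blk
                weight[i:i + BLOCK, j:j + BLOCK] += 1.0
    return acc / np.maximum(weight, 1e-8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.random((64, 64)), rng.random((64, 64))
    print("fused image shape:", fuse(a, b).shape)
```

In the paper's framework the 2D DCT step would be replaced by a multi-scale transform such as NSCT or NSST, and the low- and high-frequency coefficients would be fused with different rules before inversion and aggregation.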