In this paper, a novel coding method based on fuzzy vector quantization for images corrupted by Gaussian white noise is presented. By suppressing the high-frequency subbands of the wavelet-transformed image, the noise is largely removed, and the image is then coded with fuzzy vector quantization. Experimental results show that the method not only achieves a high compression ratio but also removes noise effectively.
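The subband-suppression step can be sketched roughly as follows, assuming PyWavelets (pywt) is available; the wavelet choice and shrinkage factor below are illustrative placeholders, and the fuzzy VQ coding stage itself is omitted.

```python
import pywt  # PyWavelets, assumed available

def suppress_high_freq(image, wavelet="haar", level=2, keep=0.2):
    """Attenuate the high-frequency wavelet subbands, where most of the
    Gaussian white-noise energy lies, before the VQ coding stage."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    # Scale every detail subband (horizontal, vertical, diagonal) by `keep`;
    # keep=0 would discard the high-frequency content entirely.
    shrunk = [tuple(keep * band for band in trio) for trio in details]
    return pywt.waverec2([approx] + shrunk, wavelet)
```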
A fast encoding algorithm based on the mean square error (MSE) distortion for vector quantization is introduced. The vectors, which are constructed from wavelet transform (WT) coefficients of the image, simplify the realization of the non-linear interpolated vector quantization (NLIVQ) technique and make the partial distance search (PDS) algorithm more efficient. Using the relationship between a vector's L2-norm and its Euclidean distance, conditions for eliminating unnecessary codewords are obtained; a further inequality built from subvector L2-norms eliminates still more. During the codeword search, most unlikely codewords can be rejected by the proposed algorithm in combination with the NLIVQ and PDS techniques. Experimental results show a substantial reduction in encoding time and computational complexity compared with the full search method.
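The elimination idea can be illustrated with a minimal NumPy sketch. It uses the generic lower bound ||x - c||^2 >= (||x|| - ||c||)^2 for the norm test and a plain partial distance search; the paper's exact inequalities, including the subvector-norm test and the NLIVQ coupling, are not reproduced here.

```python
import numpy as np

def pds_encode(x, codebook, cb_norms):
    """Find the nearest codeword to x using norm-based rejection plus
    partial distance search (PDS)."""
    best_idx, best_dist = -1, np.inf
    x_norm = np.linalg.norm(x)
    for i, c in enumerate(codebook):
        # Norm test: ||x - c||^2 >= (||x|| - ||c||)^2, so if this lower
        # bound already exceeds the current best distance, skip codeword c.
        if (x_norm - cb_norms[i]) ** 2 >= best_dist:
            continue
        # PDS: accumulate the squared error dimension by dimension and abandon
        # the codeword as soon as the running sum exceeds the current best.
        d = 0.0
        for xj, cj in zip(x, c):
            d += (xj - cj) ** 2
            if d >= best_dist:
                break
        else:
            best_idx, best_dist = i, d
    return best_idx, best_dist
```

The codeword norms cb_norms are precomputed once per codebook (e.g. np.linalg.norm(codebook, axis=1)), so the norm test costs only one subtraction and comparison per candidate.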
This paper presents a new method for image coding and compression, ADCTVQ (Adaptive Discrete Cosine Transform Vector Quantization). In this method, the DCT conforms to visual properties and has a coding efficiency second only to the optimal transform, the KLT. Its vector quantization keeps quantization distortion low while greatly increasing the compression ratio. To improve compression efficiency, an adaptive strategy for selecting reserved-region patterns is applied to preserve the high-energy coefficients at the same compression ratio. Experimental results show that reconstructed images remain satisfactory at compression ratios greater than 20.
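A minimal sketch of the reserved-region (zonal) idea is given below, assuming SciPy's DCT routines are available; the block size, the fixed triangular zone, and the absence of the adaptive pattern selection are simplifications of the paper's scheme.

```python
import numpy as np
from scipy.fft import dctn  # SciPy's n-dimensional DCT, assumed available

def block_dct_vectors(image, block=8):
    """Split the image into blocks, take the 2-D DCT of each block, and keep
    a low-frequency triangular zone (the reserved region) as the vector
    that would be passed to the VQ codebook search."""
    h, w = image.shape
    # Triangular zone u + v <= 3 keeps the 10 lowest-frequency coefficients.
    zone = [(u, v) for u in range(block) for v in range(block) if u + v <= 3]
    vectors = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            coeffs = dctn(image[r:r + block, c:c + block], norm="ortho")
            vectors.append([coeffs[u, v] for (u, v) in zone])
    return np.asarray(vectors)
```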
With the development of artificial intelligence, deep neural networks have become indispensable tools in many pattern recognition tasks. Because deep convolutional neural networks (CNNs) have huge numbers of parameters and high computational complexity, deploying them on edge devices with limited computing resources and storage is challenging, and deep network compression has therefore become a research hotspot in recent years. Low-rank decomposition and vector quantization are two important branches of deep network compression; the core idea of both is to find a compact representation of the original network structure and thereby reduce the redundancy of the network parameters. By building a joint compression framework, a deep network compression method based on low-rank decomposition and vector quantization, Quantizable Tensor Decomposition (QTD), is proposed. The method applies further quantization on top of the network's low-rank structure, yielding a larger compression ratio. Experiments with classic ResNet models on the CIFAR-10 dataset show that QTD compresses the network parameters to 1% of the original size with an accuracy loss of only 1.71 percentage points. Comparisons on the large-scale ImageNet dataset against the quantization-based method PQF (Permute, Quantize, and Fine-tune), the low-rank decomposition method TDNR (Tucker Decomposition with Nonlinear Response), and the pruning-based method CLIP-Q (Compression Learning by In-parallel Pruning-Quantization) show that QTD achieves better classification accuracy under the same compression range.
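As a rough illustration of combining low-rank decomposition with vector quantization, the sketch below truncates a weight matrix with an SVD and then vector-quantizes one factor with a k-means codebook (SciPy's kmeans2, assumed available). It is a conceptual stand-in, not the QTD construction or its joint training; the rank, codebook size, and sub-vector dimension are arbitrary.

```python
import numpy as np
from scipy.cluster.vq import kmeans2  # assumed available

def lowrank_then_quantize(weight, rank=8, codewords=64, subdim=4):
    """Compress a 2-D weight matrix: truncate it to a low-rank factorization,
    then vector-quantize one factor with a learned codebook."""
    # Low-rank step: W ~= U @ V with U of shape (m, rank), V of shape (rank, n).
    u, s, vt = np.linalg.svd(weight, full_matrices=False)
    U = u[:, :rank] * s[:rank]
    V = vt[:rank, :]
    # Quantization step: split U into sub-vectors and learn a small codebook,
    # so only codeword indices and the codebook need to be stored for U.
    subvecs = U.reshape(-1, subdim)
    codebook, labels = kmeans2(subvecs, codewords, minit="++", seed=0)
    U_q = codebook[labels].reshape(U.shape)
    return U_q @ V  # lossy reconstruction of the original weight
```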
To address two drawbacks of the conventional LBG algorithm for vector quantization codebook design, namely its sensitivity to the initial codebook and its tendency to fall into local minima during iteration, an improved LBG algorithm based on simulated annealing is proposed. Details of the annealing process are given, including the characterization of the perturbation factor, the choice of perturbation strategy, the determination of the stability criterion, and the temperature-decrease schedule. Simulation results show that the improved algorithm effectively avoids the sensitivity to the initial codebook and improves both the search performance and the quality of the reconstructed images after compression.
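The combination can be sketched as an LBG centroid update wrapped in a Metropolis-style accept/reject on a perturbed codebook with a decreasing temperature. The perturbation, stability criterion, and cooling schedule below are simplified placeholders rather than the ones detailed in the paper.

```python
import numpy as np

def distortion(data, codebook):
    """Mean squared distance to the nearest codeword, plus the assignments."""
    d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).mean(), d.argmin(axis=1)

def sa_lbg(data, k=16, t0=1.0, cooling=0.9, steps=50, seed=0):
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), k, replace=False)].copy()
    best, _ = distortion(data, codebook)
    t = t0
    for _ in range(steps):
        # Perturb the codebook, then apply one LBG (centroid) update.
        trial = codebook + rng.normal(scale=t, size=codebook.shape)
        _, labels = distortion(data, trial)
        for j in range(k):
            if np.any(labels == j):
                trial[j] = data[labels == j].mean(axis=0)
        cand, _ = distortion(data, trial)
        # Metropolis criterion: accept improvements, occasionally accept worse ones.
        if cand < best or rng.random() < np.exp((best - cand) / max(t, 1e-12)):
            codebook, best = trial, cand
        t *= cooling  # temperature-decrease schedule
    return codebook
```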
Funding: the National Natural Science Foundation of China (60602057) and the Natural Science Foundation of Chongqing Science and Technology Commission (2006BB2373).