Funding: This project was supported by the National Natural Science Foundation of China (60532060), the Hainan Education Bureau Research Project (Hjkj200602), and the Hainan Natural Science Foundation (80551).
Abstract: A nonlinear data analysis algorithm, empirical data decomposition (EDD), is proposed, which can perform adaptive analysis of observed data. The analysis filter is not a linear constant-coefficient filter; it is determined automatically from the observed data and can implement multi-resolution analysis in the same way as the wavelet transform. The algorithm is suitable for analyzing non-stationary data and can effectively remove the correlation in observed data. The applications of EDD to image compression are then discussed: the paper presents a two-dimensional data decomposition framework and modifies the contexts used by Embedded Block Coding with Optimized Truncation (EBCOT). Simulation results show that EDD is better suited to compressing non-stationary image data.
Funding: Project (61172184) supported by the National Natural Science Foundation of China; Project (200902482) supported by the China Postdoctoral Science Foundation (specially funded project); Project (12JJ6062) supported by the Natural Science Foundation of Hunan Province, China.
Abstract: A blind digital image forensic method for detecting copy-paste forgery between JPEG images is proposed. Two copy-paste tampering scenarios are introduced first: the tampered image is saved either in an uncompressed format or in a JPEG compressed format. The proposed detection method is then analyzed and simulated for all the cases of the two tampering scenarios. The tampered region is detected by computing the averaged sum of absolute difference (ASAD) images between the examined image and versions of it resaved as JPEG at different quality factors. The experimental results show the advantages of the proposed method: the capability of detecting small and/or multiple tampered regions, simple computation, and hence fast processing.
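The detection step above can be sketched numerically. The fragment below is a minimal illustration rather than the paper's implementation: it assumes grayscale images as NumPy arrays, abstracts the JPEG resaving step (in practice the examined image would be recompressed with a real JPEG codec at several quality factors), and uses an illustrative block size of 8.

```python
import numpy as np

def asad_map(examined, resaved_versions, block=8):
    """Average the absolute-difference images over all resaved versions,
    then pool the averaged map into block-wise scores."""
    diffs = [np.abs(examined.astype(float) - r.astype(float))
             for r in resaved_versions]
    avg = np.mean(diffs, axis=0)
    h, w = avg.shape
    h, w = h - h % block, w - w % block
    blocks = avg[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3))  # one ASAD score per block

# Toy check: a region that reacts differently to "resaving" stands out.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (32, 32))
resaved = img.copy()
resaved[:8, :8] += 10  # stand-in for the resaving mismatch in a pasted region
scores = asad_map(img, [resaved])
```

Blocks whose ASAD score exceeds a threshold would be declared tampered; the threshold choice is not specified in the abstract.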
Funding: Supported by the National Natural Science Foundation of China (60702012) and the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.
Abstract: To compress hyperspectral images, a low-complexity discrete cosine transform (DCT)-based distributed source coding (DSC) scheme with Gray coding is proposed. Unlike most existing DSC schemes, which apply a transform in the spatial domain, the proposed algorithm applies the transform in the spectral domain. A set-partitioning approach is applied to reorganize the DCT coefficients into a wavelet-like tree structure and to extract the sign, refinement, and significance bitplanes. The extracted refinement bits are Gray encoded. Because of the dependency along the line dimension of hyperspectral images, a low-density parity-check (LDPC)-based Slepian-Wolf coder is adopted to implement the DSC strategy. Experimental results on the airborne visible/infrared imaging spectrometer (AVIRIS) dataset show that the proposed scheme achieves up to 6 dB improvement over DSC-based coders that apply the transform in the spatial domain, with significantly reduced computational complexity and memory usage.
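The Gray encoding of the refinement bits follows the standard binary-reflected Gray code, in which consecutive integer values differ in exactly one bit; how the bitplane values are grouped into integers is an assumption of this sketch.

```python
def gray_encode(n: int) -> int:
    """Binary-reflected Gray code: consecutive values differ in one bit."""
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    """Invert gray_encode by XOR-folding the shifted value."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

For example, `gray_encode(5)` gives 7 (binary 101 maps to 111), and decoding recovers the original value.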
Funding: This project was supported by the National Natural Science Foundation (No. 69972027).
Abstract: With the advances in display technology, three-dimensional (3-D) imaging systems are becoming increasingly popular. One way of stimulating 3-D perception is to use stereo pairs: a pair of images of the same scene acquired from different perspectives. Since there is inherent redundancy between the images of a stereo pair, data compression algorithms should be employed to represent stereo pairs efficiently. Existing techniques generally use block-based disparity compensation. To achieve a higher compression ratio, this paper combines wavelet-based mixed-resolution coding with subspace-projection-technique (SPT)-based disparity compensation to compress the stereo image data. Mixed-resolution coding is a perceptually justified technique in which one eye is presented with a low-resolution image and the other with a high-resolution image. Psychophysical experiments show that a stereo pair with one high-resolution image and one low-resolution image provides almost the same stereo depth as a pair with two high-resolution images. By combining the mixed-resolution coding and SPT-based disparity-compensation techniques, the reference (left) high-resolution image can be compressed by a hierarchical wavelet transform followed by vector quantization and a Huffman encoder. After two levels of wavelet decomposition, the subspace projection technique with fixed-block-size disparity-compensation estimation is applied to the low-resolution right and left images. At the decoder, the low-resolution right subimage is estimated using the disparity from the low-resolution left subimage. A full-size reconstruction is obtained by upsampling by a factor of 4 and reconstructing with the synthesis low-pass filter. Finally, experimental results are presented, which show that the scheme achieves a PSNR gain of about 0.92 dB over current block-based disparity-compensation coding techniques.
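Fixed-block-size disparity estimation of the kind described above can be sketched as a horizontal minimum-SAD search. This is a generic illustration under simplifying assumptions (grayscale arrays, integer disparities, sum-of-absolute-differences cost), not the paper's SPT formulation.

```python
import numpy as np

def disparity_for_block(right_blk, left_strip, max_disp):
    """Return the horizontal shift (integer disparity) in the left image
    that best matches a block from the right image, by minimum sum of
    absolute differences (SAD) over candidate shifts 0..max_disp."""
    best_d, best_cost = 0, float("inf")
    h, w = right_blk.shape
    for d in range(max_disp + 1):
        cand = left_strip[:, d:d + w]
        if cand.shape[1] < w:
            break  # shift runs off the edge of the search strip
        cost = float(np.abs(right_blk - cand).sum())
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# Toy usage: the right view sees a block shifted horizontally by 3 pixels.
rng = np.random.default_rng(0)
left = rng.integers(0, 256, (8, 16)).astype(float)
right_blk = left[:, 3:11]
d = disparity_for_block(right_blk, left, max_disp=5)
```

At the decoder, the estimated disparity would index into the left subimage to predict each right-image block.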
Abstract: In this paper, a novel coding method based on fuzzy vector quantization is presented for images corrupted by Gaussian white noise. By suppressing the high-frequency subbands of the wavelet-transformed image, the noise is significantly reduced, and the result is coded with fuzzy vector quantization. The experimental results show that the method can not only achieve a high compression ratio but also remove noise effectively.
Abstract: In the process of image transmission, the widely used JPEG and JPEG-2000 compression methods need more transmission time because it is difficult for them to compress an image at a very low rate. Recently, compressed sensing (CS) theory was proposed; it has attracted considerable attention because it can compress an image at a very low rate while the original image can still be reconstructed from only a small number of compressed measurements. In this work, CS theory is used to transmit high-resolution astronomical images, and a simulation environment for communication between a satellite and the Earth is built. Numerical experimental results show that CS can effectively reduce the image transmission and reconstruction time. Even at a very low compression rate, it still recovers a higher-quality astronomical image than the JPEG and JPEG-2000 compression methods.
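The recovery principle behind CS can be illustrated with a toy sparse-recovery example. The abstract does not specify the reconstruction algorithm, so the sketch below uses orthogonal matching pursuit (a standard greedy CS solver) with an assumed random Gaussian measurement matrix and synthetic sparse data.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column of A most
    correlated with the residual, then re-fit y on the chosen columns."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)
        residual = y - As @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((80, 160)) / np.sqrt(80)  # random measurement matrix
x_true = np.zeros(160)
x_true[[5, 60, 120]] = [4.0, -4.0, 4.0]           # 3-sparse signal
y = A @ x_true                                    # 80 measurements of 160 unknowns
x_hat = omp(A, y, 3)                              # recover from the compressed data
```

Only half as many measurements as unknowns are transmitted here, yet the sparse signal is recovered; real images are sparse only in a transform domain, which the sketch omits.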
Funding: Supported by the National Natural Science Foundation of China (61271375) and the BIT Foundation (2012CX04054).
Abstract: This paper applies compressive imaging to wide-area video surveillance. A parallel coded-aperture compressive imaging system and a corresponding motion-target detection algorithm operating on compressive image data are developed. Coded masks with random Gaussian, Toeplitz, and random binary patterns are used to simulate the compressive images. For the compressive images, a Gaussian mixture model is applied to the compressed image field to model the background, and a simple threshold test on the compressively sampled image is used to declare moving objects. Foreground image retrieval from the underdetermined measurements using a total variation optimization algorithm is explored. The signal-to-noise ratio (SNR) is employed to evaluate the quality of images recovered from the compressive samples, and receiver operating characteristic (ROC) curves are used to quantify the performance of the motion detection algorithm. Experimental results demonstrate that the low-dimensional compressed imaging representation is sufficient to detect spatial motion targets. Compared with the random Gaussian and Toeplitz masks, motion detection using the random binary phase mask yields better detection results; however, the random Gaussian and Toeplitz phase masks achieve higher-resolution reconstructed images.
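The background model with a threshold test can be sketched per pixel. The fragment below is a deliberate simplification: a single Gaussian per pixel stands in for the paper's Gaussian mixture, and the learning rate `alpha` and threshold multiplier `k` are illustrative assumptions.

```python
import numpy as np

def bg_update(mean, var, frame, alpha=0.05, k=2.5):
    """Per-pixel Gaussian background model with a k-sigma threshold test:
    a pixel is declared foreground if it deviates from the background
    mean by more than k standard deviations."""
    foreground = np.abs(frame - mean) > k * np.sqrt(var)
    # adapt the background statistics only where no motion is declared
    new_mean = np.where(foreground, mean, (1 - alpha) * mean + alpha * frame)
    new_var = np.where(foreground, var,
                       (1 - alpha) * var + alpha * (frame - new_mean) ** 2)
    return new_mean, new_var, foreground

# Toy usage: a flat background with one bright moving target.
mean, var = np.zeros((4, 4)), np.ones((4, 4))
frame = np.zeros((4, 4))
frame[1, 1] = 10.0
mean, var, fg = bg_update(mean, var, frame)
```

In the compressive setting the same test is applied in the measurement domain rather than on reconstructed pixels.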
Funding: Supported by the National Natural Science Foundation of China (60602057) and the Natural Science Foundation of Chongqing Science and Technology Commission (2006BB2373).
Abstract: A fast encoding algorithm for vector quantization based on the mean square error (MSE) distortion is introduced. The vectors, effectively constructed from wavelet transform (WT) coefficients of images, simplify the realization of the non-linear interpolated vector quantization (NLIVQ) technique and make the partial distance search (PDS) algorithm more efficient. Utilizing the relationship between a vector's L2-norm and the Euclidean distance, conditions for eliminating unnecessary codewords are obtained; using an inequality built from subvector L2-norms, further codewords are eliminated. During the codeword search, most unlikely codewords can be rejected by the proposed algorithm combined with the NLIVQ and PDS techniques. The experimental results show that the reduction in encoding time and computational complexity over the full-search method is substantial.
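Two of the elimination rules named above, the L2-norm test and the partial-distance early exit, can be sketched together. This is a generic illustration of those standard techniques; the wavelet-domain vector construction, the subvector inequality, and the NLIVQ details are omitted.

```python
import numpy as np

def pds_nearest(x, codebook):
    """Nearest codeword under squared Euclidean distance, using
    (1) the norm test: since ||x - c|| >= | ||x|| - ||c|| |, a codeword
        whose norm gap already exceeds the best distance is skipped, and
    (2) partial distance search: accumulation is aborted as soon as the
        running sum exceeds the best distance found so far."""
    x = np.asarray(x, dtype=float)
    norms = np.linalg.norm(codebook, axis=1)
    xn = np.linalg.norm(x)
    best_idx, best_dist = -1, float("inf")
    for i, c in enumerate(codebook):
        if (xn - norms[i]) ** 2 >= best_dist:
            continue                      # norm test: cannot beat current best
        dist = 0.0
        for xj, cj in zip(x, c):
            dist += (xj - cj) ** 2
            if dist >= best_dist:         # partial distance: early rejection
                break
        else:
            best_idx, best_dist = i, dist
    return best_idx, best_dist

# Toy usage: the query sits near codeword 7, which PDS should find.
rng = np.random.default_rng(2)
codebook = rng.standard_normal((32, 8))
x = codebook[7] + 0.01
idx, dist = pds_nearest(x, codebook)
```

The result matches a full search while skipping most of the per-dimension work for unpromising codewords.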