Abstract: Lattice vector quantization (LVQ) has been used for real-time speech and audio coding systems. Compared with conventional vector quantization, LVQ has two main advantages: it has a simple and fast encoding process, and it significantly reduces the amount of memory required. Therefore, LVQ is suitable for use in low-complexity speech and audio coding. In this paper, we describe the basic concepts of LVQ and its advantages over conventional vector quantization. We also describe some LVQ techniques that have been used in speech and audio coding standards of international standards developing organizations (SDOs).
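As a concrete illustration of why LVQ encoding is simple and fast, the sketch below finds the nearest point of the D_n lattice using the closed-form rounding rule of Conway and Sloane: no stored codebook and no exhaustive search are needed. This is a generic textbook example, not the quantizer of any particular standard mentioned above.

```python
import numpy as np

def nearest_Dn(x):
    """Return the D_n lattice point (an integer vector with even coordinate
    sum) closest to x, using the Conway-Sloane rounding rule."""
    f = np.round(x)                       # componentwise rounding -> nearest Z^n point
    if int(f.sum()) % 2 == 0:             # even coordinate sum: already in D_n
        return f.astype(int)
    # Odd sum: re-round the worst-rounded component in the other direction,
    # which restores an even coordinate sum at minimal extra distortion.
    k = int(np.argmax(np.abs(x - f)))
    f[k] += 1.0 if x[k] > f[k] else -1.0
    return f.astype(int)

if __name__ == "__main__":
    print(nearest_Dn(np.array([0.6, -1.2, 2.9, 0.4])))   # -> [ 0 -1  3  0]
```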
Abstract: Noise feedback coding (NFC) has attracted renewed interest with the recent standardization of backward-compatible enhancements for ITU-T G.711 and G.722. It has also been revisited with the emergence of proprietary speech codecs, such as BV16, BV32, and SILK, that have structures different from CELP coding. In this article, we review NFC and describe a novel coding technique that optimally shapes coding noise in embedded pulse-code modulation (PCM) and embedded adaptive differential PCM (ADPCM). We describe how this new technique was incorporated into the recent ITU-T G.711.1, G.711 App. III, and G.722 Annex B (G.722B) speech-coding standards.
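The following toy loop sketches the basic noise feedback idea in scalar form: the previous quantization error, weighted by an assumed shaping coefficient beta, is fed back into the quantizer input so that the output noise is shaped by (1 - beta*z^-1) rather than being white. It is a minimal illustration, not the shaping filter used in G.711.1, G.711 App. III, or G.722B.

```python
import numpy as np

def nfc_encode(x, step=0.1, beta=0.5):
    """Uniform scalar quantizer wrapped in a first-order noise feedback loop
    (illustrative parameters; step and beta are assumptions)."""
    y = np.empty_like(x)
    q_prev = 0.0                          # previous quantization error
    for n, xn in enumerate(x):
        d = xn - beta * q_prev            # feed back shaped past error
        y[n] = step * np.round(d / step)  # uniform scalar quantizer
        q_prev = y[n] - d                 # current quantization error
    return y                              # output noise = q(n) - beta*q(n-1)

if __name__ == "__main__":
    t = np.arange(200)
    x = np.sin(2 * np.pi * 0.05 * t)
    y = nfc_encode(x)
    print("max abs coding error:", np.max(np.abs(y - x)))
```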
Abstract: A variable-bit-rate characteristic waveform interpolation (VBR-CWI) speech codec with an average bit rate of about 1.8 kbit/s, which integrates phonetic classification into characteristic waveform (CW) decomposition, is proposed. Each input frame is classified into one of four phonetic classes. Non-speech frames are represented with a Bark-band noise model. The extracted CWs become rapidly evolving waveforms (REWs) for unvoiced frames and slowly evolving waveforms (SEWs) for stationary voiced frames, while mixed voiced frames use the same CW decomposition as in conventional CWI. Experimental results show that the proposed codec eliminates most of the buzzy and noisy artifacts of the fixed-bit-rate characteristic waveform interpolation (FBR-CWI) speech codec, that its average bit rate can be much lower, and that its reconstructed speech quality is much better than FS1016 CELP at 4.8 kbit/s and similar to G.723.1 ACELP at 5.3 kbit/s.
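To make the classification step concrete, here is a toy four-way frame classifier in the spirit of the scheme above; the features, thresholds, and per-class routing comments are assumptions for illustration and do not reproduce the codec's actual decision logic.

```python
import numpy as np

def classify_frame(frame, energy_thr=1e-4, zcr_thr=0.3, voicing_thr=0.6):
    """Toy four-way phonetic classifier (assumed features and thresholds)."""
    energy = np.mean(frame ** 2)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0      # zero-crossing rate
    lag = 40                                                    # fixed lag as a crude voicing cue
    voicing = np.dot(frame[lag:], frame[:-lag]) / (np.dot(frame, frame) + 1e-12)
    if energy < energy_thr:
        return "non-speech"         # -> Bark-band noise model
    if zcr > zcr_thr and voicing < voicing_thr:
        return "unvoiced"           # -> CWs treated as REWs
    if voicing > voicing_thr:
        return "stationary_voiced"  # -> CWs treated as SEWs
    return "mixed_voiced"           # -> conventional CWI-style SEW/REW split

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(classify_frame(0.001 * rng.standard_normal(160)))    # low energy -> "non-speech"
```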
Abstract: To make a multiple-description codec adaptive to the packet loss rate, and thereby minimize the final distortion, a novel adaptive multiple-description sinusoidal coder (AMDSC) is proposed, based on a sinusoidal model and a noise model. First, the sinusoidal parameters are extracted in the sinusoidal model and sorted in decreasing order; odd-indexed and even-indexed parameters are assigned to two separate descriptions. Second, the output vector of the noise model is quantized with a split vector quantizer, and the two sub-vectors are likewise placed into the two descriptions. Finally, the number of extracted parameters and the redundancy between the two descriptions are adjusted according to the packet loss rate of the network. Analytical and experimental results show that the proposed AMDSC outperforms existing MD speech coders by taking network loss characteristics into account, making it well suited to unreliable channels.
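The sketch below illustrates the odd/even split into two descriptions together with a simple loss-rate-driven rule for the number of retained parameters and the shared redundancy. The specific rule (a linear mapping from loss rate to the retained and duplicated counts) is an assumption for illustration, not the AMDSC bit-allocation procedure.

```python
import numpy as np

def build_descriptions(params, loss_rate):
    """Split sinusoidal parameters into two descriptions with loss-rate-dependent
    redundancy (assumed linear rule, for illustration only)."""
    # Sort parameters by decreasing magnitude (a proxy for perceptual importance).
    kept_order = np.argsort(-np.abs(params))
    sorted_params = params[kept_order]

    # Higher loss rate -> keep fewer parameters, but duplicate more of the most
    # important ones in both descriptions.
    n_keep = max(4, int(len(sorted_params) * (1.0 - loss_rate)))
    n_dup = int(n_keep * loss_rate)                 # redundant (shared) parameters
    kept = sorted_params[:n_keep]

    shared = kept[:n_dup]
    desc1 = np.concatenate([shared, kept[n_dup::2]])       # shared + odd-indexed remainder
    desc2 = np.concatenate([shared, kept[n_dup + 1::2]])   # shared + even-indexed remainder
    return desc1, desc2

if __name__ == "__main__":
    amps = np.random.rand(20)
    d1, d2 = build_descriptions(amps, loss_rate=0.2)
    print(len(d1), len(d2))                                 # -> 10 9
```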