Funding: The authors would like to acknowledge the National Natural Science Foundation of China (Grant 61973037 and Grant 61673066) for funding the experiments.
Abstract: In low signal-to-noise ratio (SNR) environments, traditional radar emitter recognition (RER) methods struggle to recognize multiple radar emitter signals in parallel. This paper proposes a multi-label classification and recognition method for multiple radar-emitter modulation types based on a residual network. The method can quickly classify and recognize multi-modulation radar time-domain aliasing signals in parallel under low SNRs. First, we perform time-frequency analysis on the received signal and extract a normalized time-frequency image through the short-time Fourier transform (STFT). The time-frequency distribution image is then denoised using a deep normalized convolutional neural network (DNCNN). Second, a multi-label classification and recognition model for multi-modulation radar emitter time-domain aliasing signals is established and trained on a dataset of radar-signal time-frequency distribution images. Finally, the time-frequency image is classified and recognized by the trained model, completing the automatic classification and recognition of the time-domain aliasing signal. Simulation results show that the proposed method can classify and recognize radar emitter signals of different modulation types in parallel under low SNRs.
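As a rough illustration of the pipeline this abstract describes (STFT, normalized time-frequency image, denoising, multi-label residual-network classifier), the sketch below shows one way the stages could be wired together. The image size, network backbone (a stock ResNet-18 standing in for the authors' residual network), number of modulation classes, and the omission of the DNCNN denoiser are all assumptions for illustration, not the paper's actual settings.

```python
# Hypothetical sketch: STFT -> normalized time-frequency image -> multi-label ResNet head.
# The DNCNN denoising stage from the abstract is not reproduced here; all sizes are assumed.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.signal import stft
from torchvision.models import resnet18

def signal_to_tf_image(x, fs, size=224):
    """Compute the STFT magnitude of a 1-D signal and normalize it to [0, 1]."""
    _, _, Z = stft(x, fs=fs, nperseg=256, noverlap=192)
    img = np.abs(Z)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)      # normalized TF image
    t = torch.tensor(img, dtype=torch.float32)[None, None]         # (1, 1, F, T)
    t = F.interpolate(t, size=(size, size), mode="bilinear", align_corners=False)
    return t.squeeze(0).repeat(3, 1, 1)                            # 3 channels for the ResNet

class MultiLabelRER(nn.Module):
    """ResNet backbone with a sigmoid multi-label head (one output per modulation type)."""
    def __init__(self, num_types=8):
        super().__init__()
        self.backbone = resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_types)

    def forward(self, x):
        return torch.sigmoid(self.backbone(x))   # independent probability per modulation type

# Usage (synthetic LFM chirp standing in for a received aliased signal):
fs = 1e6
t_axis = np.arange(0, 1e-3, 1 / fs)
x = np.cos(2 * np.pi * (1e4 * t_axis + 0.5 * 2e8 * t_axis ** 2))
probs = MultiLabelRER()(signal_to_tf_image(x, fs).unsqueeze(0))    # (1, num_types)
```

Training such a head would use a binary cross-entropy loss per output (e.g. `nn.BCEWithLogitsLoss` without the final sigmoid), which is what lets several modulation types be detected simultaneously in a time-domain aliasing signal.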
Abstract: To address the low recognition accuracy and poor generalization of current multimodal emotion recognition algorithms in modality feature extraction and inter-modality information fusion, a multimodal emotion recognition algorithm based on speech, text, and facial expressions is proposed. First, a shallow feature extraction network (Sfen) and a parallel convolution module (Pconv) are designed to extract emotional features from speech and text, and an improved Inception-ResnetV2 model extracts expression features from video sequences. Second, to strengthen the correlation between modalities, a cross-attention module is designed to optimize the fusion of speech and text features. Finally, a bidirectional long short-term memory module based on an attention mechanism (BiLSTM-Attention) focuses on the key information and preserves the temporal correlation between modality representations. Experiments comparing different combinations of the three modalities show that fusing the speech and text features first significantly improves recognition accuracy. Results on the public emotion datasets CH-SIMS and CMU-MOSI show that the proposed model achieves higher recognition accuracy than the baseline models, reaching 97.82% for three-class and 98.18% for binary classification, demonstrating its effectiveness.
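To make the fusion order described above more concrete, the sketch below shows one plausible arrangement: speech and text features are fused first by cross-attention, and a BiLSTM with attention pooling handles the sequential (visual) features before a joint classification head. The feature dimensions, the use of `nn.MultiheadAttention` for the cross-attention module, and the way the three streams are concatenated are assumptions for illustration; they are not the Sfen/Pconv or Inception-ResnetV2 components from the paper, whose outputs are stood in for by random tensors.

```python
# Hypothetical sketch of the described fusion: cross-attention over speech/text first,
# BiLSTM-Attention over the visual sequence, then a joint classification head.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Speech attends to text and text attends to speech; the pooled results are concatenated."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.s2t = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.t2s = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, speech, text):                      # (B, Ts, D), (B, Tt, D)
        s, _ = self.s2t(speech, text, text)               # speech queries, text keys/values
        t, _ = self.t2s(text, speech, speech)             # text queries, speech keys/values
        return torch.cat([s.mean(1), t.mean(1)], dim=-1)  # (B, 2D)

class BiLSTMAttention(nn.Module):
    """BiLSTM over a feature sequence with additive attention pooling over time."""
    def __init__(self, in_dim=128, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, bidirectional=True, batch_first=True)
        self.score = nn.Linear(2 * hidden, 1)

    def forward(self, seq):                               # (B, T, in_dim)
        h, _ = self.lstm(seq)                             # (B, T, 2*hidden)
        w = torch.softmax(self.score(h), dim=1)           # attention weights over time steps
        return (w * h).sum(dim=1)                         # (B, 2*hidden)

class EmotionClassifier(nn.Module):
    """Fuse speech and text first, concatenate with attended visual features, then classify."""
    def __init__(self, dim=128, num_classes=3):
        super().__init__()
        self.fusion = CrossAttentionFusion(dim)
        self.visual = BiLSTMAttention(in_dim=dim, hidden=dim)
        self.head = nn.Linear(4 * dim, num_classes)

    def forward(self, speech, text, video):
        fused = self.fusion(speech, text)                 # (B, 2*dim)
        vis = self.visual(video)                          # (B, 2*dim)
        return self.head(torch.cat([fused, vis], dim=-1))

# Usage with random features standing in for the Sfen/Pconv and Inception-ResnetV2 outputs:
speech, text, video = torch.randn(2, 50, 128), torch.randn(2, 30, 128), torch.randn(2, 40, 128)
logits = EmotionClassifier()(speech, text, video)         # (B, 3) for the three-class setting
```

The design choice the abstract emphasizes, fusing speech and text before bringing in the visual stream, corresponds here to calling `CrossAttentionFusion` first and only concatenating the BiLSTM-Attention output at the final head.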