Abstract
In Chinese text classification tasks, deep learning neural network methods offer automatic feature extraction and strong feature representation, but the resulting models lack interpretability. To address this, a Text-CNN+Multi-Head Attention model is proposed, introducing a multi-head self-attention mechanism to compensate for Text-CNN's weak interpretability. The model first uses the Text-CNN neural network to efficiently extract local feature information from the text; then the multi-head self-attention mechanism is introduced to maximize Text-CNN's parallel computing capability while emphasizing the capture of global information in the text sequence; finally, feature extraction of the text information is completed in both time and space. Experimental results show that, compared with other models, the proposed model improves accuracy by 1 to 2 percentage points while maintaining running speed.
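The abstract describes feeding Text-CNN feature sequences into multi-head self-attention so each position can attend to the whole sequence. Below is a minimal NumPy sketch of standard scaled dot-product multi-head self-attention; all dimensions, matrix names, and the toy input are illustrative assumptions, not the paper's actual hyperparameters or implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, w_q, w_k, w_v, w_o, num_heads):
    """Scaled dot-product multi-head self-attention over a sequence.

    x: (seq_len, d_model) feature sequence (e.g. Text-CNN feature maps).
    w_q, w_k, w_v, w_o: (d_model, d_model) projection matrices (hypothetical names).
    """
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    # Project, then split into heads: (num_heads, seq_len, d_head).
    def project(m):
        return (x @ m).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    q, k, v = project(w_q), project(w_k), project(w_v)
    # Attention weights: every position attends to the full sequence,
    # which is how the model captures global sequence information.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    attn = softmax(scores, axis=-1)          # (num_heads, seq_len, seq_len)
    out = attn @ v                           # (num_heads, seq_len, d_head)
    # Concatenate heads and apply the output projection.
    out = out.transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ w_o, attn

# Toy example: 5 token positions, model width 8, 2 heads.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
w_q, w_k, w_v, w_o = (rng.normal(size=(8, 8)) for _ in range(4))
y, attn = multi_head_self_attention(x, w_q, w_k, w_v, w_o, num_heads=2)
print(y.shape)     # (5, 8)
print(attn.shape)  # (2, 5, 5)
```

The attention matrix `attn` is also what gives the combined model its interpretability: each row is a probability distribution showing which positions a given position attends to.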
Authors
Xiong Xuan; Yan Peimin (College of Communication and Information Engineering, Shanghai University, Shanghai 200444, China)
Source
《电子测量技术》 (Electronic Measurement Technology)
2020, No. 10, pp. 125-130 (6 pages)
About the Author
Xiong Xuan, master's student; main research interest: natural language processing. E-mail: 2444641647@qq.com