Abstract
Sentiment analysis is a hot topic in artificial intelligence and social media research, with important theoretical and practical value. To address the problem of emotional mutual exclusion between text and images, caused by the casualness and emotional subjectivity of social media content, a cross-modal social media sentiment analysis method based on image-text fusion is proposed. The method not only learns the emotional complementarity between text and images, but also avoids inconsistent emotional expression by introducing a modal contribution calculation. Experimental results on the Veer and Weibo datasets show that the method improves sentiment classification accuracy by about 4% on average over existing fusion methods. The proposed image-text fusion method handles emotional mutual exclusion between modalities well and has strong sentiment recognition ability.
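The abstract does not specify how the modal contribution calculation is implemented; the following is a minimal sketch of one plausible realization, assuming text and image features are already extracted as fixed-length vectors and that "contribution" is modeled as a learned softmax weight over the two modalities. All layer sizes and names (ContributionFusion, text_proj, image_proj, contrib) are illustrative assumptions, not the authors' actual architecture.

```python
# Minimal PyTorch sketch of contribution-weighted image-text fusion (illustrative only).
import torch
import torch.nn as nn


class ContributionFusion(nn.Module):
    def __init__(self, text_dim=300, image_dim=512, hidden_dim=128, num_classes=3):
        super().__init__()
        # Project both modalities into a shared space.
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        # Score each modality; softmax over the scores gives contribution weights.
        self.contrib = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, text_feat, image_feat):
        t = torch.tanh(self.text_proj(text_feat))    # (batch, hidden)
        v = torch.tanh(self.image_proj(image_feat))  # (batch, hidden)
        # Down-weighting the modality with the weaker sentiment cue is one way
        # to soften text-image emotional conflict before classification.
        scores = torch.cat([self.contrib(t), self.contrib(v)], dim=1)  # (batch, 2)
        w = torch.softmax(scores, dim=1)
        fused = w[:, 0:1] * t + w[:, 1:2] * v
        return self.classifier(fused)


# Usage with random features standing in for real text/image encodings.
model = ContributionFusion()
logits = model(torch.randn(4, 300), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 3])
```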
Author
SHEN Zi-qiang (申自强), School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang 212013, China
Source
Software Guide (《软件导刊》), 2019, No. 1, pp. 9-13, 16 (6 pages)
Funding
General Program of the National Natural Science Foundation of China (No. 61272211)
Keywords
social media
sentiment analysis
image-text fusion
contribution calculation
cross-modal
About the Author
SHEN Zi-qiang (b. 1991), male, master's student at the School of Computer Science and Communication Engineering, Jiangsu University; research interests: artificial intelligence and emotion recognition.