Funding: supported by the Academy of Finland (267581), the D2I SHOK project from Digile Oy, and Nokia Technologies (Tampere, Finland).
Abstract: Facial expression recognition is a hot topic in computer vision, but it remains challenging due to the feature inconsistency caused by person-specific characteristics of facial expressions. To address this challenge, and inspired by the recent success of the deep identity network (DeepID-Net) for face identification, this paper proposes a novel deep-learning-based framework for recognising human expressions from facial images. Compared to existing deep learning methods, the proposed framework, which is based on multi-scale global images and local facial patches, achieves significantly better facial expression recognition performance. Finally, we verify the effectiveness of the proposed framework through experiments on the public benchmark datasets JAFFE and Extended Cohn-Kanade (CK+).
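The "multi-scale global images and local facial patches" input scheme can be illustrated with a small preprocessing sketch. The scales, patch centres (stand-ins for detected landmarks such as eyes and mouth), and sizes below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def multiscale_inputs(face, scales=(1.0, 0.75, 0.5), out_size=48):
    """Centre-crop the global face image at several scales and resize
    each crop (nearest-neighbour, for simplicity) to a common input
    size, so every scale can feed its own network branch."""
    h, w = face.shape[:2]
    crops = []
    for s in scales:
        ch, cw = int(h * s), int(w * s)
        top, left = (h - ch) // 2, (w - cw) // 2
        crop = face[top:top + ch, left:left + cw]
        ys = np.linspace(0, ch - 1, out_size).astype(int)
        xs = np.linspace(0, cw - 1, out_size).astype(int)
        crops.append(crop[np.ix_(ys, xs)])
    return crops

def local_patches(face, centers, half=12):
    """Cut square patches around hypothetical landmark centres
    (e.g. eyes, mouth); out-of-bounds patches are clipped."""
    patches = []
    for (cy, cx) in centers:
        y0, x0 = max(cy - half, 0), max(cx - half, 0)
        patches.append(face[y0:cy + half, x0:cx + half])
    return patches

face = np.random.rand(96, 96)                      # toy grayscale face
crops = multiscale_inputs(face)                    # three global scales
patches = local_patches(face, [(30, 30), (30, 66), (70, 48)])
print([c.shape for c in crops], [p.shape for p in patches])
```

Each crop and patch would then be fed to a separate convolutional branch, with the branch features concatenated before the expression classifier.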
Abstract: Recent years have seen an explosion in graph data from a variety of scientific, social, and technological fields. Within these fields, emotion recognition is an interesting research area because it has many real-life applications, such as social robotics that increase a robot's interactivity with humans, driver safety monitoring, and pain monitoring during surgery. This paper proposes a novel graph-mining-based facial emotion recognition method that makes a paradigm shift in how the face region is represented: the face region is modelled as a graph of nodes and edges, and the gSpan frequent-subgraph mining algorithm is used to find the frequent substructures in the graph database of each emotion. To reduce the number of generated subgraphs, an overlap-ratio metric is applied. After encoding the final selected subgraphs, binary classification is applied to classify the emotion of a queried facial image using six levels of classification. Binary cat swarm intelligence is applied within each level to select the subgraphs that give the highest accuracy at that level. Experiments conducted on the Surrey Audio-Visual Expressed Emotion (SAVEE) database yield a final system accuracy of 90.00%. The results show a significant accuracy improvement (about 2%) by the proposed system over currently published works on the SAVEE database.
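The overlap-ratio pruning step can be sketched with subgraphs modelled as edge sets. The abstract does not give the metric's exact definition, so the ratio below (shared edges over the smaller subgraph's edge count) is one plausible reading, and the greedy keep-below-threshold policy is an assumption:

```python
def overlap_ratio(g1, g2):
    """Fraction of shared edges relative to the smaller sub-graph."""
    return len(g1 & g2) / min(len(g1), len(g2))

def prune_subgraphs(subgraphs, threshold=0.5):
    """Greedily keep each sub-graph only if its overlap with every
    already-kept sub-graph stays below the threshold."""
    kept = []
    for g in subgraphs:
        if all(overlap_ratio(g, k) < threshold for k in kept):
            kept.append(g)
    return kept

# edges as (node, node) pairs; frozensets make intersection cheap
g_a = frozenset({(1, 2), (2, 3), (3, 4)})
g_b = frozenset({(1, 2), (2, 3), (4, 5)})   # shares 2 of 3 edges with g_a
g_c = frozenset({(6, 7), (7, 8)})           # disjoint from g_a
print(len(prune_subgraphs([g_a, g_b, g_c])))  # g_b is pruned → 2
```

In the full system, the surviving subgraphs would be the ones encoded as features for the six-level binary classification cascade.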
Funding: supported by the Open Funding Project of the National Key Laboratory of Human Factors Engineering (Grant No. 6142222190309).
Abstract: As a key link in human-computer interaction, emotion recognition enables robots to correctly perceive user emotions and provide dynamic, adjustable services according to the emotional needs of different users, which is key to improving the cognitive level of robot services. Emotion recognition based on facial expressions and the electrocardiogram (ECG) has numerous industrial applications. First, a three-dimensional convolutional neural network deep learning architecture is used to extract spatial and temporal features from facial expression video data and ECG data, and emotion classification is carried out. Then the two modalities are fused at the data level and the decision level, respectively, and the emotion recognition results are reported. Finally, the single-modality and multi-modality results are compared and analyzed. The comparative analysis under the two fusion methods shows that multi-modal emotion recognition is substantially more accurate than single-modal recognition, and that decision-level fusion is easier to operate and more effective than data-level fusion.
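Decision-level fusion of the two modalities can be sketched as a weighted average of per-branch class probabilities; the fusion weight and logits below are illustrative assumptions, not values from the paper (data-level fusion would instead concatenate the raw or feature-level inputs before a single classifier):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def decision_level_fusion(face_logits, ecg_logits, w_face=0.6):
    """Weighted average of each modality's class probabilities,
    then argmax; one common decision-level fusion scheme."""
    p = w_face * softmax(face_logits) + (1 - w_face) * softmax(ecg_logits)
    return int(np.argmax(p))

face_logits = np.array([2.0, 0.5, 0.1])   # facial branch favours class 0
ecg_logits = np.array([0.2, 1.5, 0.3])    # ECG branch favours class 1
print(decision_level_fusion(face_logits, ecg_logits))  # → 0
```

Because each branch only contributes a probability vector, this scheme works even when the two networks are trained separately, which is one reason decision-level fusion is described as easier to operate than data-level fusion.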
Abstract: A critical difference between the right hemisphere hypothesis and the valence hypothesis of emotion processing is whether the processing of happy facial expressions is lateralized to the right or left hemisphere. In this study, participants from a Chinese sample were asked to classify happy or neutral facial expressions presented either bilaterally in both visual fields or unilaterally in the left visual field (LVF) or right visual field (RVF). They were required to make speeded responses using either the left or right hand. For both left- and right-hand responses, happy (and neutral) expressions presented in the LVF were identified faster than those presented in the RVF. Bilateral presentation showed no further advantage over LVF presentation. Moreover, left-hand responses were generally faster than right-hand responses, although this effect was more pronounced for neutral expressions. These findings were interpreted as supporting the right hemisphere hypothesis, with happy expressions being identified initially by the right hemisphere.
Funding: supported by the National Natural Science Foundation of China (61303150, 61472393), the China Postdoctoral Science Foundation (2012M521248), and the Anhui Province Innovative Funds on Intelligent Speech Technology and Industrialization (13Z02008).
Abstract: Facial features carry a wealth of information and are valuable for facial-attribute and affect-analysis tasks, but their diversity and complexity make face analysis difficult. To address this, starting from fine-grained facial features, a Facial Attribute Estimation and Expression Recognition (FAER) model based on a contextual channel attention mechanism is proposed. First, a ConvNext-based backbone network for local feature encoding is constructed, exploiting the backbone's effectiveness at encoding local features to fully represent the differences among local facial features. Second, a Contextual Channel Attention (CC Attention) mechanism is proposed, which dynamically and adaptively adjusts the weights on feature channels to represent both the global and local aspects of deep features, compensating for the backbone's limited ability to encode global features. Finally, different classification strategies are designed: for the facial attribute estimation (FAE) and facial expression recognition (FER) tasks, different loss-function combinations are adopted to drive the model to learn more fine-grained facial features. Experimental results show that the proposed FAER model achieves an average accuracy of 91.87% on the facial-attribute dataset CelebA (CelebFaces Attributes), 0.55 percentage points higher than the second-best model, SwinFace (Swin transformer for Face); on the facial-expression datasets RAF-DB and AffectNet it achieves accuracies of 91.75% and 66.66%, respectively, which are 0.84 and 0.43 percentage points higher than the second-best model, TransFER (Transformers for Facial Expression Recognition).
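The channel-weighting core of such an attention mechanism can be sketched in squeeze-and-excitation style: pool each channel to a scalar, pass the pooled vector through a small bottleneck, and rescale channels by the resulting sigmoid gates. This is a generic illustration rather than the paper's CC Attention (which additionally mixes contextual information), and the bottleneck weights below are random stand-ins:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel re-weighting:
    global-average-pool each channel, apply a ReLU bottleneck,
    then rescale channels by sigmoid gates in (0, 1)."""
    c = feat.shape[0]
    squeeze = feat.reshape(c, -1).mean(axis=1)          # (C,) channel context
    hidden = np.maximum(w1 @ squeeze, 0.0)              # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))        # sigmoid weights
    return feat * gates[:, None, None]                  # per-channel rescale

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))      # C x H x W feature map
w1 = rng.standard_normal((2, 8)) * 0.1     # reduce C=8 down to 2
w2 = rng.standard_normal((8, 2)) * 0.1     # expand back to C=8
out = channel_attention(feat, w1, w2)
print(out.shape)                            # shape is preserved
```

Since the gates lie strictly in (0, 1), the module can only attenuate channels; learning the bottleneck weights lets the network emphasise the channels most useful for the attribute or expression head.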
Abstract: Objective: To investigate the clinical effect of Liangxue Qingfei decoction combined with wet compresses of Xiaohuang liniment in treating facial seborrheic dermatitis of the lung-stomach heat-excess pattern. Methods: 102 patients with facial seborrheic dermatitis of the lung-stomach heat-excess pattern admitted between January 2022 and January 2023 were retrospectively selected and divided by the random number table method into a study group (51 cases) and a conventional group (51 cases). The conventional group received wet compresses of Xiaohuang liniment; the study group received Liangxue Qingfei decoction combined with the wet compresses. After treatment, the two groups were compared on clinical efficacy, the Eczema Area and Severity Index (EASI), the Investigator's Global Assessment (IGA), matrix metalloproteinase-3 (MMP-3) and interleukin-1β (IL-1β) levels, and adverse reactions during treatment. Results: After treatment, traditional Chinese medicine symptom scores decreased in both groups (P<0.05) and were lower in the study group (P<0.05). The total effective rates of the study and conventional groups were 94.12% (48/51) and 72.55% (37/51), respectively, higher in the study group (P<0.05). IGA and EASI scores decreased after treatment in both groups and were lower in the study group (P<0.05). MMP-3 and IL-1β levels decreased in both groups after treatment and were lower in the study group (P<0.05). The difference in adverse-reaction rates between the two groups was not statistically significant (P>0.05). Conclusion: Liangxue Qingfei decoction combined with wet compresses of Xiaohuang liniment improves clinical efficacy in patients with facial seborrheic dermatitis of the lung-stomach heat-excess pattern, relieves symptoms, lowers IGA and EASI scores, improves serum inflammation levels, and has a good safety profile.