Journal of Xidian University ›› 2022, Vol. 49 ›› Issue (4): 127-133. doi: 10.19665/j.issn1001-2400.2022.04.015

• Computer Science and Technology •

Micro-expression recognition based on two-channel decision information fusion

RONG Ruyi, XUE Peiyun, BAI Jing, JIA Hairong, XIE Yali

  1. School of Information and Computer Science, Taiyuan University of Technology, Taiyuan 030024, Shanxi, China
  • Received: 2021-06-02 Online: 2022-08-20 Published: 2022-08-15
  • About the authors: RONG Ruyi (1997-), male, M.S. student at Taiyuan University of Technology, E-mail: ruyrong@foxmail.com | XUE Peiyun (1990-), female, lecturer, Ph.D., E-mail: 236139168@qq.com | BAI Jing (1965-), female, professor, Ph.D., E-mail: bj613@126.com | JIA Hairong (1977-), female, professor, Ph.D., E-mail: helenjia722@163.com | XIE Yali (1996-), female, M.S. student at Taiyuan University of Technology, E-mail: 2297614926@qq.com
  • Supported by: Shanxi Applied Basic Research Program (201901D111094); Shanxi Merit-Funded Program for Scientific and Technological Activities of Returned Overseas Scholars (20200017); Shanxi Basic Research Program (20210302123186)



Abstract:

Micro-expression, an important channel for revealing underlying emotion, is a form of unconscious nonverbal facial information that is not controlled by the brain and reflects people's true psychological experience and state. However, micro-expression movements are small in amplitude, appear quickly, and are hard to capture, which makes it difficult to improve the accuracy of single-mode micro-expression recognition. To solve this problem, this paper proposes an algorithm for extracting the facial color features of micro-expressions and fuses the extracted features with the texture features of micro-expressions at the decision level, thereby constructing a bimodal micro-expression emotion recognition model. First, the model extracts texture features from the preprocessed micro-expression data with the uniform LBP-TOP algorithm. Second, the Lab color difference between each pair of corresponding pixels in two frames of a micro-expression image sequence is computed to obtain the facial color features, and embedded feature selection is applied to eliminate redundant features. Then, a classifier is trained for each of the two modes, and the classification information obtained from the two modes is fused at the decision level. Finally, the micro-expression emotion classification result is obtained. The model was tested on the CASME II and SMIC micro-expression datasets. Experimental results show that the average recognition accuracies of the texture and facial-color single modes are about 64.73% and 51.64%, and about 63.58% and 50.48%, on the two datasets, while the recognition accuracies after decision fusion rise to about 68.11% and 66.43%, higher than before fusion, which indicates that the proposed bimodal emotion recognition model significantly improves micro-expression recognition.
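The per-pixel Lab color-difference feature described in the abstract can be illustrated with a short sketch. This is not the authors' code: it assumes the two frames (e.g. onset and apex) are already aligned and converted to CIELAB, and it uses the simple CIE76 Euclidean distance as the color-difference measure.

```python
import numpy as np

def lab_color_diff(frame_a, frame_b):
    """Per-pixel CIE76 color difference between two aligned Lab frames.

    frame_a, frame_b: (H, W, 3) arrays holding the L*, a*, b* channels.
    Returns an (H*W,) feature vector of Euclidean Lab distances.
    """
    diff = frame_a.astype(np.float64) - frame_b.astype(np.float64)
    delta_e = np.sqrt((diff ** 2).sum(axis=-1))  # CIE76 delta-E per pixel
    return delta_e.ravel()

# Toy example: a single pixel whose a* channel shifts by 3 units.
a = np.zeros((1, 1, 3))
b = np.array([[[0.0, 3.0, 0.0]]])
print(lab_color_diff(a, b))  # delta-E = 3 for that pixel
```

In practice the resulting vector would still be high-dimensional, which is why the paper follows this step with embedded feature selection to discard redundant components.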

Key words: facial color features, texture features, Eulerian video magnification, feature selection, decision fusion
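Decision-level fusion of the two single-mode classifiers can be sketched as a weighted combination of their class-posterior scores. The weights, class set, and score values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def decision_fusion(p_texture, p_color, w_texture=0.6, w_color=0.4):
    """Fuse class-probability vectors from the texture and color classifiers.

    p_texture, p_color: (n_classes,) probability vectors from the two modes.
    Returns the index of the winning emotion class after weighted fusion.
    """
    fused = w_texture * np.asarray(p_texture) + w_color * np.asarray(p_color)
    return int(np.argmax(fused))

# Illustrative 3-class case (e.g. positive / negative / surprise):
p_tex = [0.2, 0.7, 0.1]   # texture channel favours class 1
p_col = [0.6, 0.3, 0.1]   # color channel favours class 0
print(decision_fusion(p_tex, p_col))  # fused scores [0.36, 0.54, 0.10] -> class 1
```

A weighted sum lets the stronger texture mode dominate while still allowing the color mode to break ties, which matches the abstract's observation that fusion outperforms either single mode.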

CLC number:

  • TP183