Abstract
Recognizing health status from human faces is a challenging research topic. In particular, facial expressions can indirectly reflect a person's inner health status, which gives facial expression recognition significant commercial and research value. Most prior work on facial expression recognition uses traditional methods, whose accuracy depends heavily on hand-crafted feature extraction; deep learning has substantially advanced this line of research. This paper proposes a dual-branch network that merges global facial information with local information obtained through an attention mechanism to identify human facial emotional information. A shared pre-training module extracts low-level semantic information from both the global image and the local sub-images. The dual-branch architecture uses an attention module to capture the relationships between different sub-images and fuse the local facial features. Experimental results demonstrate that accuracy on the CK+ dataset reaches 95.96%, an improvement over other existing methods.
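The dual-branch idea described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the backbone, dimensions, and crop layout are all hypothetical, and the attention step is a generic scaled dot-product attention standing in for the paper's attention module.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_backbone(x, W):
    # Shared pre-training module: the same projection + ReLU is applied to
    # both the global face image and each local sub-image (weights are shared).
    return np.maximum(x @ W, 0.0)

def attention_fuse(local_feats, query):
    # Scaled dot-product attention over sub-image features, modeling the
    # relationships between different crops; the global feature acts as query.
    d = local_feats.shape[-1]
    scores = local_feats @ query / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()            # softmax over the sub-images
    return weights @ local_feats, weights

# Hypothetical dimensions: 128-d input features, 64-d backbone output,
# four local crops (e.g. eyes, nose, mouth regions).
W = rng.normal(size=(128, 64)) * 0.1
global_img = rng.normal(size=(128,))
local_crops = rng.normal(size=(4, 128))

g = shared_backbone(global_img, W)        # global branch
l = shared_backbone(local_crops, W)       # local branch, one row per crop
fused_local, attn = attention_fuse(l, g)  # attention-weighted local fusion
merged = np.concatenate([g, fused_local]) # merged vector fed to a classifier
```

In this sketch the two branches share weights at the low level (the `W` projection) and diverge only in how their outputs are combined, mirroring the paper's shared pre-training module followed by attention-based fusion.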
Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.