Fujitsu has developed an AI-powered facial expression recognition technology that detects subtle changes in facial expression with a high degree of accuracy. The technology was developed in collaboration with the School of Computer Science at Carnegie Mellon University (US).
One of the obstacles to facial expression recognition is the difficulty of gathering the large amounts of data needed to train detection models for every facial pose, since faces are captured at a wide variety of angles in real-world applications. To address this problem, Fujitsu has developed a technology that applies a different normalization process to each facial image.
For example, when the subject's face is captured at an oblique angle, the technology adjusts the image so that it more closely resembles a frontal view, allowing the detection model to be trained with a relatively small amount of data. The technology can accurately detect subtle emotional changes, including uncomfortable or nervous laughter and confusion, even when the subject's face moves in a real-world context.
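To make the idea of pose normalization concrete, the sketch below warps an obliquely captured face toward a canonical frontal layout using a few detected landmarks. Fujitsu has not published its exact method, so the landmark choice, canonical positions, and output size here are purely illustrative assumptions.

```python
# Illustrative sketch only; not Fujitsu's published implementation.
# Warps a face crop so that detected landmarks line up with a frontal template.
import cv2
import numpy as np

# Hypothetical canonical positions (left eye, right eye, nose tip) in a 128x128 frontal crop.
CANONICAL_POINTS = np.float32([[40, 50], [88, 50], [64, 85]])

def normalize_pose(image, landmarks):
    """Warp a face image so its landmarks align with a frontal template.

    image     -- BGR face crop (numpy array)
    landmarks -- detected (x, y) positions of left eye, right eye, nose tip
    """
    src = np.float32(landmarks)
    # Estimate the affine transform mapping detected landmarks onto the frontal layout.
    matrix = cv2.getAffineTransform(src, CANONICAL_POINTS)
    return cv2.warpAffine(image, matrix, (128, 128))
```

A transform of this kind lets a detector trained mostly on frontal faces be applied to oblique captures, which is the effect described above.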
Fujitsu anticipates that the new technology will be used in a variety of real-world applications, such as facilitating communication to improve employee engagement and improving safety for drivers and factory workers.
To "read" human emotions more effectively, it is critical to capture the subtle facial changes associated with emotions, such as understanding, bewilderment, and stress. To achieve this, developers have increasingly relied on Action Units (AU), which express the "units" of motion corresponding to each muscle of the face based on an anatomical classification system. For example, UA have been used by professionals in fields as varied as psychological research and animation.
AUs are classified into roughly 30 types based on the movements of individual facial muscles, including those of the eyebrows and cheeks. By integrating these AUs into its technology, Fujitsu has pioneered a new approach to detecting even subtle changes in facial expression. To detect AUs accurately, the underlying deep learning models require large amounts of training data. In real-world situations, however, cameras capture faces at various angles, sizes, and positions, making it difficult to prepare large-scale training data for every such visual/spatial state. As a result, variation in the captured images degrades detection accuracy.
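For reference, the snippet below lists a handful of the standard Action Units from the Facial Action Coding System (FACS) together with a trivial helper for turning detected AU codes into readable labels. The selection is a small subset used for illustration, not Fujitsu's internal representation.

```python
# A small subset of the roughly 30 Action Units defined in FACS;
# names follow the standard FACS labels.
ACTION_UNITS = {
    1:  "Inner brow raiser",
    2:  "Outer brow raiser",
    4:  "Brow lowerer",
    6:  "Cheek raiser",
    9:  "Nose wrinkler",
    12: "Lip corner puller",    # typical in smiles
    15: "Lip corner depressor",
    20: "Lip stretcher",        # often associated with tension
}

def describe(active_aus):
    """Return human-readable labels for a set of detected AU codes."""
    return [ACTION_UNITS.get(au, f"AU{au}") for au in sorted(active_aus)]

print(describe({6, 12}))  # e.g. ['Cheek raiser', 'Lip corner puller']
```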
Technologies developed
In collaboration with Carnegie Mellon University's School of Computer Science, Fujitsu Laboratories, Ltd. and Fujitsu Laboratories of America, Inc. have developed an AI-based facial expression recognition technology that can detect AUs with high accuracy, even with limited training data. It combines two techniques:
1. A normalization process that adjusts captured faces to more closely resemble a frontal image.
2. Analysis of the image regions that most strongly influence detection, carried out separately for each AU.
To address the pose problem, the technology analyzes which areas of the captured face image have a significant influence on the detection of each AU and adjusts the degree of rotation, magnification, and reduction accordingly. By applying a different normalization process for each individual AU, the technology can detect AUs more accurately.
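The sketch below captures the per-AU normalization idea in minimal form: each AU gets its own crop region and magnification before its detector runs. The regions, scale factors, and detector interface are hypothetical assumptions; Fujitsu has not disclosed the actual parameters.

```python
# Minimal sketch of per-AU normalization; all values are illustrative.
import cv2

# Hypothetical per-AU settings: which part of the pose-normalized face to crop
# and how much to magnify it before running that AU's detector.
AU_NORMALIZATION = {
    4:  {"region": (0, 0, 128, 64),  "scale": 1.5},   # brow lowerer: upper face
    12: {"region": (0, 64, 128, 64), "scale": 1.2},   # lip corner puller: lower face
}

def normalize_for_au(face, au):
    """Crop and rescale a pose-normalized face for one specific AU."""
    cfg = AU_NORMALIZATION[au]
    x, y, w, h = cfg["region"]
    crop = face[y:y + h, x:x + w]
    return cv2.resize(crop, None, fx=cfg["scale"], fy=cfg["scale"])

def detect_aus(face, detectors):
    """Run each AU-specific detector on its own normalized view of the face."""
    return {au: model.predict(normalize_for_au(face, au))
            for au, model in detectors.items()}
```

Tailoring the view to each AU concentrates the limited training data on the facial region that actually matters for that unit, which is the stated reason the approach works with relatively little data.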
Results
The technology achieved a detection accuracy of 81%, even with limited training data, and outperformed existing methods on a benchmark for facial expression recognition (Facial Expression Recognition and Analysis 2017).