Facial Expression Recognition Using Fast Walidlet Hybrid Transform

DOI: https://doi.org/10.36371/port.2020.3.4

Authors

  • Walid Amin Mahmoud, Professor of Digital Signal Processing, College of Engineering, University of Uruk, Baghdad, Iraq
  • Jane Jaleel Stephan, University of Information Technology and Communications (UOITC), Baghdad, Iraq
  • Anmar Abdel Wahab Razzak, Department of Computer Science, Faculty of Education, Mustansiriya University, Baghdad, Iraq

Automatic analysis of facial expressions is rapidly becoming an area of intense interest in the computer vision and artificial intelligence research communities. In this paper, an approach is presented for recognition of the six basic prototype facial expressions (joy, surprise, anger, sadness, fear, and disgust) based on the Facial Action Coding System (FACS). The approach uses a combination of transforms (the Walidlet hybrid transform), consisting of the Fast Fourier Transform, the Radon transform, and the Multiwavelet transform, for feature extraction. A Kohonen Self-Organizing Feature Map (SOFM) is then used to cluster the patterns based on the features obtained from this hybrid transform. The results show that the method achieves very good accuracy in facial expression recognition. Moreover, the proposed method has several promising features: it provides a new feature-extraction scheme that copes with variations in illumination and with faces that differ considerably from one individual to another due to age, ethnicity, gender, and cosmetics, and it does not require precise normalization or lighting equalization. An average clustering accuracy of 94.8% is achieved for the six basic expressions, with several different databases used to test the method.
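The following is a minimal Python sketch of the two-stage pipeline described in the abstract (hybrid transform features followed by SOFM clustering), not the authors' implementation. It assumes a standard Haar wavelet as a stand-in for the multiwavelet stage, relies on NumPy, PyWavelets, and scikit-image being available, and hand-rolls a small Kohonen map; the feature sizes, the 8x8 map, and the learning schedule are illustrative assumptions rather than values from the paper.

# Sketch of hybrid (FFT + Radon + wavelet) features and SOFM clustering.
# Assumptions: Haar wavelet replaces the multiwavelet stage; all sizes,
# the 8x8 map, and the learning schedule are illustrative, not from the paper.
import numpy as np
import pywt
from skimage.transform import radon

def hybrid_features(face, n_angles=32, keep=64):
    """Concatenate FFT, Radon, and wavelet descriptors of a grayscale face."""
    face = face.astype(float)
    # Low-frequency Fourier magnitudes, log-scaled to reduce illumination effects
    spec = np.abs(np.fft.fftshift(np.fft.fft2(face)))
    c = np.array(spec.shape) // 2
    fft_feat = np.log1p(spec[c[0]-4:c[0]+4, c[1]-4:c[1]+4]).ravel()
    # Radon projections over a sparse set of angles
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sino = radon(face, theta=theta, circle=False)
    radon_feat = sino.mean(axis=0)                     # one value per angle
    # Wavelet approximation coefficients (Haar stands in for the multiwavelet)
    cA, _ = pywt.dwt2(face, 'haar')
    wave_feat = cA.ravel()[:keep]
    feat = np.concatenate([fft_feat, radon_feat, wave_feat])
    return (feat - feat.mean()) / (feat.std() + 1e-8)  # simple normalization

def train_som(X, grid=(8, 8), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Tiny Kohonen SOFM trained on feature vectors X; returns the weight grid."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(grid[0], grid[1], X.shape[1]))
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    for t in range(iters):
        x = X[rng.integers(len(X))]
        d = np.linalg.norm(W - x, axis=2)
        by, bx = np.unravel_index(d.argmin(), d.shape)  # best-matching unit
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
        W += lr * h[..., None] * (x - W)                # neighborhood update
    return W

if __name__ == "__main__":
    faces = np.random.rand(20, 64, 64)                  # placeholder face crops
    X = np.stack([hybrid_features(f) for f in faces])
    W = train_som(X)
    bmu = lambda x: np.unravel_index(np.linalg.norm(W - x, axis=2).argmin(), W.shape[:2])
    print("cluster of first face:", bmu(X[0]))

In use, each expression would be associated with the map units its training samples activate, so an unseen face is labeled by the expression attached to its best-matching unit.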

Keywords:

Facial expression recognition, Relative geometric position, Dependency, Hybrid feature


Mahmoud, W. A., Stephan, J. J., & Razzak, A. A. W. (2020). Facial Expression Recognition Using Fast Walidlet Hybrid Transform. Journal Port Science Research, 3(1), 59–69. https://doi.org/10.36371/port.2020.3.4
