Automatic Detection and Classification of Human Emotion in Real-Time Scenario

How to Cite

Keshri, Ashish, Ayush Singh, Baibhav Kumar, Devenrdra Pratap, and Ankit Chauhan. 2022. “Automatic Detection and Classification of Human Emotion in Real-Time Scenario”. Journal of ISMAC 4 (1): 41-53. https://doi.org/10.36548/jismac.2022.1.005.

Keywords

— CNN
— Deep learning model
— Face emotion recognition
Published: 14-05-2022

Abstract

This work proposes a real-time human emotion recognition system built on digital image processing techniques and a convolutional neural network (CNN). It surveys the principal learning algorithms used in facial analysis for accurate identification and recognition, so that the system can efficiently and reliably infer emotions from the user's facial expressions. The proposed model outputs six probability values corresponding to six different expressions. Large datasets are explored and investigated for training the facial emotion recognition model. The computer-vision pipeline is implemented with a deep-learning CNN using OpenCV, TensorFlow, Keras, Pandas, and NumPy, and a live experiment is conducted with men and women of different ages, races, and skin colours to detect their emotions and observe the variation across faces. The work is organized around three objectives: face detection, face recognition, and emotion classification. The OpenCV library and a facial-expression image dataset are used, and Python is employed for the webcam-based computer-vision procedures. To demonstrate real-time effectiveness, the study was conducted over an extended period to identify the subjects' inner emotions and track physiological changes for each face. The results of the experiments demonstrate the strengths of the face analysis framework: automatic face detection and recognition are measured at very high accuracy and in real time. The method is widely applicable in domains such as security, schools, colleges and universities, the military, airlines, and banking.
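The pipeline the abstract describes (webcam capture, OpenCV face detection, and a CNN that emits six expression probabilities) can be sketched as below. This is a minimal illustration, not the authors' implementation: the label ordering, the model file name `emotion_cnn.h5`, and the 48×48 grayscale input size are assumptions.

```python
import numpy as np

# Six expression labels (assumed ordering; the paper does not list them).
EMOTIONS = ["Angry", "Fear", "Happy", "Sad", "Surprise", "Neutral"]


def top_emotion(probabilities, labels=EMOTIONS):
    """Map the CNN's six softmax probabilities to the most likely label."""
    probs = np.asarray(probabilities, dtype=float)
    if probs.shape != (len(labels),):
        raise ValueError(f"expected {len(labels)} probabilities, got {probs.shape}")
    return labels[int(np.argmax(probs))], float(probs.max())


def run_webcam_demo(model_path="emotion_cnn.h5"):
    """Real-time loop: webcam -> Haar-cascade face detection -> CNN.

    Needs OpenCV, TensorFlow/Keras, a trained model file, and a camera,
    so the heavy imports are kept local to this function.
    """
    import cv2
    from tensorflow.keras.models import load_model

    model = load_model(model_path)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
            # Crop the detected face, normalize, and classify it.
            face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
            probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
            label, conf = top_emotion(probs)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, f"{label} {conf:.2f}", (x, y - 8),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
        cv2.imshow("Emotion", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```

Keeping detection (cascade) and classification (CNN) as separate stages, as the abstract's three objectives suggest, lets either stage be swapped out independently.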
