Krunal B Thakkar, BTech (IT)
Charusat University of Science and Technology (CSPIT)
Abstract— Humans share a universal and fundamental set of emotions which are exhibited through consistent facial expressions. An algorithm that performs detection, extraction, and evaluation of these facial expressions allows for automatic recognition of human emotion in images and videos. Presented here is a hybrid feature extraction and facial expression recognition method that uses Viola-Jones cascade object detectors and Harris corner key-points to extract faces and facial features from images, and uses principal component analysis, linear discriminant analysis, histogram-of-oriented-gradients (HOG) feature extraction, and support vector machines (SVM) to train a multi-class predictor for classifying the seven fundamental human facial expressions. The hybrid approach allows for quick initial classification by projecting a testing image onto a calculated eigenvector of a basis that has been specifically computed to emphasize the separation of one emotion from the others. This initial step works well for five of the seven emotions, which are easier to distinguish. If further prediction is needed, the computationally slower HOG feature extraction is performed and a class prediction is made with a trained SVM. Reasonable accuracy is achieved with the predictor, depending on the testing set and target emotions. Accuracy is 81% with contempt, a very difficult-to-distinguish emotion, included as a target emotion, and the run-time of the hybrid approach is 20% faster than using the HOG approach exclusively.
Introduction: Emotion recognition has been a well-known area of investigation in recent times. Currently, the evaluation is carried out by converting image data into machine-readable forms such as tables, matrices, statistical procedures, image analysis, and feature-point analysis.
In the basic approach, we humans cannot make full use of our communication abilities, as the process is carried out by computers and is predefined and constrained by human-made models. There are several methods that can be used for implementation; specifically, we have examined the class-specifier technique and the image-fusion framework in this survey. Facial features and emotions are among the critical means through which people communicate, and the survey is carried out on this basis as well. Partial occlusions of the face are genuine obstacles for facial expression analysis (FEA). The view of the face may be obstructed by glasses, a hat, a scarf, makeup, a hand over the mouth, tattoos or piercings, facial hair, and so on. Ultimately, improving human-computer interaction (HCI) and conveying the benefits of this field of study is the main aim of this paper.
The main goal of the survey is to understand the different accuracies achieved through the different approaches used to build these systems.
A glimpse of the paper is as follows.
In this survey, we build a sample system using Python, Keras, OpenCV, and Matplotlib, together with the freely available DeepFacePy network for face detection and public datasets, to implement the emotion-detection concepts.
I, Krunal Thakkar, together with my fellow student Neel Shah from IT (CHARUSAT UNIVERSITY), conducted this survey and concluded that 88% accuracy is achieved with Python and Keras, while other algorithms might give better accuracy and precision. In our sample system, we built the system in three phases, beginning with pre-processing and training the model on a CNN or ANN; here, a CNN was used.
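The CNN itself was built with Keras, but its core operations can be illustrated with plain NumPy. The sketch below (function names, filter values, and sizes are illustrative, not taken from our system) shows the convolution, ReLU, and max-pooling steps a CNN applies to a face image before its dense layers classify the emotion.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Element-wise rectified linear activation."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; crops the input to a multiple of the pool size."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A tiny 8x8 "face" patch with a horizontal intensity gradient,
# and a vertical-edge filter (illustrative values)
patch = np.arange(64, dtype=float).reshape(8, 8)
edge_filter = np.array([[-1.0, 0.0, 1.0]] * 3)
feature_map = max_pool(relu(conv2d(patch, edge_filter)))
```

In a real Keras model, these steps correspond to `Conv2D`, the `relu` activation, and `MaxPooling2D` layers stacked before `Flatten` and `Dense` layers with a softmax over the emotion classes.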
When using the HOG and SVM classifier only, the detection accuracy is 81%, much better than a Fisherface-only approach. When using the dual-classifier method, the accuracy is the same as HOG-only at 81%, but the testing process is 20% faster. This is because not all images must undergo eye and mouth detection and extraction followed by HOG feature extraction; only those test images that are not given a prediction by the much faster Fisherface classifier do.
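The dual-classifier control flow described above can be sketched as follows. The Fisherface and HOG+SVM stages are stubbed out here (the real classifiers are not reproduced); the point is only the fallback logic that lets most images skip the slower HOG path.

```python
def hybrid_predict(image, fisherface_predict, hog_svm_predict):
    """Two-stage prediction: fast Fisherface first, slower HOG+SVM only when needed.

    `fisherface_predict` returns an emotion label when it is confident,
    or None when it cannot decide; `hog_svm_predict` always returns a label.
    Both arguments are placeholders for the classifiers described in the text.
    """
    label = fisherface_predict(image)
    if label is not None:
        return label                  # fast path: no eye/mouth or HOG extraction
    return hog_svm_predict(image)     # slow path: eye/mouth detection + HOG + SVM

# Toy stand-ins to exercise the control flow
easy = hybrid_predict("img1", lambda im: "happy", lambda im: "contempt")
hard = hybrid_predict("img2", lambda im: None, lambda im: "contempt")
```

Because the slow stage is entered only for the images the first stage declines to label, the overall test time drops even though accuracy matches the HOG-only pipeline.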
Pre-processing and resizing. Image pre-processing is a very important step in the facial expression recognition task. The aim of the pre-processing phase is to obtain images with normalized intensity and uniform size and shape.
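In practice one would use OpenCV for this step (for example `cv2.resize` and `cv2.equalizeHist`); the NumPy-only sketch below shows the same idea, normalising intensity to [0, 1] and resizing to a fixed square size by block averaging (the sizes are illustrative, and the input side length is assumed to be a multiple of the output size).

```python
import numpy as np

def preprocess(face, out_size=48):
    """Normalize intensity to [0, 1] and resize to out_size x out_size by block averaging."""
    face = face.astype(float)
    lo, hi = face.min(), face.max()
    norm = (face - lo) / (hi - lo) if hi > lo else np.zeros_like(face)
    h, w = norm.shape
    bh, bw = h // out_size, w // out_size          # block sizes per output pixel
    cropped = norm[:bh * out_size, :bw * out_size]  # crop to a multiple of out_size
    return cropped.reshape(out_size, bh, out_size, bw).mean(axis=(1, 3))

raw = np.random.randint(0, 256, size=(96, 96))     # fake 96x96 grayscale face crop
img = preprocess(raw)                              # 96x96 -> 48x48, values in [0, 1]
```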
Next, we have face detection; emotion can be detected only when there is a face! (A little humour to keep up with the reading.)
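Face detection here would typically use OpenCV's Viola-Jones detector (`cv2.CascadeClassifier` with a pre-trained Haar cascade). Its speed comes from the integral image, which lets the sum of any rectangular region, and hence any Haar-like feature, be evaluated in four table lookups. A minimal NumPy sketch of that core idea:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended for easy indexing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum of pixels in a rectangle, in O(1) via four table lookups."""
    b, r = top + height, left + width
    return ii[b, r] - ii[top, r] - ii[b, left] + ii[top, left]

img = np.ones((10, 10))                 # toy uniform image
ii = integral_image(img)
# A two-rectangle Haar-like feature: left half minus right half of a 4x4 window
feature = rect_sum(ii, 2, 2, 4, 2) - rect_sum(ii, 2, 4, 4, 2)
```

On a uniform image the feature responds with zero; on a real face it responds strongly at edges such as the eye/cheek boundary, which is what the cascade's weak classifiers threshold.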
One common method is to extract the shape of the eyes, nose, mouth, lips, and chin, and then distinguish faces by the distances and scale of these organs.
Feature 1: width of left eye
Feature 2: width of right eye
Feature 3: width of nose
Feature 4: width of mouth and lips
Feature 5: width of face
As the last step, we have emotion detection:
If the face features have n dimensions, then the generalized Euclidean distance formula, d(p, q) = sqrt((p1 − q1)² + … + (pn − qn)²), is used to measure the distance.
Detection of emotion is based on calculating the distances between various feature points. In this step, the distances of the testing image are compared with those of a neutral image, and the best possible match for the testing image is selected from the training folder.
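The distance comparison described above can be sketched as a nearest-neighbour match over the five width features (the feature values and labels below are made up for illustration, not measured data):

```python
import numpy as np

def euclidean(p, q):
    """Generalized Euclidean distance between two n-dimensional feature vectors."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sqrt(np.sum((p - q) ** 2)))

def best_match(test_features, train_set):
    """Return the training label whose feature vector is closest to the test image."""
    return min(train_set, key=lambda label: euclidean(test_features, train_set[label]))

# Hypothetical feature vectors: [left eye, right eye, nose, mouth, face] widths
train_set = {
    "neutral": [30, 30, 25, 50, 140],
    "happy":   [28, 28, 26, 62, 142],
    "sad":     [31, 31, 25, 45, 139],
}
label = best_match([29, 29, 26, 60, 141], train_set)
```

Here the widened mouth dominates the distance, so the test vector lands nearest the "happy" exemplar; a real system would compare against many training images per emotion rather than one.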
Write to me at: email@example.com