Facial expressions are among the fastest means of communication for conveying any type of information. They not only expose a person's sensitivity or feelings but can also be used to judge his or her mental state. This research work introduces face recognition and facial expression recognition and investigates recent research in order to extract effective and efficient methods for facial expression recognition. Human expressive behaviour in realistic applications is encoded from different perspectives, and the facial expression is only one modality. Although pure expression recognition based on visible face images can achieve promising results, incorporating other modalities into a high-level framework can provide complementary information and further enhance robustness. Audio is considered the second most important modality, and various combination techniques have been employed for multimodal affect recognition. Additionally, the fusion of other modalities, such as infrared images, depth information from 3D face models, and physiological data, is becoming a promising research direction because of their large complementarity with facial expressions. This work also aims to design a smart framework for deep facial expression recognition using a max-pooling Convolutional Deep Belief Network, and to propose a hybrid algorithm for video-based emotion recognition with no manual design of features. The model considers the visual modality only and achieves an excellent recognition rate for the ten emotions used.
As we step forward from one generation to the next, numerous technologies adapt to our necessities, and we have become thoroughly dependent on them as part of human-computer interaction. Facial expression recognition is one of them. The face plays an important role in social communication, and facial expressions are equally vital: they not only expose a person's sensitivity or feelings but can also be used to judge his or her mental state. Facial expression recognition is a method of recognizing the expressions on one's face. A wide range of techniques has been proposed to detect expressions such as happy, sad, fear, disgust, anger, neutral and surprise, while other expressions remain difficult to implement. Facial expression recognition is composed of three major steps:
(1) Face detection and pre-processing of the image.
(2) Feature extraction.
(3) Expression classification.
The objective of this paper is to understand the
basic difference between face recognition and facial expression recognition and to investigate effective facial expression recognition rates by reviewing existing proposed models. This paper is organized in seven sections. The second section covers the basic terminology essential to understanding both face recognition and facial expression recognition. The third section discusses the difference between face recognition and facial expression recognition. The fourth section explains the procedure followed for the recognition of facial expressions. The fifth section reviews ten previous studies on expression recognition using various techniques. The sixth section concludes, noting that the facial expression recognition rates calculated from the collected review exceed 90%. The final, seventh section discusses the future scope.
Face Detection:
Face detection determines whether a given picture contains a face. To do this, we need to be able to define the general structure of a face. Luckily, human faces do not differ greatly from each other; we all have noses, eyes, foreheads, chins and mouths, and all of these compose the general structure of a face. Face detection is therefore a two-class classification problem: face versus non-face.
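Since face detection is a two-class (face versus non-face) problem, a minimal sketch of it can be built with OpenCV's pre-trained Haar cascade. The input file name below is a hypothetical placeholder, and any other face detector could be substituted.

```python
import cv2

# Pre-trained frontal-face cascade shipped with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("photo.jpg")                     # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)      # the detector works on grayscale

# Every returned rectangle is a region classified as "face"; the rest is "non-face".
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces_marked.jpg", image)
```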
Face Identification:
In this, the system compares the given individual to all the other individuals in the database and gives a ranked list of matches.
Face Verification:
In this, the system compares the given individual with who that individual claims to be and gives a yes or no decision.
Facial Expressions:
A facial expression is one or more motions or positions of the muscles beneath the skin of the face. These movements express the emotional state of the person to observers. It is a form of non-verbal communication and plays a communicative role in interpersonal relations.
FACIAL EXPRESSION RECOGNITION
Generally, the face is a union of bones, facial muscles and skin tissue. When these muscles contract, warped facial features are produced. Facial expressions are the fastest means of communication when conveying any type of information, and an implementation of facial expression recognition may lead to a natural human-machine interface. In 1978, Ekman and Friesen reported that facial expression acts as a rapid signal that varies with the contraction of facial features such as the eyebrows, lips, eyes and cheeks, thereby affecting recognition accuracy, and that happiness, sadness, fear, disgust, anger and surprise are six basic expressions readily recognized across very different cultures. Facial expression recognition involves three steps: face detection, feature extraction and classification of expression.
The pre-processing step for recognizing facial expressions is face detection. Converting an image into a normalized, pure facial image for feature extraction involves detecting feature points, rotating the image to line them up, and locating and cropping the face region using a rectangle according to the face model. Face detection comprises methods for detecting faces in a single image.
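As an illustration of these normalization steps (detect feature points, rotate to line them up, crop the face region), the following sketch uses OpenCV's face and eye cascades; the eye centres stand in for the feature points, the file names are placeholders, and a production system would use a more robust landmark detector.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

gray = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2GRAY)
x, y, w, h = face_cascade.detectMultiScale(gray, 1.1, 5)[0]   # assumes at least one face is found
face = gray[y:y + h, x:x + w]

eyes = eye_cascade.detectMultiScale(face)
if len(eyes) >= 2:
    # Sort the first two detections by x to treat them as (left eye, right eye).
    (ex1, ey1, ew1, eh1), (ex2, ey2, ew2, eh2) = sorted(map(tuple, eyes[:2]), key=lambda e: e[0])
    left = (ex1 + ew1 / 2.0, ey1 + eh1 / 2.0)
    right = (ex2 + ew2 / 2.0, ey2 + eh2 / 2.0)
    # Rotate so that the line between the eyes becomes horizontal.
    angle = np.degrees(np.arctan2(right[1] - left[1], right[0] - left[0]))
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    face = cv2.warpAffine(face, M, (w, h))

normalized = cv2.resize(face, (96, 96))   # fixed-size "pure" facial image for feature extraction
cv2.imwrite("normalized_face.jpg", normalized)
```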
Emotion Recognition
Emotion recognition is the technology that deals with methods and techniques for identifying emotions from facial expressions. Various technological developments in the areas of machine learning and artificial intelligence have made emotion recognition easier, and it is expected that expressions may become the next medium of communication with computers. The need for automatic emotion recognition from facial expressions is therefore increasing tremendously. Research in this area mainly concentrates on identifying human emotions from videos or from acoustic information. Most existing work recognizes and matches faces but has not used convolutional neural networks to infer emotions from images. Emotion recognition deals with identifying emotions and with the techniques and methods used to identify them; emotions can be identified from facial expressions, speech signals and other cues. Numerous methods have been adapted to infer emotions, such as machine learning, neural networks, artificial intelligence and emotional intelligence. Emotion recognition is growing in importance in research, as it is key to solving many problems. Emotion recognition from facial expressions, where images are given as input to the system, is a difficult task in emotional intelligence.
Facial Emotion Recognition
Facial emotion recognition is a research area that tries to identify emotion from human facial expressions. Surveys state that developments in emotion recognition make complex systems simpler. FER has many applications, which are discussed later. Emotion recognition is a challenging task because emotions may vary depending on environment, appearance, culture and facial reaction, which leads to ambiguous data. The survey on facial emotion recognition [2] is very helpful in exploring the field.
Deep Learning
Deep learning [3] is a machine learning technique that models data through networks designed for a particular task. Deep learning with neural networks has wide applications in image recognition, classification, decision making, pattern recognition and related areas [4]. Other deep learning techniques, such as multimodal deep learning, are used for feature selection, image recognition and similar problems.
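To make the image-recognition use of deep learning concrete, here is a minimal convolutional network sketch in Keras for classifying 48x48 grayscale face images into six basic expressions. The layer sizes and input shape are illustrative assumptions rather than an architecture taken from the cited works.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),                # grayscale face image
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(6, activation="softmax"),          # one output per basic emotion
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=20, validation_split=0.1)  # with a labelled dataset
```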
Categorizing Facial Expressions & their Features:
Facial expression is a key mechanism for describing human emotion. From the start to the end of the day, a human passes through plenty of emotions, which may be due to mental or physical circumstances. Although humans experience a wide variety of emotions, modern psychology defines six basic facial expressions as universal emotions: happiness, sadness, surprise, fear, disgust and anger [2]. Movements of the facial muscles help to identify human emotions; the basic facial features are the eyebrows, mouth, nose and eyes.
Table-1: Universal Emotion Identification

| Emotion | Definition | Motion of facial part |
| Anger | Anger is one of the most dangerous emotions. This emotion may be harmful, so humans try to avoid it. Secondary emotions of anger are irritation, annoyance, frustration, hate and dislike. | Eyebrows pulled down, eyes open, teeth shut and lips tightened, upper and lower lids pulled up |
| Fear | Fear is the emotion of danger. It may arise from the danger of physical or psychological harm. Secondary emotions of fear are horror, nervousness, panic, worry and dread. | Outer eyebrow down, inner eyebrow up, mouth open, jaw dropped |
| Happiness | Happiness is the expression most desired by humans. Secondary emotions are cheerfulness, pride, relief, hope, pleasure and thrill. | Eyes open, mouth edge up, open mouth, lip corner pulled up, cheeks raised, and wrinkles around the eyes |
| Sadness | Sadness is the opposite emotion of happiness. Secondary emotions are suffering, hurt, despair, pity and hopelessness. | Outer eyebrow down, inner corner of eyebrows raised, mouth edge down, closed eyes, lip corner pulled down |
| Surprise | This emotion arises when unexpected things happen. Secondary emotions of surprise are amazement and astonishment. | Eyebrows up, eyes open, mouth open, jaw dropped |
| Disgust | Disgust is a feeling of dislike. Humans may feel disgust at any taste, smell, sound or touch. | Lip corner depressor, nose wrinkle, lower lip depressor, eyebrows pulled down |
MOTIVATION
Face recognition is a task that humans perform routinely and effortlessly in their daily lives. Robert Axelrod has shown that the ability to recognize those we have met before and distinguish them from strangers is one of the bases on which humans form cooperation [3]. The last decade has witnessed a trend towards an increasingly ubiquitous computing environment, where powerful and low-cost computing systems are being integrated into mobile phones, cars, medical instruments and almost every aspect of our lives. This has created enormous interest in the automatic processing of digital images and videos in a number of applications, including biometric authentication, surveillance, human-computer interaction and multimedia management. Research and development in automatic face recognition follows naturally. Face recognition is a visual pattern recognition problem in which a three-dimensional object is to be identified based on its two-dimensional image. In recent years, significant progress has been made in this area; owing to better face models and more powerful computers, face recognition systems can achieve good results under constrained conditions. However, because face images are influenced by several factors, such as illumination, head pose and expression, face recognition in general conditions is still challenging. From a computer vision point of view, among all these "noises" facial expression may be the toughest, in the sense that expressions actually change the three-dimensional object, whereas other factors, such as illumination and position, only affect imaging parameters. To get rid of the expression "noise", one first needs to estimate the expression in an image; this is called facial expression recognition. Another, perhaps more important, motivation for facial expression recognition is that expression itself is an efficient way of communicating: it is natural and non-intrusive, and it has been shown that, surprisingly, expression conveys more information than spoken words and voice tone. To build a friendlier human-computer interface, expression recognition is essential.
A facial expression recognition system consists of the following steps:
Image Acquisition: Static images or image sequences are used for facial expression recognition. The 2-D grayscale facial image is most popular, although colour images can convey more information about emotion, such as blushing. In the future, colour images may be preferred because of the low cost and wide availability of colour imaging equipment. For image acquisition, a camera, cell phone or other digital device is used.
Pre-processing
Pre-processing plays a key role in the overall process. The pre-processing stage enhances the quality of the input image and locates the data of interest by removing noise and smoothing the image. It removes redundancy from the image without losing image detail. Pre-processing also includes filtering and normalization of the image, which produces an image of uniform size and orientation.
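A minimal pre-processing sketch along these lines, assuming OpenCV and using placeholder file names, could be:

```python
import cv2

img = cv2.imread("face_region.jpg", cv2.IMREAD_GRAYSCALE)   # cropped face from the detection stage
img = cv2.GaussianBlur(img, (3, 3), 0)    # remove noise / smooth the image
img = cv2.equalizeHist(img)               # normalize illumination
img = cv2.resize(img, (96, 96))           # uniform size for the later stages
cv2.imwrite("preprocessed.jpg", img)
```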
Segmentation
Segmentation separates the image into meaningful regions. Segmentation of an image is a method of dividing the image into homogeneous, self-consistent regions corresponding to different objects in the image, on the basis of texture, edges and intensity.
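As a rough illustration of intensity- and edge-based segmentation (with OpenCV, placeholder file names, and arbitrarily chosen thresholds):

```python
import cv2

img = cv2.imread("preprocessed.jpg", cv2.IMREAD_GRAYSCALE)

# Intensity-based segmentation: Otsu's method picks the threshold automatically.
_, regions = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Edge-based segmentation: Canny marks the boundaries between regions.
edges = cv2.Canny(img, 100, 200)

cv2.imwrite("regions.jpg", regions)
cv2.imwrite("edges.jpg", edges)
```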
Feature Extraction
Feature extraction can be considered the extraction of the "interesting" part of an image. It includes information about the shape, motion, colour and texture of the facial image and extracts the meaningful information from the image. Compared to the original image, feature extraction significantly reduces the amount of information, which gives an advantage in storage.
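One way to see this reduction is a shape/edge descriptor such as HOG, used here only as an illustrative example with scikit-image and arbitrary parameters; the print statement shows how many pixels collapse into how many feature values.

```python
import cv2
from skimage.feature import hog

img = cv2.imread("preprocessed.jpg", cv2.IMREAD_GRAYSCALE)
features = hog(img, orientations=8, pixels_per_cell=(12, 12), cells_per_block=(2, 2))
print(img.size, "pixels reduced to", features.shape[0], "feature values")
```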
Classification:
The classification stage follows the output of the feature-extraction stage. It identifies facial images and groups them according to certain classes, helping their proficient recognition. Classification is a complex process because it may be affected by many factors. The classification stage, which can also be called the feature-selection stage, deals with the extracted information and groups it according to certain parameters.
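As a sketch of this stage, an SVM can group extracted feature vectors into expression classes; the data below are random placeholders standing in for the real features and labels produced by the previous stage.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Placeholder data: 200 feature vectors of length 1568 with six expression classes.
X = np.random.rand(200, 1568)
y = np.random.randint(0, 6, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```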
Accuracy depends crucially on the extraction of features, which is the main methodology of a facial gender recognition system. The purpose of this research work is to optimize the recognition rate of the facial gender recognition system by improving the LBP technique for feature extraction. The pattern, shape and edge patterns on the face are unique to every individual. The texture pattern in the image is extracted using LBP; LBP captures only the texture pattern, while the edge pattern is generated by the Gabor filter.
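A minimal LBP sketch with scikit-image, assuming the pre-processed grayscale face from the earlier stages: each pixel is coded by comparing it with its circular neighbourhood, and the histogram of codes serves as the texture descriptor.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

img = cv2.imread("preprocessed.jpg", cv2.IMREAD_GRAYSCALE)
radius, n_points = 1, 8
lbp = local_binary_pattern(img, n_points, radius, method="uniform")

# Histogram of the uniform LBP codes (n_points + 2 bins) as the texture feature vector.
hist, _ = np.histogram(lbp.ravel(), bins=n_points + 2, range=(0, n_points + 2), density=True)
print(hist)
```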
Research shows that each individual's face has a unique pattern, by which persons can be classified, verified and identified.
The basic methodology of facial expression recognition includes the following stages (a sketch of this pipeline is given after the list):
· Acquisition of image
· Pre-processing using a Convolutional Deep Belief Network
· Extraction of features using a CNN
· Feature reduction
· Classification
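The sketch below strings these stages together under stated simplifications: an untrained Keras CNN stands in for the CDBN/CNN stages purely to produce feature vectors, PCA performs the feature reduction, and an SVM does the classification; the images and labels are random placeholders rather than a real dataset.

```python
import numpy as np
from tensorflow.keras import layers, models
from sklearn.decomposition import PCA
from sklearn.svm import SVC

images = np.random.rand(200, 48, 48, 1)        # placeholder pre-processed face images
labels = np.random.randint(0, 6, size=200)     # placeholder expression labels

# CNN used only as a feature extractor (untrained here, for illustration).
extractor = models.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
])
features = extractor.predict(images, verbose=0)

reduced = PCA(n_components=50).fit_transform(features)   # feature reduction
clf = SVC().fit(reduced, labels)                          # classification
print("training accuracy on placeholder data:", clf.score(reduced, labels))
```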
For data collection, two methods will be used:
• The UCI Machine Learning Repository, which is publicly accessible.
• Collection of a real-time dataset from real-time responses captured on video.
• Facial expression datasets: As the FER literature shifts its main focus to the challenging in-the-wild environmental conditions, many researchers have committed to employing deep learning technologies to handle difficulties such as illumination variation, occlusions, non-frontal head poses, identity bias and the recognition of low-intensity expressions. Given that FER is a data-driven task and that training a sufficiently deep network to capture subtle expression-related deformations requires a large amount of training data, the major challenge that deep FER systems face is the lack of training data in terms of both quantity and quality.
• Because people of different age ranges, cultures and genders display and interpret facial expressions in different ways, an ideal facial expression dataset is expected to include abundant sample images with precise face attribute labels, not just expression but other attributes such as age, gender and ethnicity, which would facilitate related research on cross-age-range, cross-gender and cross-cultural FER using deep learning techniques such as multitask deep networks and transfer learning. In addition, although occlusion and multi-pose problems have received relatively wide interest in the field of deep face recognition, the occlusion-robust and pose-invariant issues have received less attention in deep FER. One of the main reasons is the lack of a large-scale facial expression dataset with occlusion-type and head-pose annotations.
APPLICATION DOMAIN
With the rapid development of technologies, it is necessary to build intelligent systems that can understand human emotion. Facial emotion recognition is an active area of research with several fields of application. Some of the significant applications are:
· Alert systems for driving.
· Emotion recognition for social robots.
· Medical practice.
· Feedback systems for e-learning.
· Interactive TV applications that enable the customer to actively give feedback on a TV program.
· Mental state identification.
· Automatic counseling systems.
· Facial expression synthesis.
· Music selection as per mood.
· Research related to psychology.
· Understanding human behaviour.
· Interviews.
Extensive efforts have been made over the past two decades in academia, industry and government to discover more robust methods of assessing truthfulness, deception and credibility during human interactions, and in particular to capture human facial expressions. Emotions arise from activity in the brain and are revealed through the face, which carries most of the sense organs; hence, facial activity is considered here. The objective of this research paper is to give a brief introduction to the techniques, applications and challenges of automatic emotion recognition systems.