C15 Documentation
On
Speech Emotion Recognition
Submitted in partial fulfillment of the requirements for the award of the degree of
Bachelor of Technology
In
Computer Science & Engineering
By
P.RAMYA - 19R25A0517
T.PAVAN - 18R21A05H5
CH.VASANTH - 19R25A0514
G.BHAVANI - 19R25A0516
RAMARAJU - 16R21A05C9
2021-2022
Department of Computer Science & Engineering
CERTIFICATE
This is to certify that the project entitled “Speech Emotion Recognition” has been submitted by P. Ramya (19R25A0517), T. Pavan (18R21A05H5), Ch. Vasanth (19R25A0514), G. Bhavani (19R25A0516) and Ramaraju (16R21A05C9) in partial fulfillment of the requirements for the award of the degree of Bachelor of Technology in Computer Science and Engineering from Jawaharlal Nehru Technological University, Hyderabad. The results embodied in this project have not been submitted to any other University or Institution for the award of any degree or diploma.
External Examiner
Department of Computer Science & Engineering
DECLARATION
We hereby declare that the project entitled “SPEECH EMOTION RECOGNITION” is the work done
during the period from December 2021 to May 2022 and is submitted in partial fulfillment of the
requirements for the award of degree of Bachelor of Technology in Computer Science and Engineering
from Jawaharlal Nehru Technological University, Hyderabad. The results embodied in this project have not
been submitted to any other university or Institution for the award of any degree or diploma.
P.RAMYA - 19R25A0517
T.PAVAN - 18R21A05H5
CH.VASANTH - 19R25A0514
G.BHAVANI - 19R25A0516
RAMARAJU - 16R21A05C9
Department of Computer Science & Engineering
ACKNOWLEDGEMENT
The satisfaction and euphoria that accompany the successful completion of any task
would be incomplete without the mention of people who made it possible, whose constant guidance and
encouragement crowned our efforts with success. It is a pleasant aspect that we now have the opportunity to express our gratitude to all of them.
First of all, we would like to express our deep gratitude towards our internal guide Mr. A. Venkata Siva Rao, Assistant Professor, Department of CSE, for his support in the completion of our dissertation. We wish to express our sincere thanks to Dr. E. ANUPRIYA, HOD, Dept. of CSE, and also to the Principal, Dr. K. SRINIVAS RAO, for providing the facilities to complete the dissertation.
We would like to thank all our faculty and friends for their help and constructive criticism during
the project period. Finally, we are very much indebted to our parents for their moral support and encouragement throughout our studies.
P.RAMYA - 19R25A0517
T.PAVAN - 18R21A05H5
CH.VASANTH - 19R25A0514
G.BHAVANI - 19R25A0516
RAMARAJU - 16R21A05C9
Department of Computer Science & Engineering
ABSTRACT
In human-machine interface applications, emotion recognition from the speech signal has been a research topic for many years. Many systems have been developed to identify emotions from the speech signal. Classifiers are used to differentiate emotions such as anger, happiness, sadness, surprise, neutral state, etc. The database for a speech emotion recognition system consists of emotional speech samples, and the features extracted from these speech samples include energy, pitch, and linear prediction coefficients. The classification performance depends on the extracted features. Inferences about the performance and limitations of speech emotion recognition systems based on different classifiers are also discussed.
INDEX
Certificate
Declaration
Acknowledgement
Abstract
1.Introduction
1.1 Importance
1.2 Motivation
2.Existing Systems
2.1 Methodology
2.2 Running SVM Approach
2.3 Dimensionality Reduction Method
2.4 LPC coefficient approach
2.5 Extending the feature space
2.6 Domain Specific Classification
3. Dataset
3.1 Toronto Emotional Speech Set (TESS)
3.2 Knowledge Extraction based on Evolutionary Learning (KEEL)
4. Feature Extraction
4.1 The Process
4.2 List of Features
4.3 Mel Frequency Cepstrum Coefficient(MFCC)
4.3.1 Coefficient Computation
4.4 Frame Blocking
4.5 Silence Removal
5. Data Preparation
5.1 Data Quality Issues
5.2 Missing Value Analysis
5.3 Outlier Identification
5.4 Null Value Handling
5.5 Invalid data
5.6 Duplicate Data
5.7 Normalization And Standardization
5.8 Pearson Correlation Coefficient
5.9 Clustering
5.9.1 K-Means Clustering Algorithm
5.9.2 The Elbow Method for Choosing the K Value
5.10 Principal Component Analysis
5.11 Data Normalization
5.12 Covariance Matrix Computation
5.13 Eigenvalues and Eigenvector Computation
5.14 Choosing Components
5.15 Forming Principal Components
5.16 Scree Plot
6. Algorithms
6.1 Logistic regression
6.2 Naïve Bayes
6.3 Support Vector Machines
6.4 K-Nearest Neighbour
6.5 Decision Tree
6.6 Random Forest
6.7 Gradient Boosting Tree
7. Evaluation
7.1 Evaluation Metrics
7.2 Accuracy
7.3 Precision
7.4 Recall or Sensitivity
8.Implementation
8.1 Data Collection
8.2 Python Library
8.3 Data Visualization
8.4 Feature Analysis
8.5 Correlation
8.6 Clustering
8.7 Data Preparation
8.8 Feature Engineering
9. Experiments
9.1 Approach 1-with all extracted features
9.2 Approach 2-with MFCC coefficient
9.3 Approach 3- using PCA for decomposition
9.4 Comparison of Results
9.5 Implementation on the KEEL dataset
10. Results
11. Conclusion
12. Future Enhancement
13. References
LIST OF FIGURES
Figure 3. Periodogram
1. INTRODUCTION
For several years now, growth in the field of Artificial Intelligence (AI) has been accelerating. AI, which was once a subject understood only by computer scientists, has now reached the house of the common man in the form of intelligent systems. The advancements of AI have given rise to several technologies involving Human-Computer Interaction (HCI). Developing and improving HCI methods is of paramount importance because HCI is the front end of AI that millions of users experience. Some of the existing HCI methods involve communication through touch, movement, hand gestures, voice and facial gestures. Among the different methods, voice-based intelligent devices are gaining popularity. Such a device must comprehend the human speech percept in order to accurately pick up the commands given to it. This involves components such as:
• Speaker Identification
• Speech Recognition
• Speech Emotion Detection
Speech Emotion Detection is challenging to implement among the other components due to its
complexity. Furthermore, the definition of an intelligent computer system requires the system to mimic
human behavior. A striking nature unique to humans is the ability to alter conversations based on the
emotional state of the speaker and the listener. Speech emotion detection can be built as a classification
problem solved using several machine learning algorithms. This project discusses in detail the various
methods and experiments carried out as part of implementing a Speech Emotion Detection system.
1.1 IMPORTANCE
Communication is the key to express oneself. Humans use most part of their body and voice to
effectively communicate. Hand gestures, body language, and the tone and temperament are all
collectively used to express one’s feeling. Though the verbal part of the communication varies by
languages practiced across the globe, the non-verbal part of communication is the expression of feeling
which is most likely common among all. Therefore, any advanced technology developed to produce a natural human-computer interaction should be able to recognize this non-verbal expression of emotion.
Some of the research areas that benefit from automating the emotion detection technique include
psychology, psychiatry, and neuroscience. These departments of cognitive sciences rely on human
interaction, where the subject of study is put through a series of questions and situations, and based on
their reactions and responses, several inferences are made. A potential drawback occurs as few people
are classified introverts and hesitate to communicate. Therefore, replacing the traditional procedures
with a computer-based detection system can benefit the study. Similarly, the practical applications of the
speech-based emotion detection are many. Smart home appliances and assistants (Examples: Amazon
Alexa and Google Home) are ubiquitous these days. Additionally, customer care-based call centers often have an automated voice control which might not please most of their angry customers. Redirecting such
calls to a human attendant will improve the service. Other applications include eLearning, online tutoring,
investigation, personal assistant (Example: Apple Siri and Samsung S Voice ) etc. A very recent application
could be seen in self-driving cars. These vehicles heavily depend on voice-based controlling. An unlikely
situation, such as anxiety, can cause the passenger to utter unclear sentences. In these situations, the emotion expressed through the passenger's tone of voice can help the system respond appropriately.
1.2 MOTIVATION
Identifying the emotion expressed in a speech percept has several use cases in modern-day
applications. Human-Computer Interaction (HCI) is a field of research that studies interactive applications
between humans and computers . For an effective HCI application, it is necessary for the computer system
to understand more than just words. On the other hand, the field of Internet of Things (IoT) is rapidly
growing. Many real-world IoT applications that are used on a daily basis, such as Amazon Alexa, Google
Home and Mycroft function on voice-based inputs. The role of voice in IoT applications is pivotal. The study
in a recent article foresees that by 2022, about 12% of all IoT applications would fully function based on
voice commands only. These voice interactions could be mono-directional or bi-directional, and in both
cases, it is highly important to comprehend the speech signal. Further, there are Artificial Intelligence (AI)
and Natural Language Processing (NLP) based applications that use functions of IoT and HCI to create
complex systems. Self-driving cars are one such application that controls many of its functions using voice-
based commands. Identifying the emotional state of the user comes with a great advantage in this
application. Considering emergency situations in which the user may be unable to clearly provide a voice
command, the emotion expressed through the user’s tone of voice can be used to turn on certain
emergency features of the vehicle. A much simpler application of speech emotion detection can be seen
in call centers, in which automated voice calls can be efficiently transferred to customer service agents for
further discussion. Other applications of a speech emotion detection system can be found in lie detection and related investigative tools.
2. EXISTING SYSTEMS
2.1 METHODOLOGY
The speech emotion detection system is implemented as a Machine Learning (ML) model. The
steps of implementation are comparable to any other ML project, with additional fine-tuning procedures
to make the model function better. The flowchart represents a pictorial overview of the process (see
Figure 1). The first step is data collection, which is of prime importance. The model being developed will
learn from the data provided to it, and all the decisions and results that the developed model produces are
guided by the data. The second step, called feature engineering, is a collection of several machine learning
tasks that are executed over the collected data. These procedures address the several data representation
and data quality issues. The third step is often considered the core of an ML project where an algorithmic
based model is developed. This model uses an ML algorithm to learn about the data and train itself to
respond to any new data it is exposed to. The final step is to evaluate the functioning of the built model.
Very often, developers repeat the steps of developing a model and evaluating it to compare the
performance of different algorithms. Comparison results help to choose the appropriate ML algorithm for the problem at hand.
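To make the four steps above concrete, the following minimal sketch wires them together with scikit-learn. It is an illustration rather than the project's actual code; the file name "features.csv" and the "emotion" label column are assumptions.

```python
# Minimal sketch of the four-step pipeline described above, using scikit-learn.
# "features.csv" and the "emotion" label column are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Step 1: data collection (here: a pre-extracted feature table).
data = pd.read_csv("features.csv")
X, y = data.drop(columns=["emotion"]), data["emotion"]

# Step 2: feature engineering (here: simple standardization).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Step 3: model building and training.
model = SVC(kernel="rbf").fit(X_train, y_train)

# Step 4: evaluation on held-out data.
print(classification_report(y_test, model.predict(X_test)))
```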
Cao et al. proposed a system based on the observation that the emotions expressed by humans are mostly a result of mixed feelings. Therefore, they suggested an improvement over the SVM algorithm that would consider mixed signals and choose the most dominant one. For this purpose, a ranking SVM algorithm was chosen. The ranking SVM takes the predictions from individual binary SVM classifiers, also called rankers, and applies them to the final multi-class problem. Using the ranking SVM algorithm, an improvement in recognition performance over a plain SVM was reported.
Chen et al. developed a system that had improvements in the pre-processing stage. Two pre-
processing techniques, namely Fisher and Principal Component Analysis (PCA), were used in combination
with two classifier algorithms, namely SVM and ANN. They carried out four experiments, each with a
different combination of pre-processing and the classifier algorithm. The first experiment used Fisher
method to select features for a multi-level SVM classifier (Fisher + SVM). The second experiment was to
reduce feature dimensionality using Principal Component Analysis (PCA) for the SVM classifier (PCA +
SVM). The third experiment used the Fisher technique over the ANN model (Fisher + ANN). Finally, PCA
was applied before classification using ANN (PCA + ANN). From these experiments, two important
conclusions were made. Firstly, dimensionality reduction improves the performance of the system.
Secondly, SVM classifier algorithm classifies better than the ANN algorithm in the case of emotion
detection. The winning experiment had an accuracy of 86.50%, using the Fisher method for dimensionality reduction with the SVM classifier.
In the Nwe et al. system, a subset of features, similar to the Mel Frequency Cepstral Coefficients
(MFCC), was used. They used the Log Frequency Power Coefficients (LFPC) over a Hidden Markov Model
(HMM) to classify emotions in speech. Their work is not publicly reproducible, as they used a dataset privately available to them. However, they claim that using the LFPC coefficients instead of the MFCC coefficients shows a significant improvement in the accuracy of the model. The average classification accuracy of their model is 78% and the best accuracy is even higher, at 96%.
Rong et al. proposed an innovative way to improve the accuracy of existing models. Traditionally,
computer scientists were using various pre-processing techniques to reduce the number of features.
Contrastingly, this new system increased the number of features used for classification. They claimed to
have performed classification over a small dataset containing audio percepts in the Chinese language, but
do not disclose the features that they used. However, they also mentioned that none of their features are
language-dependent. Using a high number of features over an ensemble random forest algorithm, an improvement in classification accuracy was reported.
Narayanan, in his work, proposes a system that uses a more real-world dataset. For his work, data
was collected from a call center and he performed a binary classification with only two distinct emotions,
namely happy and angry. The research used numerous features including acoustic, lexical and other
language-based features over the KNN algorithm. Moreover, this research was conducted specifically for
the call center domain and was evaluated across male and female customers, with the accuracy values reported separately for each group.
3. DATASET
Two datasets created in the English language, namely the Toronto Emotional Speech Set (TESS)
and the emotional dataset from Knowledge Extraction based on Evolutionary Learning (KEEL), contain a
more diverse and realistic audio. The descriptions of the dataset are as follows.
3.1 TORONTO EMOTIONAL SPEECH SET (TESS)
The researchers from the Department of Psychology at the University of Toronto created a speech emotion dataset in 2010, in the English language. The database contains 2800 sound files of speech utterances in seven basic emotional categories, namely: Happy, Sad, Angry, Surprise, Fear, Disgust and Neutral. It is an acted recording, in which actresses from two age groups, Old (64 years) and Young (26 years), uttered a fixed set of target words in each emotional category.
A few qualities of this dataset which makes it good for this project are:
• The size of the dataset is large enough for the model to be trained effectively. The more data available for training, the better the model can learn.
• All basic emotional categories of data are present. A combination of these emotions can be used to express more complex emotional states.
• Data is collected from two different age groups which will improve the classification.
• The audio files are mono signals, which ensures an error-free conversion with most of the
programming libraries.
3.2 KNOWLEDGE EXTRACTION BASED ON EVOLUTIONARY LEARNING (KEEL)
KEEL is an online dataset repository contributed by machine learning researchers worldwide. The
emotion for speech dataset contains 72 features extracted for each of the 593 sound files. The data are
labeled across six emotions, namely: Happy, Sad, Angry, Surprise, Fear and Neutral. The repository also
offers data to be downloaded in 10 or 5 folds for the purpose of training and testing.
A few qualities of this dataset which makes it good for this project are:
• Data is represented as features directly, which saves conversion time and procedures.
• All basic emotional categories of data are present. A combination of these emotions can be used to express more complex emotional states.
4. FEATURE EXTRACTION
Speech is a varying sound signal. Humans are capable of making modifications to the sound
signal using their vocal tract, tongue, and teeth to pronounce phonemes. Features are a way to quantify data; extracting features common to speech signals gives a better representation from which the most information can be obtained. Some characteristics of good features include:
• The features should be independent of each other. Most features in the feature vector are correlated with each other; therefore, it is crucial to select a subset of features that are as independent of one another as possible.
• The features should be informative to the context. Only those features that are more descriptive of the emotional content are to be selected for further analysis.
• The features should be consistent across all data samples. Features that are unique to only a few samples do not generalize well.
• The values of the features should be processed. The initial feature selection process can result in a raw feature vector that is unmanageable. The process of feature engineering will remove these irregularities.
The features in a speech percept that are relevant to the emotional content can be grouped into two categories:
1. Prosodic features
2. Phonetic features.
The prosodic features are the energy, pitch, tempo, loudness, formant, and intensity. The
phonetic features are mostly related to the pronunciation of the words based on the language. Therefore
for the purpose of emotion detection, the analysis is performed on the prosodic features or a combination
of them. Mostly the pitch and loudness are the features that are very relevant to the emotional content.
See Table 1 for the features that were extracted for each frame of the audio signal, along with their
definitions .
Table 1. Extracted short-term features (excerpt)
1. Zero Crossing Rate: “The rate at which the signal changes its sign.”
2. Energy: “The sum of the signal values squared and normalized using the frame length.”
7. Spectral Flux: “The square of the difference between the spectral energies of consecutive frames.”
8. Spectral Rolloff: “The value of the frequency under which 90% of the spectral distribution occurs.”
9-21. MFCCs: “Mel Frequency Cepstral Coefficient values of the frequency bands distributed on the Mel scale.”
22-33. Chroma Vector: “The 12 values representing the energy belonging to each pitch class.”
4.3 MEL FREQUENCY CEPSTRUM COEFFICIENT (MFCC)
A subset of features that are used for speech emotion detection is grouped under a category called the Mel Frequency Cepstrum Coefficients (MFCC). It can be explained as follows:
• The word Mel represents the scale used in frequency versus pitch measurement (see Figure 2) [16]. A value measured on the frequency scale can be converted into the Mel scale using the formula M(f) = 2595 log10(1 + f/700).
• The word Cepstrum represents the Fourier Transform of the log spectrum of the speech signal.
Below is the mathematical approach to compute the MFCC features from a speech signal:
1. The first step is to frame the audio signal. The method of frame blocking, described later in this chapter, is used to split the audio signal into frames of an optimal length of 20 ms to 30 ms, with 50% overlap.
2. The next step is mathematical. In this step, the power spectrum is computed for each frame of the signal. The power spectrum, also known as the periodogram, identifies the frequencies present in each frame (see Figure 3). In order to select a particular band of frequencies, each frame is multiplied by a Hamming window. Mathematically, the periodogram is the squared magnitude of the Fourier transform of the windowed frame, normalized by the frame length.
Fig.3 Periodogram
3. Next, the power spectra obtained can contain many closely spaced frequencies. These variations in
the frequencies make it difficult to obtain the energy values present in the signal. Thus, to scale the
values, a filter named Mel Filterbank is applied to the power spectrum. The Mel Filterbank is a
collection of triangular filters in the frequency domain. Nearing 0Hz, the frequencies are narrow to
each other; further higher, the frequencies become wider (see Figure 4). The product of the power spectrum values and the Mel Filterbank values provides the energy in each frequency band. However, since overlapping frames are used in the analysis, the energy values obtained for the individual frames would be highly correlated.
4. Finally, to decorrelate these energy values, a Discrete Cosine Transform (DCT) function is used. The DCT outputs a list of coefficient values corresponding to the pitch and energy values obtained so far. The lower-order coefficients (the first 12 to 13 coefficients) of each frame represent steady changes in the pitch and energy values, and therefore they are better suited for analysis. These lower-order coefficients are retained as the MFCC features.
4.4 FRAME BLOCKING
The frame blocking method is used to analyze sound signals. It is the process of dividing the sound
signal into blocks known as frames and performing analysis over each block, rather than the signal at large.
It is preferred to analyze individual frames because audio signals are stable within short time intervals.
Several acoustic features can be interpreted from a single frame. In order to ensure the time-varying
characteristics of the signal are measured accurately, some part of the neighboring frames is also analyzed
at every step to identify any subtle changes in the sound signal. This value is often termed as frame
overlap, indicating the amount of overlap to include from the neighboring frames. The following points guide the choice of frame size and overlap:
Each frame should not be too small or too large, as this would mislead the time-varying
characteristics of the features. Standard framing window size is 20ms to 30ms for audio signal
processing.
If the overlap value is too large, more duration from the neighboring frames will need to be
analyzed at each analysis step. This will increase the computation and hence is not recommended.
Each frame is a unit of computation of the sound signal. Feature extraction over the frames will quantify how the characteristics of the signal vary over time.
4.5 SILENCE REMOVAL
An audio signal at the time of recording can contain silent regions where no utterances have been made. Such silent regions of the audio signal do not provide any useful information regarding the emotion expressed, and can be removed. A semi-supervised learning approach is used for silence detection in audio signals. In this approach, a model is initially trained with sample audio signals in order to be able to distinguish between high and low energy features. Later, a percentage of high and low energy frames are used as endpoints to detect the regions of actual audio in the signal. Finally, applying the trained model over the entire signal provides silence-free audio segments. Threshold values such as frame size, frame step, and sampling frequency are tunable in order to smooth the output signals.
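As a simplified stand-in for the semi-supervised method described above, the sketch below discards low-energy frames using a plain energy threshold; the frame size and the 0.5 threshold factor are assumptions.

```python
# Simplified energy-threshold silence removal (a stand-in for the semi-supervised
# approach described above). Frame size and the 0.5 threshold factor are assumptions.
import numpy as np

def remove_silence(signal, fs, frame_len=0.020, threshold_factor=0.5):
    flen = int(frame_len * fs)
    n_frames = len(signal) // flen
    frames = signal[: n_frames * flen].reshape(n_frames, flen)

    # Short-term energy of each frame, and a threshold relative to the mean energy.
    energy = np.mean(frames.astype(float) ** 2, axis=1)
    threshold = threshold_factor * energy.mean()

    # Keep only the frames whose energy exceeds the threshold.
    voiced = frames[energy > threshold]
    return voiced.reshape(-1)
```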
5. DATA PREPARATION
5.1 DATA QUALITY ISSUES
Data must be cleaned to perform any meaningful analysis. As a next step, the dataset thus collected had to be inspected for its quality. Some of the data quality issues addressed for this experimentation include:
1. Missing value analysis
2. Outlier identification
3. Null value handling
4. Invalid data
5. Duplicate data
5.2 MISSING VALUE ANALYSIS
Due to several influencing factors, a few or more data rows can contain no values for specific
features. These values are termed ‘missing’ from the dataset. A large number of missing values can
provide insights into the data. For example, if a particular feature has most of its values missing for all
data rows, then it can be inferred that the feature is likely uncommon and can be removed from the
dataset. Contrastingly, a small number of missing values can represent data entry error. Analyzing and
amending individual features over missing values will improve and fix the quality of the dataset. Some of
the methods to handle missing values include; if the number of missing values is large then the
corresponding data rows or features can be removed, whereas if very few values are missing for a feature
it can be imputed which means replacing with the mean or most frequent value of the feature.
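A small pandas sketch of the handling just described (drop a feature when many values are missing, impute with the mean otherwise); the DataFrame df and the 50% drop threshold are assumptions.

```python
# Sketch of the missing-value handling described above; the 50% threshold is an assumption.
import pandas as pd

def handle_missing(df: pd.DataFrame, drop_threshold: float = 0.5) -> pd.DataFrame:
    missing_ratio = df.isna().mean()                 # fraction of missing values per feature
    sparse = missing_ratio[missing_ratio > drop_threshold].index
    df = df.drop(columns=sparse)                     # drop features that are mostly missing
    return df.fillna(df.mean(numeric_only=True))     # impute the remaining gaps with the mean
```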
5.3 OUTLIER IDENTIFICATION
Outlier values are also considered data modifiers because the prediction algorithm used will often be misled by outlier values. Outliers also alter the statistics of the overall data, such as the mean, variance and standard deviation. The proportion of outliers in the whole dataset can be used to make decisions on how to handle them. If the outliers lie within a small range of difference and
contribute to a very small proportion, then no fixes will be required. Some methods to handle large
proportions of outliers include, replacing outlier values with boundary, or mean, or median, or mode
values.
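One common way to flag and bound outliers, sketched here with the interquartile-range rule (an illustrative choice, not necessarily the project's exact method):

```python
# Flag outliers per feature with the 1.5 * IQR rule and clip them to the boundary values.
# The 1.5 factor is the conventional choice and is an assumption here.
import pandas as pd

def clip_outliers(df: pd.DataFrame) -> pd.DataFrame:
    q1, q3 = df.quantile(0.25), df.quantile(0.75)
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return df.clip(lower=lower, upper=upper, axis=1)   # replace outliers with boundary values
```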
5.4 NULL VALUE HANDLING
A common error that can occur in a dataset is the null value error. It occurs when the words 'null' or 'NA' are used as fillers in place of missing values. Null values are mostly treated and handled in the same fashion as missing values.
5.5 INVALID DATA
A dataset can have values irrelevant to the data type, such as symbols and special characters. These values, despite being meaningless, can cause errors during processing. Depending on the amount of invalid data present, the affected values can be corrected or the corresponding rows removed.
5.6 DUPLICATE DATA
A few features might be duplicates of each other, with different names or units of measurement. Such features increase the dimensionality of the data with no further significance. Removal of duplicate features reduces the dimensionality without losing any information.
5.7 NORMALIZATION AND STANDARDIZATION
Different characteristics of the audio signal, represented by its features, are computed on different units or scales. Rescaling the values to a uniform range ensures that accurate calculations are made. Many algorithms use distance metrics for their computation; therefore, it is necessary that all the values in the dataset are rescaled. Two approaches are commonly used for this purpose, namely normalization and standardization. Normalization alters all numeric values to lie in the range 0 to 1. For this purpose, all outliers in the data must be eliminated prior to normalizing the data. The formula for normalization is
x_new = (x - x_min) / (x_max - x_min)
Standardization transforms the data to have a mean value of zero and a variance of one, and often provides more insight into the data than normalization. The formula for standardization is
x_new = (x - µ) / σ
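Both rescaling schemes are available in scikit-learn; a brief sketch (the feature matrix X is assumed):

```python
# Min-max normalization to [0, 1] and z-score standardization with scikit-learn.
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X_normalized = MinMaxScaler().fit_transform(X)       # (x - x_min) / (x_max - x_min)
X_standardized = StandardScaler().fit_transform(X)   # (x - mean) / standard deviation
```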
5.8 PEARSON CORRELATION COEFFICIENT
Correlation is the linear association that exists between each pair of features in the dataset and is used to identify the features that are highly associated or correlated with the decision attribute. The Pearson correlation coefficient, r, is a value that denotes the strength of the correlation. The value of r ranges from -1 to +1 through 0, where negative values denote an inverse (negative) association between
variables and positive values denote a direct (positive) association. A value of 0 denotes that there is no linear association between the variables.
The strength of the association can be determined using the value of the Pearson coefficient r. However, the strength of the association also depends on the type of variables under measurement. The Pearson method of correlation computation can be used on all numeric data irrespective of whether it has been scaled or not. Additionally, this approach treats all variables equally and does not assume any dependence between the variables. Guidelines have been proposed in the literature to interpret the strength of the association from the magnitude of r.
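Computing the correlation of every feature with the target class can be done directly in pandas; in this sketch the DataFrame df, the 'emotion' column name and its numeric encoding are assumptions.

```python
# Pearson correlation of each feature with a numerically encoded target class.
# The DataFrame "df" and the "emotion" column are illustrative assumptions.
import pandas as pd

df["emotion_code"] = df["emotion"].astype("category").cat.codes
correlations = df.corr(numeric_only=True)["emotion_code"].drop("emotion_code")
print(correlations.sort_values(ascending=False))
```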
5.9 CLUSTERING
K-Means is an unsupervised clustering algorithm that will form ‘k’ groups within the data based
on feature similarity. It is an iterative process by which data are iteratively grouped based on the similarity
between their features. Clustering the data into groups provides more insights on the distribution of the
training data available, and also easily helps classify any unknown (new) data. The algorithm is a two-step
iterative process in which, based on distance metrics between the data points and centroids of the cluster,
groups of similar data are created. The steps of the algorithm are:
1. Data Assignment
Based on the value of a distance metric (for example, Euclidean distance or Manhattan distance), each data point is assigned to the cluster whose centroid is nearest to it.
2. Centroid re-computation
The centroid, or mean value, of the data points in each cluster is recalculated and updated at each step following the reassignment of the data points.
Steps 1 and 2 are iteratively performed until the groups are distinctively classified. There are two
conditions that ensure accuracy in clustering, namely: the inter-cluster distance and intra-cluster
difference. The distance between the centroids of each cluster should be larger, ensuring that each group
is well-separated from each other showing distinct differences. Additionally, the distance between each
point within the cluster should be smaller, ensuring the similarity between data points within the group.
By analyzing the final value of each centroid, the characteristics of the data belonging to the cluster can
be quantitatively explained.
The ‘k’ in k-means denotes the number of clusters the data needs to be grouped into. Traditionally, the algorithm is repeated over different values of k and the results are compared by the average within-cluster distance to the centroid. Alternatively, the elbow method can be used to depict an optimal value of k [20]. In the elbow method, the average within-cluster distance to the centroid is plotted against different values of k, and the point of the curve where the distance sharply bends (the 'elbow') is the optimal value of k.
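A short sketch of the elbow method with scikit-learn; the feature matrix X is assumed, and inertia_ holds the within-cluster sum of squared distances.

```python
# Elbow method: plot the within-cluster sum of squares (inertia) against k.
# The feature matrix X is assumed to be already scaled.
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

inertias = []
k_values = range(2, 15)
for k in k_values:
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    inertias.append(km.inertia_)

plt.plot(list(k_values), inertias, marker="o")
plt.xlabel("k (number of clusters)")
plt.ylabel("Within-cluster sum of squares")
plt.show()
```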
5.10 PRINCIPAL COMPONENT ANALYSIS
In some cases, most or all of the features might have an impact on the decision making. However, a high-dimensional dataset with a large feature space could potentially slow the performance of the system in terms of space and time complexity. Choosing the right set of features for analysis can be challenging, as it requires high-level domain knowledge. A solution to this problem is the Principal Component Analysis (PCA) technique for dimensionality reduction. PCA is an approach to bring out the
principal components or the important aspects of the data. By using this method, the original feature
space of the data is transformed into a new set of features while retaining the variation present in the
data.
The technique of PCA analyses the variance of the data by measuring the covariance between the
features. This is done mathematically using the concept of eigenvalues and eigenvectors. Eigenvalues are
numbers denoting the amount of variance in each dimension of the data, and the eigenvectors denote the corresponding directions in the data. The eigenvector with the highest eigenvalue is the principal component of the dataset. Given an N-dimensional dataset, the computation proceeds through the following steps.
5.11 DATA NORMALIZATION
PCA works with the numerical values of the dataset to compute the variance; hence it is necessary that the values are scaled and normalized. All normalized data variables will have a mean value of 0.
5.12 COVARIANCE MATRIX COMPUTATION
An NxN covariance matrix is computed, where N is the number of features in the dataset. The elements of the covariance matrix represent the covariance between each pair of features in the dataset.
5.13 EIGENVALUES AND EIGENVECTOR COMPUTATION
Eigenvalues and eigenvectors are computed from the covariance matrix. This computation is purely mathematical, and many programming libraries have built-in functions for this calculation. At the end of the computation, N eigenvalues are obtained for an N-dimensional dataset.
5.14 CHOOSING COMPONENTS
The eigenvector with the largest eigenvalue is the first principal component, containing the most information about the dataset. Sorting the eigenvalues in decreasing order gives the list of principal components along with the amount of variance each one carries. Depending on how much
information is needed, programmers can choose the top P number of components needed for
further analysis.
5.15 FORMING PRINCIPAL COMPONENTS
A new dataset is created using the principal components selected for analysis. Mathematically, left multiplication of the transposed feature vector with the scaled original dataset produces the new, reduced dataset.
5.16 SCREE PLOT
A scree plot is a way to select the optimal number of components such that enough information is retained from the raw dataset. It is a curve with the amount of information retained and the number of components on its two axes. The elbow point of the curve indicates the optimal number of components to keep.
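A brief scikit-learn sketch of these steps (standardize, fit PCA, inspect the explained variance for a scree plot); the feature matrix X is assumed.

```python
# Standardize the data, fit PCA, and plot the cumulative explained variance (scree plot).
# The feature matrix X is an assumed, already-cleaned numeric array.
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X_scaled = StandardScaler().fit_transform(X)   # mean 0, variance 1 per feature
pca = PCA().fit(X_scaled)                      # eigen-decomposition of the covariance matrix

plt.plot(range(1, len(pca.explained_variance_ratio_) + 1),
         pca.explained_variance_ratio_.cumsum(), marker="o")
plt.xlabel("Number of components")
plt.ylabel("Cumulative explained variance")
plt.show()

X_reduced = PCA(n_components=0.95).fit_transform(X_scaled)  # keep 95% of the variance
```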
6. ALGORITHMS
6.1 LOGISTIC REGRESSION
Logistic Regression is a classification algorithm used to separate data belonging to different classes. There are three types of Logistic Regression, namely binary-class, multi-class and ordinal-class logistic regression, depending on the type of target class. The Wikipedia definition states that “Logistic regression computes the relationship between the target (dependent) variable and one or more independent variables using the estimated probability values through a logistic function”. The logistic function, also known as the sigmoid function, maps predicted values to probability values between 0 and 1. The steps of the algorithm are:
1. Compute a linear combination of the input features for each target class.
2. Map this value to a probability using the logistic (sigmoid) function.
3. Make the final prediction by computing the maximum probability value amongst all classes.
The time complexity of the algorithm is in the order of the number of data samples, represented as O(n samples).
6.2 NAÏVE BAYES
The Naïve Bayes classifier is based on Bayes' theorem, which determines the probability of an event based on the prior probabilities of related events. Bayes' theorem is used to compute the required probability values.
This classifier algorithm assumes feature independence. No correlation between the features is
considered. The algorithm is said to be Naïve because it treats all the features to independently contribute
to deciding the target class. The steps of a simple Naïve Bayes algorithm are as follows:
1. Create a frequency table for all features individually. Tag the frequency of each entry
2. Create a likelihood table by computing probability values for each entry in the frequency
table.
3. Calculate posterior probability for each target class using the Bayes theorem.
4. Declare the target class with the highest posterior probability value as the predicted
outcome.
The time complexity of the algorithm is in the order of the number of data samples, represented as
O (n samples). There are three types of Naïve Bayes algorithm, namely: the Gaussian Naïve Bayes (GNB)
which is applicable with features following a normal distribution, the Multinomial Naïve Bayes (MNB)
which is most suited when the number of times an outcome occurs is to be counted, and the Bernoulli Naïve Bayes (BNB), which is applicable when the features are binary-valued.
6.3 SUPPORT VECTOR MACHINES
Support Vector Machines (SVM) are a supervised algorithm that works for both classification and
regression problems. Support vectors are coordinate points in space, formed using the attributes of a data
point. Briefly, for an N-dimensional dataset, each data point is plotted on an N-dimensional space using
all its feature vector values as a coordinate point . Classification between the classes is performed by
finding a hyperplane in space that clearly separates the distinct classes. SVM works best for high
dimensional data. The important aspect of implementing SVM algorithm is finding the hyperplane. Two
conditions are to be met, in the order given, while choosing the right hyperplane:
1. The hyperplane must separate the distinct classes as well as possible.
2. The margin distance from the hyperplane to the nearest data point must be maximized.
For a low dimensional dataset, the method of kernel trick in SVM introduces additional features to
transform the dataset to a high dimensional space and thereby make identifying the hyperplane
achievable.
The linear-solver-based SVM is a better implementation of SVM in terms of time complexity. The complexity of the kernel-based SVM scales between O(n_features x n_samples^2) and O(n_features x n_samples^3).
6.4 K-NEAREST NEIGHBOUR
K-Nearest Neighbour (KNN) is one of the simplest classification algorithms. The approach is to plot all data points in space and, for any new sample, observe its k nearest points and make a decision based on majority voting. Thus, the KNN algorithm involves no training, and it takes the least calculation time when implemented with an optimal value of k. The steps of the KNN algorithm are as follows:
1. For a given instance, find its distance from all other data points, using an appropriate distance metric such as the Euclidean distance.
2. Sort the computed distances in increasing order. Depending on the value of k, observe the nearest
k points.
3. Identify the majority class amongst the k points, and declare it as the predicted class.
Choosing an optimal value of k is a challenge in this approach. Most often, the process is repeated for
a number of different trials of k. The evaluation scores are then observed using a graph to find the optimal
value of k.
There is no training phase in KNN and hence there is no training time complexity. At testing time, the number of nearest samples to be looked up (k) and the size of the training set decide the complexity of the algorithm.
6.5 DECISION TREE
The Decision Tree is an approach in which the entire dataset is visualized as a tree, and the classification problem is loosely translated into finding the path from the root to a leaf based on several decision conditions at each sub-tree level. Each feature in the dataset is treated as a decision node at its sub-tree level. The initial tree is built using the values of the training samples. For a new data point, its values are tracked on the built tree from root to leaf, where the leaf node represents the target or predicted class. There are two types of decision trees based on the type of the target class, namely categorical-variable and continuous-variable decision trees.
The Decision Tree is a very simple algorithm to implement, as it requires little data preparation. It is also widely used in data exploration. However, decision trees are very susceptible to noise in the dataset. The time complexity of a decision tree depends on the height of the tree, controlled by the number of data samples, and on the number of features used for the split; it can be represented as O(n samples x n features).
6.6 RANDOM FOREST
Random Forest is a supervised classification algorithm similar to decision trees. While the root node and splitting features in a decision tree are chosen based on Gini and information gain values, the random forest algorithm chooses them in a random fashion. A random forest is a collection of decision trees; therefore, a large number of trees gives better results. Overfitting is a potential drawback of random forests, but increasing the number of trees can reduce overfitting. Random forest also has several advantages, such as its capability to handle missing values and classify multi-class categorical variables. The steps of building a random forest are:
1. Randomly select a subset of features from the dataset.
2. From the selected subset of features, using the best split method, pick a node.
3. Continue the best split method to form child nodes from the subset of features.
The time complexity of Random Forest is higher by the factor of the number of trees used for building the
forest.
6.7 GRADIENT BOOSTING TREES
Gradient boosting is a class of algorithms that is used on top of other regular algorithms (for example, gradient boosting over decision trees) in order to improve the performance of the regular algorithm. Gradient boosting involves three elements:
1. A loss function to be optimized. A logarithmic loss function is typically best suited for classification problems. However, one may define their own loss function.
2. A weak learner to make predictions. A learning algorithm such as a decision tree is a greedy approach, as it decides its splitting attribute at each step using the best split method; hence this algorithm is usually a good choice to be used as the weak learner.
3. An additive model that minimizes the loss function by adding more weak learners.
Gradient boosting models use a weak learner algorithm as a sub-model; in this case, it is a decision tree. Conventionally, a gradient descent procedure is used to update the parameters of the weak learners so as to reduce the loss. After calculating the loss induced by the current model, at each step more trees are added to the model to reduce the loss or errors.
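For reference, all of the classifiers discussed in this chapter are available in scikit-learn; the compact sketch below shows how they could be instantiated and compared. The matrices X and y are assumed, and the hyperparameters shown are library defaults rather than the project's tuned values.

```python
# Instantiate the classifiers discussed above and compare them with cross-validation.
# X, y are assumed feature/label arrays; hyperparameters are scikit-learn defaults.
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(n_estimators=100),
    "Gradient Boosting": GradientBoostingClassifier(),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="f1_macro")
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```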
7. EVALUATION
7.1 EVALUATION METRICS
The most important characteristic of machine learning models is their ability to improve. Once the model is built, even before testing the model on real data, machine learning experts evaluate the performance of the model. Evaluation metrics reveal important model parameters and provide numeric scores that help judge the functioning of the model. The most important structure needed to evaluate a classification model is the confusion matrix.
The structure of a confusion matrix is against the actual and predicted positive and negative
classes, and contains four values which are used to compute other metrics. The true positive represents
the correct predictions made in the positive class, and the true negatives represent the correct predictions
made in the negative class. The false positives and false negatives are the observations wrongly predicted
Four important metrics can be derived using the values in the confusion matrix, namely accuracy, precision, recall, and the F1 score:
7.2 ACCURACY
It is the ratio of the observations predicted correctly to the total number of observations. Accuracy
works best for datasets with an equal class distribution, and hence it is not always a good measure of model performance.
Accuracy = (True Positives + True Negatives) / (True Positives + False Positives + True Negatives + False Negatives)
7.3 PRECISION
It is the ratio of the positive observations predicted correctly to the total positive observations predicted. The higher the value of precision, the more exact the model's positive predictions are. Precision can also be used with an uneven class distribution.
Precision = True Positives / (True Positives + False Positives)
7.4 RECALL OR SENSITIVITY
It is the ratio of the positive observations predicted correctly to the total actual positive observations. A recall score of 50% or more generally indicates a reasonably performing model. Recall can also work with an uneven class distribution.
Recall = True Positives / (True Positives + False Negatives)
F1 Score
It is the harmonic mean of the precision and recall values. The F1 score is often the best metric for datasets with an uneven class distribution.
F1 Score = 2 x (Precision x Recall) / (Precision + Recall)
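These metrics can be computed directly from the predictions; a short sketch with scikit-learn, where y_test and y_pred are assumed to come from an already trained model:

```python
# Confusion matrix and per-class precision/recall/F1 from predictions.
# y_test and y_pred are assumed to come from an already trained classifier.
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

print("Accuracy:", accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
```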
8. IMPLEMENTATION
8.1 DATA COLLECTION
The first step in implementing the Speech Emotion Recognition system is to collect audio samples
under different emotional categories which can be used to train the model. The audio samples are usually wav or mp3 files and are publicly available for download. The following steps are explained relative to the TESS dataset used in this project.
8.2 PYTHON LIBRARY
The next step after data collection was to represent these audio files numerically, in order to perform further analysis on them. This step is called feature extraction, where quantitative values for different features of the audio are obtained. The pyAudioAnalysis library was used for this purpose. This
python library provides functions for short-term feature extraction, with tunable windowing parameters
such as frame size and frame step. At the end of this step, each audio file was represented as a row in a
CSV file with 34 columns representing the different features. Each feature will have a range of values for
one audio file, obtained over the various frames in that audio signal. pyAudioAnalysis is an open-source Python library that provides a wide range of audio-related functionalities focusing on feature extraction, classification, segmentation, and visualization. The library depends on several other packages:
• Numpy
• Matplotlib
• Scipy
• Sklearn
• Hmmlearn
• Simplejson
• eyeD3
• pydub
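A short sketch of the short-term feature extraction step is shown below. The exact module and function names vary across pyAudioAnalysis versions, so this assumes the current ShortTermFeatures API; the window/step sizes and the file name are illustrative.

```python
# Short-term feature extraction with pyAudioAnalysis (API names assume a recent version).
# Window of 50 ms with a 25 ms step; "angry_01.wav" is an illustrative file name.
import numpy as np
from pyAudioAnalysis import audioBasicIO, ShortTermFeatures

fs, signal = audioBasicIO.read_audio_file("angry_01.wav")
features, feature_names = ShortTermFeatures.feature_extraction(
    signal, fs, 0.050 * fs, 0.025 * fs)

# "features" holds one row per short-term feature and one column per frame;
# averaging over frames yields a single feature vector for the whole file.
file_vector = np.mean(features, axis=1)
print(len(feature_names), "features extracted")
```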
8.3 DATA VISUALIZATION
Visualizing the data gives more understanding of the problem and the type of solution to be built. The distribution of classes, the number of instances under each category, the spread of the data, the correlation between the features and clustering are a few ways to visualize the data. Python and R were used for the visualizations.
Primarily, the number of rows and columns and a preview of the data are viewed (see Figure 9.A).
Next, the number of examples under each category are counted (see Figure 9.B).
Distribution of the data on each of its feature can be visualized using box plots and violin plots or using
pair plots. The Seaborn package in Python provides methods to draw such plots. Shown here is the data
distribution of the feature ‘Energy’ among the different categories (see Figure 10.A and Figure 10.B).
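The kind of per-class distribution plot described here could be produced with Seaborn as follows; the DataFrame df and the column names 'emotion' and 'energy' are assumptions.

```python
# Box plot and violin plot of one feature ("energy") across the emotion categories.
# The DataFrame "df" and its column names are illustrative assumptions.
import matplotlib.pyplot as plt
import seaborn as sns

fig, axes = plt.subplots(1, 2, figsize=(12, 4))
sns.boxplot(data=df, x="emotion", y="energy", ax=axes[0])
sns.violinplot(data=df, x="emotion", y="energy", ax=axes[1])
plt.tight_layout()
plt.show()
```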
8.4 FEATURE ANALYSIS
The statistical language R provides several functions to effectively understand the statistics of the data. For each feature, the statistical values were visualized, and it was observed that the raw data was not on a uniform scale.
8.5 CORRELATION
The Pearson correlation coefficient values were analyzed, which provided insights into the features that correlate positively and negatively with the target class. In this observation, no single feature had a dominant correlation with the target class.
8.6 CLUSTERING
Clustering the data provided a deeper understanding of the features. The k-means clustering was
performed iteratively for various values of k and evaluated against the sum of squares metric. For k values
less than 7, the cluster grouping was imbalanced, and for values greater than 10, the clusters were
becoming too spread out. The elbow method of plotting reveals a sharp turn at k=7 which produces
balanced clusters (see Figure 13). Interestingly, there are 7 categories of emotions tagged in the dataset; hence, making 7 distinct cluster groups of the data shows a clear separation of the feature values among the groups.
8.7 DATA PREPARATION
After analyzing the data through various visualizations, the next step is to prepare the data for
processing. The steps of data preparation include fixing quality issues, standardization, and normalization.
First, the data is checked for quality issues such as missing values (see Figure 14), outliers (see Figure 15),
invalid data and duplicate data. There were no missing values, invalid or duplicate values in the dataset.
With the outlier analysis, for each feature, the proportion of the outliers are viewed along with
the changes in the mean value of the feature with and without the outliers (see Figure 17). This insight
will help decide if the outliers are actual outliers or if they contribute to decision making.
Next, the data was rescaled, as the raw feature values were recorded on different scales. After normalization, all feature values lie in the range 0 to 1 (see Figure 16).
8.8 FEATURE ENGINEERING
Feature engineering is the process of transforming, reducing or constructing features for the dataset. As mentioned earlier, in the raw data each feature has multiple values, one for each frame of the audio signal. Through the frame blocking and windowing techniques, the frame size and frame overlap values can be tuned to obtain accurate values of the audio signal. Further, using an averaging technique, average values of the different features are obtained for each audio signal. The transformed data now contains 34 discrete feature values per audio file.
Reducing the number of features is a crucial decision to take. Considering features to be removed
is generally based on subject knowledge and hence can affect the performance of the system. Next, a
series of experiments are performed with this prepared dataset in order to analyze the important
features.
9. EXPERIMENTS
9.1 APPROACH 1: WITH ALL EXTRACTED FEATURES
For this experiment, all 34 features were considered. The data represented 2399 audio files. The steps of the experiment were:
1. Using the K-fold cross-validation method, the dataset is split into training and validation sets for each fold.
2. A classifier model is built using one of the classification algorithms and its parameters are tuned.
3. The model is trained on the training set.
4. The trained model is evaluated using the validation set and the accuracy score is computed.
5. The model is tested and evaluation metrics such as precision, recall, and F1 scores are computed.
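A sketch of steps 1-5 using scikit-learn's K-fold utilities; X and y are assumed NumPy arrays, and the choice of 10 folds and an SVM classifier is illustrative.

```python
# K-fold cross-validation of a classifier, reporting accuracy, precision, recall and F1.
# X, y and the choice of 10 folds are illustrative assumptions.
import numpy as np
from sklearn.metrics import precision_recall_fscore_support, accuracy_score
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
scores = []
for train_idx, val_idx in skf.split(X, y):
    model = SVC().fit(X[train_idx], y[train_idx])
    y_pred = model.predict(X[val_idx])
    p, r, f1, _ = precision_recall_fscore_support(y[val_idx], y_pred, average="macro")
    scores.append((accuracy_score(y[val_idx], y_pred), p, r, f1))

print("Mean accuracy, precision, recall, F1:", np.mean(scores, axis=0))
```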
Fig.18 Implementation
The same experiment was repeated with various classification algorithms and the results were compared
(see Table 2). The results had higher accuracy scores than expected. There was an average performance
score of 75%. Besides the accuracy, the F1 score is the harmonic mean of Precision and Recall, thereby
making it a good metric for comparing the models. SVM has a good F1 score of 83%, making it a winning
algorithm for this approach. It can also be observed that the tree-based algorithms have an improved
performance with enhancements. The decision tree classifier has a score of 70%+, improved with the
Random forest classifier with a score of 75%+, and further improved using the Gradient Boosting classifier
with a score of 77%+. The accuracy score of Logistic regression is around 80%. However, the other metrics
have an average score of 70%. The Naïve Bayes classifier has a score of 74%+ and KNN classifier has a
score of 80%+.
[Table 2: evaluation scores of the classifiers for Approach 1 (table values not recovered)]
9.2 APPROACH 2: WITH MFCC COEFFICIENTS
Among all the features retrieved from the audio signal, several researchers suggest that the MFCC
values alone closely relate to the emotional tone of the audio. Studies suggest that using the MFCC
features can reduce the dimensionality of the training set, and thereby take less computation time. This
experiment repeats the same procedure as the previous but using only the MFCC values which are 13
dimensional.
Experiments similar to the first approach were carried out and the results are tabulated (see Table 3). The
results of the second approach had scores lower than the first approach. The average accuracy of the
models built using the second approach is 72%. KNN is the winning algorithm for this approach with a
leading score of 80%+. Similar to the previous approach, there were improvements in the Decision Tree results when moving to the ensemble tree classifiers.
[Table 3: evaluation scores of the classifiers for Approach 2 (table values not recovered)]
9.3 APPROACH 3: USING PCA FOR DECOMPOSITION
From the previous experiments, it can be learned that reducing the dimensions by simply cutting off features reduces the performance of the model. This is because cutting off features discards most of the information from the original dataset. Principal Component Analysis (PCA) is a
dimensionality reduction methodology in which the variation from the raw dataset can be specified to
be retained. PCA has a pre-requisite that the data should be standardized before performing PCA on it.
For this experiment, a scree plot was drawn (see Figure 19) to determine the optimal number of components to be retained. The elbow point was at 25 components, with 95% of the information being retained. The data is now 25-dimensional. The steps of model building and evaluation were carried out as in the first approach, with two additional data preparation steps:
1. All the numeric values of the data are standardized in order to maintain a uniform distribution.
2. PCA is performed on the data with a 95% information-retention value, and the corresponding number of principal components is retained.
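This preparation can be expressed as a scikit-learn pipeline; a sketch is shown below, where X and y are assumed and n_components=0.95 asks PCA to keep 95% of the variance.

```python
# Standardize, reduce with PCA keeping 95% of the variance, then classify.
# X and y are assumed; SVC is used here because it was the winning classifier.
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

pipeline = make_pipeline(StandardScaler(), PCA(n_components=0.95), SVC())
print(cross_val_score(pipeline, X, y, cv=10, scoring="f1_macro").mean())
```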
The implementation results of the third approach are tabulated (see Table 4) and the performance of the
different classifiers are compared. The average score of this approach is 77%, which is an improvement
over the other two approaches. The winning algorithm, as in the first approach, is SVM, with a high score of 90%+. The KNN classifier had scores of around 87%, close to the winning algorithm. The Decision tree
classifier improved from 68% to 71% using the Random Forest classifier, and to 75% with Gradient
boosting trees. The logistic regression has an average score of 87%. Naïve Bayes has a score of 75%.
[Table 4: evaluation scores of the classifiers for Approach 3 (table values not recovered)]
9.4 COMPARISON OF RESULTS
The various algorithms performed differently with each approach of the implementation. Since accuracy is not always a good measure for evaluating a model, the F1 scores can be used for comparison. Comparing the F1 scores from the results of each approach (see Table 5 and Figure 21) led to useful conclusions. The SVM, KNN, and Logistic Regression classifiers have a lower performance in the second approach and an improved performance in the third approach, as compared to the first approach. The tree-based classifiers, such as Decision Tree, Random Forest, and Gradient Boosted Trees, performed best in the first approach and have a lower performance with the other approaches. The Naïve Bayes classifier had a nearly constant performance score across all three approaches.
[Table 5: F1-score comparison of the classifiers across the three approaches (table values not recovered)]
9.5 IMPLEMENTATION ON THE KEEL DATASET
The third approach was found to perform better than the other two approaches. Therefore, the same implementation procedure was run on the KEEL dataset and the results are tabulated (see Table
6). The performance scores are lower than the TESS dataset, due to the smaller size of the training data
in KEEL. The highest score of the KEEL dataset is by the SVM classifier with an average value of 65%,
making it the winning algorithm. The performance of the different models is similar to the results of the
TESS dataset.
[Table 6: evaluation scores of the classifiers on the KEEL dataset (table values not recovered)]
10. RESULTS
Several observations and conclusions can be derived from the results of the implementation. There is
an overall improvement in the performance scores between the different approaches. The
implementation following the first approach had a fair performance, with a high score of 83% using the SVM algorithm; the second approach worked well using the KNN algorithm, with a high score of 80%; and the third approach had a 90% score using the SVM algorithm, which is the highest among all three approaches.
Observation 1: Upon comparing the results of the first and third approaches, the SVM and KNN algorithms
had improvements in the different performance metric scores. However, the Decision Tree, Random
Forest, and Gradient Boosting Trees had diminishing scores. The Bayesian algorithm performed consistently across both approaches.
The improved scores of the SVM and KNN algorithm can be attributed to the dimensionality reduction
used in the third approach. Reducing the dimensionality of the data increases the ratio of the size of the
dataset to the number of dimensions, which reduces the bias of the classifier towards any particular class.
Contrastingly, the performance of the tree-based algorithms improves with a larger feature set. This is
because the depth of the decision tree increases as more features are added, thereby helping it make more accurate decisions. The Bayesian principle of the Naïve Bayes algorithm works on the prior probability values calculated for each data sample in the training set, which remain unchanged between the two approaches; this explains its consistent performance.
Observation 2: The overall performance of the second approach was lower than the other two
approaches.
This is because the selective-feature approach fails to retain most of the information from the speech signal; it can be concluded that the MFCC values alone are not a sufficient basis for classifying emotions.
Observation 3: The classification report of the first approach (see Figure 22) shows that the predictions are biased towards certain emotion categories. This bias is due to the common properties of the features in these two categories. The dimensionality reduction step used in the third approach greatly minimized this bias (see Figure 23).
On comparison to the baseline system by Chen et al. [8], the proposed system has improvements in the
accuracy score (see Figure 24). The third experiment is the winning approach for the proposed
methodology.
[Figure 24: accuracy (%) comparison of the baseline and the proposed system for the three approaches: all 34 features, MFCC only, and PCA components]
11. CONCLUSION
The emerging growth and development in the field of AI and machine learning have led to the new era of
automation. Most of these automated devices work based on voice commands from the user. Many
advantages can be built over the existing systems if, besides recognizing the words, the machines could comprehend the emotion of the speaker (user). Some applications of a speech emotion detection system are computer-based tutorial applications, automated call center conversations, diagnostic tools used for therapy, and personal assistants.
In this thesis, the steps of building a speech emotion detection system were discussed in detail and some experiments were carried out to understand the impact of each step. Initially, the limited number of publicly available speech databases made it challenging to implement a well-trained model. Next, several novel approaches to feature extraction had been proposed in earlier works, and selecting the best approach required performing many experiments. Finally, the classifier selection involved learning about the strengths and weaknesses of each classification algorithm with respect to emotion recognition. At the end of the experimentation, it can be concluded that an integrated feature space produces a better performing model than a reduced, hand-picked subset of features.
12. FUTURE ENHANCEMENT
For future advancements, the proposed project can be further improved in terms of efficiency, accuracy, and usability. In addition to the basic emotions, the model can be extended to recognize states such as depression and mood changes. Such systems can be used by therapists to monitor the mood swings of their patients. A challenging goal in creating machines with emotion is to incorporate a sarcasm detection system. Sarcasm detection is a more complex problem than emotion detection, since sarcasm cannot be easily identified using only the words or tone of the speaker. A sentiment detection module based on vocabulary can be integrated with speech emotion detection to identify possible sarcasm. Therefore, in the future, many more applications of speech-based emotion recognition systems are likely to emerge.
13. REFERENCES
[1] Amazon Developer. Amazon Alexa. Available at: https://developer.amazon.com/alexa
[2] Store.google.com. (2018). Google Home Tips & Tricks – Google Store. Available at:
https://store.google.com/product/google_home_learn
[3] The Official Samsung Galaxy Site (2018). What is S Voice?
[4] Gartner.com (2018). Gartner Says 8.4 Billion Connected "Things" Will Be in Use in 2017. Available at: https://www.gartner.com/newsroom/id/3598917
[5] H. Cao, R. Verma, and A. Nenkova, “Speaker-sensitive emotion recognition via ranking: Studies
onacted and spontaneous speech,” Comput. Speech Lang., vol. 28, no. 1, pp. 186–202, Jan. 2015.
[6] L. Chen, X. Mao, Y. Xue, and L. L. Cheng, “Speech emotion recognition: Features and classification
models,” Digit. Signal Process., vol. 22, no. 6, pp. 1154–1160, Dec. 2012.
[7] T. L. Nwe, S. W. Foo, and L. C. De Silva, “Speech emotion recognition using hidden Markov
models,” Speech Commun., vol. 41, no. 4, pp. 603–623, Nov. 2003.
[8] J. Rong, G. Li, and Y.-P. P. Chen, “Acoustic feature selection for automatic emotion recognition
from speech,” Inf. Process. Manag., vol. 45, no. 3, pp. 315–328, May 2009.
[9] S. S. Narayanan, “Toward detecting emotions in spoken dialogs,” IEEE Trans. Speech Audio Process.
[11] J. Alcalá-Fdez, A. Fernandez, J. Luengo, J. Derrac, S. García, L. Sánchez, F. Herrera. KEEL Data-
Mining Software Tool: Data Set Repository, Integration of Algorithms and Experimental Analysis
Framework. Journal of Multiple-Valued Logic and Soft Computing 17:2-3 (2011) 255-287.
[12] S. Khalid, T. Khalil and S. Nasreen, “A survey of feature selection and feature extraction techniques in machine learning,” 2014 Science and Information Conference, 2014.
[13] pyAudioAnalysis (GitHub repository). Available at: https://github.com/tyiannak/pyAudioAnalysis