Fake News Detection Using Machine Learning (Review Paper)
Non-Syllabus Project
Semester V, 2022
NSP SEM-V
Poornima College Of Engineering, Jaipur (Rajasthan)
to spread false stories. Third, fake news changes the way people interpret and respond to real news; for instance, some fake news is created simply to spark people's distrust and confuse them, impeding their ability to separate what is true from what is not. To help alleviate the negative effects caused by fake news (both to benefit the public and the news ecosystem), it is pivotal that we develop methods to automatically detect fake news broadcast on social media (3).

The Internet and social media have made access to news information much easier and more comfortable (2). Internet users can often follow the events of their concern online, and the increasing number of mobile devices makes this process even easier. But with great possibilities come great challenges. Mass media have an enormous influence on society, and as often happens, there is someone who wants to take advantage of this fact. Sometimes, to achieve certain goals, mass media may manipulate information in several ways. This results in news articles that are not fully true, or even fully false. There even exist numerous websites that produce fake news almost exclusively. They deliberately publish hoaxes, half-truths, propaganda and disinformation asserting to be real news, often using social media to drive web traffic and amplify their effect. The main goal of fake news websites is to affect public opinion on certain matters (mostly political). Examples of such websites can be found in Ukraine, the United States of America, Germany, China and many other countries (4). Fake news is therefore a global issue as well as a global challenge. Many scientists believe that the fake news issue can be addressed by means of machine learning and artificial intelligence (5). There is a reason for that: recently, AI algorithms have begun to work much better on many classification problems (image recognition, voice detection and so on) because hardware is cheaper and larger datasets are available.

There are several influential papers on automatic deception detection. In (6) the authors give a general overview of the available methods for the task. In (7) the authors describe their system for fake news detection based on the feedback for the specific news in micro blogs. In (8) the authors develop two systems for deception detection, based on support vector machines and a Naive Bayes classifier (the latter is also employed in the system described in this paper) respectively. They collected the data by asking people to directly provide true or false information on several topics – abortion, death penalty and friendship. The detection accuracy achieved by their system is around 70%.

This paper describes a simple fake news detection system based on artificial intelligence algorithms – the naïve Bayes classifier, Random Forest and Logistic Regression. The goal of the research is to examine how these particular methods work for this particular problem, given a manually labelled news dataset, and to support (or not) the idea of using AI for fake news detection. The difference between this article and papers on similar topics is that in this paper Logistic Regression was specifically used for fake news detection; also, the developed system was tested on a relatively new data set, which gave a chance to evaluate its performance on recent data.

A. Characteristics of Fake News
● They often have grammatical mistakes.
● They are often emotionally coloured.
● They often try to affect readers' opinion on some topics.
● Their content is not always true.
● They often use attention-seeking words, news format and click baits.
● They are too good to be true.
● Their sources are not genuine most of the time (9).

II. LITERATURE REVIEW
Mykhailo Granik et al. in their paper (3) show a simple approach for fake news detection using a naive Bayes classifier. This approach was implemented as a software system and tested against a data set of Facebook news posts. The posts were collected from three large Facebook pages each from the right and from the left wing, as well as three large mainstream political news pages (Politico, CNN, ABC News). They achieved a classification accuracy of roughly 74%. Classification accuracy for fake news is slightly
worse, which may be caused by the skewness of the dataset: only 4.9% of its news is fake.

Himank Gupta et al. (10) gave a framework based on different machine learning approaches that deals with various problems including accuracy shortage, time lag (BotMaker) and high processing time to handle thousands of tweets in 1 second. First, they collected 400,000 tweets from the HSpam14 dataset. Then they further characterized the 150,000 spam tweets and 250,000 non-spam tweets. They also derived some lightweight features along with the Top-30 words providing the highest information gain from a Bag-of-Words model. They were able to achieve an accuracy of 91.65% and surpassed the existing solution by approximately 18%.

Marco L. Della Vedova et al. (11) first proposed a novel ML fake news detection system which, by combining news content and social context features, outperforms existing methods in the literature, increasing its accuracy up to 78.8%. Second, they implemented their method within a Facebook Messenger chatbot and validated it with a real-world application, obtaining a fake news detection accuracy of 81.7%. Their goal was to classify a news item as reliable or fake; they first described the datasets they used for their test, then presented the content-based approach they implemented and the method they proposed to combine it with a social-based approach available in the literature. The resulting dataset is composed of 15,500 posts, coming from 32 pages (14 conspiracy pages, 18 scientific pages), with more than 2,300,000 likes from over 900,000 users. 8,923 (57.6%) posts are hoaxes and 6,577 (42.4%) are non-hoaxes.

Cody Buntain et al. (12) developed a method for automating fake news detection on Twitter by learning to predict accuracy assessments in two credibility-focused Twitter datasets: CREDBANK, a crowdsourced dataset of accuracy assessments for events in Twitter, and PHEME, a dataset of potential rumours in Twitter and journalistic assessments of their accuracy. They applied this method to Twitter content sourced from BuzzFeed's fake news dataset. A feature analysis identified the features that are most predictive for crowdsourced and journalistic accuracy assessments, the results of which are consistent with prior work. They rely on identifying highly retweeted threads of conversation and use the features of these threads to classify stories, limiting this work's applicability to the set of popular tweets. Since the majority of tweets are rarely retweeted, this method is therefore only usable on a minority of Twitter conversation threads.

III. METHODOLOGY
This paper explains the system, which is developed in three parts. The first part is static, which works on a machine learning classifier: we studied and trained the model with 4 different classifiers and chose the best classifier for the final execution. The second part is dynamic, which takes a keyword/text from the user and searches online for the truth probability of the news. The third part checks the authenticity of the URL input by the user. In this paper, we have used Python and its Sci-kit libraries (14). Python has a huge set of libraries and extensions which can be easily used for machine learning. The Sci-kit Learn library is the best source for machine learning algorithms: nearly all types of machine learning algorithms are readily available for Python, so easy and quick evaluation of ML algorithms is possible. We have used Django for the web-based deployment of the model; it provides client-side implementation using HTML, CSS and JavaScript. We have also used Beautiful Soup (bs4) and requests for online scraping.

(A) System Design-

Figure 1 System Design

(B) System Architecture-
i) Static Search- The architecture of the static part of the fake news detection system is relatively simple and is designed keeping in mind the basic machine learning process flow.
The system design is shown below and is self-explanatory. The main processes in the design are-

Figure 2 System Architecture

ii) Dynamic Search- The second search field of the site asks for specific keywords to be searched on the net, upon which it provides a suitable output for the percentage probability of that term actually being present in an article, or of a similar article with those keyword references in it.

iii) URL Search- The third search field of the site accepts a specific website domain name, upon which the implementation looks for the site in our true sites database or the blacklisted sites database. The true sites database holds the domain names which regularly provide proper and authentic news, and vice versa. If the site is not found in either of the databases, then the implementation does not classify the domain; it simply states that the news aggregator does not exist.

IV. IMPLEMENTATION
4.1 DATA COLLECTION AND ANALYSIS
We can get online news from different sources like social media websites, search engines, homepages of news agency websites or fact-checking websites. On the Internet, there are a few publicly available datasets for fake news classification like BuzzFeed News, LIAR (15), BS Detector etc. These datasets have been widely used in different research papers for determining the veracity of news. In the following sections, we discuss in brief the sources of the dataset used in this work.

Online news can be collected from different sources, such as news agency homepages, search engines, and social media websites. However, manually determining the veracity of news is a challenging task, usually requiring evaluators with domain expertise who perform careful analysis of claims and additional evidence, context, and reports from authoritative sources. Generally, news data with annotations can be gathered in the following ways: expert journalists, fact-checking websites, industry detectors, and crowdsourced workers. However, there are no agreed-upon benchmark datasets for the fake news detection problem. Gathered data must be pre-processed – that is, cleaned, transformed and integrated – before it can undergo the training process (16). The dataset that we used is explained below.

LIAR: This dataset is collected from the fact-checking website PolitiFact through its API (15). It includes 12,836 human-labelled short statements, which are sampled from various contexts, such as news releases, television or radio interviews, campaign speeches, etc. The labels for news truthfulness are fine-grained multiple classes: pants-fire, false, barely-true, half-true, mostly-true, and true. The data source used for this project is the LIAR dataset, which contains 3 files in .csv format for test, train and validation. Below is some description of the data files used for this project.

1. LIAR: A Benchmark Dataset for Fake News Detection. William Yang Wang, "Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection, in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), short paper, Vancouver, BC, Canada, July 30 – August 4, ACL. Below are the columns used to produce the 3 datasets that have been used in this project-
● Column 1: Statement (news headline or text).
● Column 2: Label (label class contains: True, False).
The datasets used for this project were in csv format, named train.csv, test.csv and valid.csv.

2. REAL_OR_FAKE.CSV: we used this dataset for the passive aggressive classifier. It contains 3 columns viz. 1- Text/keyword, 2- Statement, 3- Label (Fake/True).
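As a sketch of how such labelled statements can be loaded and collapsed to a binary label: the six fine-grained LIAR labels and the statement/label columns come from the dataset description above, but the sample rows and the exact grouping of the middle labels into the two binary classes are illustrative assumptions, not the paper's actual mapping.

```python
import csv
import io

# Tiny in-memory sample standing in for train.csv (statement, label).
# Rows are invented for illustration only.
SAMPLE_CSV = """statement,label
Building a wall will cut crime in half,pants-fire
The state budget grew by 4 percent last year,mostly-true
Unemployment is at a ten-year low,true
The senator voted against the bill twice,half-true
"""

# Assumption: labels at least "half-true" count as True, the rest as False.
TRUE_LABELS = {"true", "mostly-true", "half-true"}

def load_binary_dataset(csv_text):
    """Parse LIAR-style rows and map fine-grained labels to True/False."""
    rows = csv.DictReader(io.StringIO(csv_text))
    data = []
    for row in rows:
        binary = "True" if row["label"] in TRUE_LABELS else "False"
        data.append((row["statement"], binary))
    return data

for statement, label in load_binary_dataset(SAMPLE_CSV):
    print(label, "-", statement)
```

In practice the same function would be pointed at the real train.csv/test.csv/valid.csv files via `open()` instead of the inline sample.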
4.2 DEFINITION AND DETAILS
Pre-processing Data
Social media data is highly unstructured – the majority of it is informal communication with typos, slang, bad grammar etc. (17). The quest for increased performance and reliability has made it imperative to develop techniques for better utilisation of resources to make informed decisions (18). To achieve better insights, it is necessary to clean the data before it can be used for predictive modelling. For this purpose, basic pre-processing was done on the news training data. This step consisted of:

Data Cleaning:
While reading data, we get it in a structured or unstructured format. A structured format has a well-defined pattern, whereas unstructured data has no proper structure. In between the two, there is the semi-structured format, which is comparably more structured than the unstructured format.
Cleaning up the text data is necessary to highlight the attributes that we want our machine learning system to pick up on. Cleaning (or pre-processing) the data typically consists of a number of steps:
a) Remove punctuation
Punctuation can provide grammatical context to a sentence, which supports our understanding. But for our vectorizer, which counts the number of words and not the context, it does not add value, so we remove all special characters. e.g., How are you? -> How are you
b) Tokenization
Tokenizing separates text into units such as sentences or words. It gives structure to previously unstructured text. e.g., Plata o Plomo -> 'Plata', 'o', 'Plomo'.
c) Remove stop words
Stop words are common words that will likely appear in any text. They do not tell us much about our data, so we remove them. e.g., silver or lead is fine for me -> silver, lead, fine.
d) Stemming
Stemming helps reduce a word to its stem form. It often makes sense to treat related words in the same way. It removes suffixes like "ing", "ly", "s", etc. by a simple rule-based approach. It reduces the corpus of words, but often the actual words get neglected. e.g., Entitling, Entitled -> Entitle. Note: some search engines treat words with the same stem as synonyms (18).

B. Feature Generation
We can use text data to generate a number of features like word count, frequency of large words, frequency of unique words, n-grams etc. By creating a representation of words that captures their meanings, semantic relationships, and the numerous types of contexts they are used in, we can enable the computer to understand text and perform clustering, classification etc. (19).

Vectorizing Data
Vectorizing is the process of encoding text as integers, i.e., in numeric form, to create feature vectors so that machine learning algorithms can understand our data.

1. Vectorizing Data: Bag-Of-Words
Bag of Words (BoW) or Count Vectorizer describes the presence of words within the text data. It gives a result of 1 if the word is present in the sentence and 0 if it is not. It therefore creates a bag of words with a document-matrix count for each text document.

2. Vectorizing Data: N-Grams
N-grams are simply all combinations of adjacent words or letters of length n that we can find in our source text. N-grams with n = 1 are called unigrams; similarly, bigrams (n = 2), trigrams (n = 3) and so on can also be used. Unigrams usually do not contain as much information as bigrams and trigrams. The basic principle behind n-grams is that they capture which letter or word is likely to follow a given word. The longer the n-gram (the higher the n), the more context you have to work with (20).

3. Vectorizing Data: TF-IDF
TF-IDF computes the "relative frequency" with which a word appears in a document compared to its frequency across all documents. The TF-IDF weight represents the relative importance of a term in the document and the entire corpus (17). Note: it is used for search engine scoring, text summarization and document clustering.
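The cleaning and bag-of-words/n-gram steps described above can be sketched in plain Python; the stop-word list and suffix rules below are toy stand-ins for what libraries like NLTK or Sci-kit Learn provide, not the paper's actual configuration.

```python
import re
from collections import Counter

# Toy stop-word list for illustration; real pipelines use a full list.
STOP_WORDS = {"is", "a", "the", "for", "me", "or", "are", "how", "you"}

def clean(text):
    """Lower-case, strip punctuation, tokenize, drop stop words, crude stemming."""
    text = re.sub(r"[^\w\s]", "", text.lower())          # a) remove punctuation
    tokens = text.split()                                 # b) tokenization
    tokens = [t for t in tokens if t not in STOP_WORDS]   # c) stop-word removal
    stemmed = []                                          # d) naive rule-based stemming
    for t in tokens:
        for suffix in ("ing", "ly", "s"):
            if t.endswith(suffix) and len(t) > len(suffix) + 2:
                t = t[: -len(suffix)]
                break
        stemmed.append(t)
    return stemmed

def bag_of_words(tokens):
    """Count-vectorizer style term counts for one document."""
    return Counter(tokens)

def ngrams(tokens, n=2):
    """All contiguous n-token combinations (bigrams by default)."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = clean("Silver or lead is fine for me!")
print(tokens)
print(bag_of_words(tokens))
print(ngrams(tokens))
```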
TF stands for Term Frequency: it calculates how frequently a term appears in a document. Since every document's size varies, a term may appear more often in a long document than in a short one; thus, the term frequency is often divided by the document length:

TF(t) = (number of times term t appears in the document) / (total number of terms in the document)

IDF stands for Inverse Document Frequency: a word is not of much use if it is present in all the documents. Certain terms like "a", "an", "the", "on", "of" etc. appear many times in a document but are of little significance. IDF weighs down the importance of these terms and increases the importance of rare ones; the higher the IDF value, the more unique the word (17):

IDF(t) = log(total number of documents / number of documents containing term t)

The TF-IDF weight is the product of the two, applied on the body text, so the relative count of each word in the sentences is stored in the document matrix:

TF-IDF(t) = TF(t) × IDF(t)

Note: vectorizers output sparse matrices. A sparse matrix is a matrix in which most entries are 0 (21).

C. Algorithms used for Classification
This section deals with training the classifier. Different classifiers were investigated to predict the class of the text. We explored specifically four different machine learning algorithms – Multinomial Naïve Bayes, Passive Aggressive Classifier, Logistic Regression and Random Forest. The implementations of these classifiers were done using the Python library Sci-kit Learn.

A brief introduction to the algorithms-

1. Naïve Bayes Classifier
This classification technique is based on Bayes' theorem, which assumes that the presence of a particular feature in a class is independent of the presence of any other feature. It provides a way to calculate the posterior probability:

P(c|x) = P(x|c) · P(c) / P(x)

where
P(c|x) = posterior probability of the class given the predictor
P(c) = prior probability of the class
P(x|c) = likelihood (probability of the predictor given the class)
P(x) = prior probability of the predictor

2. Random Forest
Random Forest is a trademark term for an ensemble of decision trees. In Random Forest, we have a collection of decision trees (known as a "forest"). To classify a new object based on its attributes, each tree gives a classification and we say the tree "votes" for that class. The forest chooses the classification having the most votes (over all the trees in the forest). Random forest is a classification algorithm consisting of many decision trees. It uses bagging and feature randomness when building each individual tree to try to create an uncorrelated forest of trees whose prediction by committee is more accurate than that of any individual tree. Each individual tree in the random forest spits out a class prediction, and the class with the most votes becomes our model's prediction. The reason the random forest model works so well is that a large number of relatively uncorrelated models (trees) operating as a committee will outperform any of the individual constituent models. So how does random forest ensure that the behaviour of each individual tree is not too correlated with the behaviour of any of the other trees in the model? It uses the following two methods:

2.1 Bagging (Bootstrap Aggregation) – Decision trees are very sensitive to the data they are trained on: small changes to the training set can result in significantly different tree structures. Random forest takes advantage of this by allowing each individual tree to randomly sample from the dataset with replacement, resulting in different trees. This process is known as bagging or bootstrapping.

2.2 Feature Randomness – In a normal decision tree, when it is time to split a node, we consider every possible feature and pick the one that produces the most separation between the observations in the left node vs. those in the right node. In contrast, each tree in a random forest can pick only from a random subset of features. This forces even more variation among the trees in the model and ultimately results in lower correlation across trees and more diversification (22).
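The posterior-probability rule from the "Naïve Bayes Classifier" subsection above can be sketched as a tiny multinomial naïve Bayes over word counts. The toy documents and the Laplace smoothing constant are illustrative assumptions, not the paper's data or code.

```python
import math
from collections import Counter, defaultdict

# Toy labelled corpus: (tokens, class). Purely illustrative data.
DOCS = [
    (["shocking", "miracle", "cure", "click"], "fake"),
    (["you", "wont", "believe", "this", "click"], "fake"),
    (["senate", "passes", "budget", "bill"], "real"),
    (["court", "rules", "on", "budget", "bill"], "real"),
]

def train(docs, alpha=1.0):
    """Estimate log P(c) and Laplace-smoothed log P(word|c)."""
    priors = Counter(c for _, c in docs)
    counts = defaultdict(Counter)
    for tokens, c in docs:
        counts[c].update(tokens)
    vocab = {w for cnt in counts.values() for w in cnt}
    model = {}
    for c in priors:
        total = sum(counts[c].values())
        log_like = {
            w: math.log((counts[c][w] + alpha) / (total + alpha * len(vocab)))
            for w in vocab
        }
        model[c] = (math.log(priors[c] / len(docs)), log_like)
    return model, vocab

def classify(model, vocab, tokens):
    """Pick the class maximizing log P(c) + sum of log P(word|c); the
    constant denominator P(x) from Bayes' rule is dropped."""
    scores = {
        c: log_prior + sum(log_like[w] for w in tokens if w in vocab)
        for c, (log_prior, log_like) in model.items()
    }
    return max(scores, key=scores.get)

model, vocab = train(DOCS)
print(classify(model, vocab, ["miracle", "cure", "shocking"]))  # fake
print(classify(model, vocab, ["senate", "budget"]))             # real
```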
3. Logistic Regression
It is a classification algorithm, not a regression algorithm. It is used to estimate discrete values (binary values like 0/1, yes/no, true/false) based on a given set of independent variable(s). In simple words, it predicts the probability of occurrence of an event by fitting data to a logit function; hence, it is also known as logit regression. Since it predicts a probability, its output values lie between 0 and 1 (as expected). Mathematically, the log odds of the outcome are modelled as a linear combination of the predictor variables (23):

odds = p / (1 − p) = probability of the event occurring / probability of the event not occurring
ln(odds) = ln(p / (1 − p))
logit(p) = ln(p / (1 − p)) = b0 + b1X1 + b2X2 + b3X3 + … + bkXk

4. Passive Aggressive Classifier
The Passive Aggressive algorithm is an online algorithm, ideal for classifying massive streams of data (e.g., Twitter). It is easy to implement and very fast. It works by taking an example, learning from it and then throwing it away (24). Such an algorithm remains passive for a correct classification outcome, and turns aggressive in the event of a misclassification, updating and adjusting. Unlike most other algorithms, it does not converge. Its purpose is to make updates that correct the loss, causing very little change in the norm of the weight vector (25).

4.3 IMPLEMENTATION STEPS
A. Static Search Implementation-
In the static part, we have trained and used 3 out of the 4 algorithms for classification. They are Naïve Bayes, Random Forest and Logistic Regression.
Step 1: In the first step, we extracted features from the already pre-processed dataset. These features are Bag-of-Words, TF-IDF features and n-grams.
Step 2: Here, we built all the classifiers for predicting fake news. The extracted features are fed into the different classifiers. We used the Naive Bayes, Logistic Regression and Random Forest classifiers from sklearn. Each of the extracted features was used in all of the classifiers.
Step 3: Once the models were fitted, we compared the F1 scores and checked the confusion matrices.
Step 4: After fitting all the classifiers, the 2 best-performing models were selected as candidate models for fake news classification.
Step 5: We performed parameter tuning by applying GridSearchCV to these candidate models and chose the best-performing parameters for these classifiers.
Step 6: Finally, the selected model was used for fake news detection, with the probability of truth.
Step 7: Our finally selected and best-performing classifier was Logistic Regression, which was then saved to disk. It is used to classify fake news: it takes a news article as input from the user, and the model produces the final classification output, which is shown to the user along with the probability of truth.

B. Dynamic Search Implementation-
Our dynamic implementation contains 3 search fields, which are-
1) Search by article content.
2) Search using key terms.
3) Search for a website in the database.
In the first search field we have used Natural Language Processing to come up with a proper solution to the problem; hence we have tried to build a model which can classify fake news according to the terms used in news articles. Our application uses NLP techniques like CountVectorization and TF-IDF Vectorization before passing the text through a Passive Aggressive Classifier to output the authenticity as a percentage probability of an article.
The second search field of the site asks for specific keywords to be searched on the net, upon which it provides a suitable output for the percentage probability of that term actually being present in an article, or of a similar article with those keyword references in it.
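The passive-aggressive behaviour used in the first search field's pipeline (passive on a correctly classified example, aggressive on a mistake) can be sketched as a simplified PA-I update on toy bag-of-words vectors. This is an illustrative sketch with invented data and an assumed aggressiveness constant C, not the paper's implementation (which uses sklearn's PassiveAggressiveClassifier).

```python
# Simplified online Passive Aggressive (PA-I) binary classifier.
# Labels are +1 / -1; each update is the closed-form PA-I step:
#   loss = max(0, 1 - y * (w . x));  tau = min(C, loss / ||x||^2);  w += tau * y * x

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def pa_train(examples, n_features, C=1.0, epochs=5):
    w = [0.0] * n_features
    for _ in range(epochs):
        for x, y in examples:
            loss = max(0.0, 1.0 - y * dot(w, x))
            if loss > 0.0:                        # aggressive: correct the mistake
                norm_sq = dot(x, x) or 1.0
                tau = min(C, loss / norm_sq)
                w = [wi + tau * y * xi for wi, xi in zip(w, x)]
            # else: passive, weights unchanged
    return w

def pa_predict(w, x):
    return 1 if dot(w, x) >= 0 else -1

# Toy bag-of-words vectors: [count("click"), count("budget")]
examples = [([2.0, 0.0], -1), ([0.0, 2.0], 1), ([1.0, 0.0], -1), ([0.0, 1.0], 1)]
w = pa_train(examples, n_features=2)
print(pa_predict(w, [3.0, 0.0]))  # -1 (fake-leaning)
print(pa_predict(w, [0.0, 3.0]))  # 1 (real-leaning)
```

Because each example is used for one update and then discarded, the same loop works over a stream of incoming articles without storing the whole dataset.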
The third search field of the site accepts a specific website domain name, upon which the implementation looks for the site in our true sites database or the blacklisted sites database. The true sites database holds the domain names which regularly provide proper and authentic news, and vice versa. If the site is not found in either of the databases, then the implementation does not classify the domain; it simply states that the news aggregator does not exist.

Working-
The problem can be broken down into 3 statements-
1) Use NLP to check the authenticity of a news article.
2) If the user has a query about the authenticity of a search query, then he/she can directly search on our platform, and using our custom algorithm we output a confidence score.
3) Check the authenticity of a news source.
These sections have been built as search fields that take inputs in 3 different forms in our implementation of the problem statement.

4.4 EVALUATION METRICS
To evaluate the performance of algorithms for the fake news detection problem, various evaluation metrics have been used. In this subsection, we review the most widely used metrics for fake news detection. Most existing approaches consider the fake news problem as a classification problem that predicts whether a news article is fake or not:

True Positive (TP): when predicted fake news pieces are actually classified as fake news;
True Negative (TN): when predicted true news pieces are actually classified as true news;
False Negative (FN): when predicted true news pieces are actually classified as fake news;
False Positive (FP): when predicted fake news pieces are actually classified as true news.

Confusion Matrix
A confusion matrix is a table that is often used to describe the performance of a classification model (or "classifier") on a set of test data for which the true values are known. It allows the visualization of the performance of an algorithm. A confusion matrix is a summary of prediction results on a classification problem. The number of correct and incorrect predictions is summarized with count values and broken down by class. This is the key to the confusion matrix: it shows the ways in which the classification model is confused when it makes predictions. It gives us insight not only into the errors being made by a classifier, but more importantly the types of errors that are being made (26).

                    Class 1 (Predicted)   Class 2 (Predicted)
Class 1 (Actual)    TP                    FN
Class 2 (Actual)    FP                    TN

These metrics are commonly used in the machine learning community and enable us to evaluate the performance of a classifier from different perspectives. Specifically, accuracy measures the similarity between predicted fake news and real fake news.

4.5 SNAPSHOT OF SYSTEM WORKING
Static System-

Figure 3 Static output (True)

Figure 4 Static output (False)

V. RESULTS
Implementation was done using the algorithms below with vector features: count vectors and TF-IDF vectors at word level and n-gram level. Accuracy was noted for all models. We used the K-fold cross-validation technique to improve the effectiveness of the models.
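The metrics from Section 4.4 can be computed directly from the confusion-matrix counts. This is a generic sketch (not the paper's code); it treats "fake" as the positive class, and the sample labels are invented for illustration.

```python
def confusion_counts(y_true, y_pred, positive="fake"):
    """Count TP, TN, FP, FN with `positive` as the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp, tn, fp, fn

def metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall and F1 from the four confusion counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

y_true = ["fake", "fake", "fake", "real", "real"]
y_pred = ["fake", "fake", "real", "real", "fake"]
tp, tn, fp, fn = confusion_counts(y_true, y_pred)
print(tp, tn, fp, fn)           # 2 1 1 1
print(metrics(tp, tn, fp, fn))  # accuracy 0.6; precision, recall, F1 all 2/3
```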
As evident above for static search, our best model came out to be Logistic Regression, with an accuracy of 65%. Hence, we then used grid search parameter optimization to increase the performance of logistic regression, which gave us an accuracy of 75%. We can therefore say that if a user feeds a particular news article or its headline into our model, there is a 75% chance that it will be classified according to its true nature.

The user can check the news article or keywords online; he can also check the authenticity of the website. The accuracy of the dynamic system is 93%, and it increases with every iteration. We intend to build our own dataset, which will be kept up to date according to the latest news. All the live news and latest data will be kept in a database using a web crawler and an online database.
VII. REFERENCES
(1) Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu, "Fake News Detection on Social Media: A Data Mining Perspective", arXiv:1708.01967v3 [cs.SI], 3 Sep 2017.
(2) Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu, "Fake News Detection on Social Media: A Data Mining Perspective", arXiv:1708.01967v3 [cs.SI], 3 Sep 2017.
(3) M. Granik and V. Mesyura, "Fake news detection using naive Bayes classifier," 2017 IEEE First Ukraine Conference on Electrical and Computer Engineering (UKRCON), Kiev, 2017, pp. 900-903.
(4) Fake news websites. (n.d.) Wikipedia. (Online). Available: https://en.wikipedia.org/wiki/Fake_news_website. Accessed 6, 2017.
(5) Cade Metz. (2016, Dec. 16). The bittersweet sweepstakes to build an AI that destroys fake news.
(6) Conroy, N., Rubin, V. and Chen, Y. (2015). "Automatic deception detection: methods for finding fake news", Proceedings of the Association for Information Science and Technology, 52(1), pp. 1-4.