Natural Language Processing

SUBMITTED BY:
Kaklotar Preet P.
Roll No – 833, Seat No – 5324

GUIDED BY:
<name>
This is to certify that the seminar presentation and report titled “NLP: Natural Language
Processing” is the bonafide work carried out by Kaklotar Preet Popatbhai (5234), a student of
TYBCA Sem-VI at Sutex Bank College of Computer Application and Science, Amroli (Surat),
affiliated to Veer Narmad South Gujarat University. He/she has successfully completed his/her
seminar work in fulfilment of the requirements for the award of the degree of “Bachelor of
Computer Application” during the academic year 2023-24.
INDEX
Sr No.  Content           Page No.
1.      Abstract          4
2.      Introduction      4
4.      Level of NLP      8
6.      History of NLP    13
7.      Related Work      14
Natural Language Processing
Abstract
1. Introduction
A language can be defined as a set of rules or a set of symbols. Symbols are combined and
used for conveying or broadcasting information, and the symbols are governed by the rules.
Natural Language Processing can basically be classified into two parts, i.e. Natural Language
Understanding and Natural Language Generation, which cover the tasks of understanding and
generating text respectively (Figure 1).
Figure 1. Broad Classification of NLP
Linguistics is the science of language. It includes Phonology, which refers to sound;
Morphology, which refers to word formation; Syntax, which refers to sentence structure;
Semantics, which refers to meaning; and Pragmatics, which refers to understanding in context.
Noam Chomsky, one of the first linguists of the twentieth century to develop syntactic
theories, marked a unique position in the field of theoretical linguistics because he
revolutionised the area of syntax. NLP itself can be broadly categorized into two levels: a
higher level, which includes speech recognition, and a lower level, which corresponds to
natural language understanding. A few of the researched tasks of NLP are Automatic
Summarization, Co-reference Resolution, Discourse Analysis, Machine Translation,
Morphological Segmentation, Named Entity Recognition, Optical Character Recognition,
Part-of-Speech Tagging, etc. Some of these tasks have direct real-world applications, such as
machine translation, named entity recognition and optical character recognition. Automatic
summarization produces an understandable summary of a set of texts and provides summaries
or detailed information about text of a known type. Co-reference resolution refers
to determining, within a sentence or a larger set of text, which words refer to the same entity.
Discourse analysis refers to the task of identifying the discourse structure of connected
text. Machine translation refers to the automatic translation of text from one human language to another.
that modules can be adapted and replaced. Furthermore, a modular architecture allows for
different configurations and for dynamic distribution.
The structure of natural language processing (NLP) can be broken down into several
different components, each of which plays a critical role in the overall process of
understanding and generating human language. These components can be broadly grouped
into three main categories:
such as tokenization (breaking text into individual words or smaller units), part-of-speech
tagging (labeling words with their corresponding part of speech), and named entity recognition
(identifying and classifying named entities in the text).
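A minimal sketch of these preprocessing steps, assuming NLTK and its standard data packages (punkt, averaged_perceptron_tagger, maxent_ne_chunker, words) are installed; the sample sentence is made up for illustration:

```python
# Minimal preprocessing sketch using NLTK (assumes the punkt,
# averaged_perceptron_tagger, maxent_ne_chunker and words data
# packages have been downloaded via nltk.download()).
import nltk

text = "Google announced a new translation system in September 2016."

# Tokenization: break the text into individual words.
tokens = nltk.word_tokenize(text)

# Part-of-speech tagging: label each token with its part of speech.
tagged = nltk.pos_tag(tokens)

# Named entity recognition: identify and classify named entities.
entities = nltk.ne_chunk(tagged)

print(tokens)
print(tagged)
print(entities)
```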
3. Levels of NLP
The ‘levels of language’ are one of the most explanatory methods for representing Natural
Language Processing; they also help to generate NLP text by realizing Content Planning,
Sentence Planning and Surface Realization (Figure 2).
Figure 2. Phases of NLP architecture
Linguistics is the science which involves the meaning of language, language context and the
various forms of language. The important terminologies of Natural Language Processing are:
1. Phonology
Phonology is the part of linguistics which refers to the systematic arrangement of sound.
The term phonology comes from Ancient Greek: the element phono- means voice or sound,
and the suffix -logy refers to word or speech. In 1939 Nikolai Trubetzkoy described phonology
as “the study of sound pertaining to the system of language”, whereas Lass in 1998 wrote that
phonology refers broadly to the sounds of language; as a sub-discipline of linguistics, it could
be better explained as “phonology proper is concerned with the function, behaviour and
organization of sounds as linguistic items”. It includes the semantic use of sound to encode
the meaning of any human language.
2. Morphology
The different parts of a word represent the smallest units of meaning, known as morphemes.
Morphology, which studies the nature of words, is built on morphemes. An example could be
the word precancellation, which can be morphologically scrutinized into three separate
morphemes: the prefix pre-, the root cancella, and the suffix -tion. The interpretation of a
morpheme stays the same across all words, and humans can break any unknown word into
morphemes to understand its meaning. For example, adding the suffix -ed to a verb conveys
that the action of the verb took place in the past. Words that cannot be divided and have
meaning by themselves are called lexical morphemes (e.g. table, chair). The elements (e.g. -ed,
-ing, -est, -ly, -ful) that are combined with a lexical morpheme are known as grammatical
morphemes (e.g. worked, consulting, smallest, likely, useful). Those grammatical morphemes
that can occur only in combination with other morphemes are called bound morphemes
(e.g. -ed, -ing). Grammatical morphemes can be further divided into bound morphemes and
derivational morphemes.
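As a purely illustrative toy (not a real morphological analyzer), the following sketch strips a few common grammatical suffixes from the example words above; the suffix list and length check are made-up assumptions for the demo:

```python
# Toy illustration of splitting a grammatical morpheme from a lexical base.
# This is NOT a real morphological analyzer; the suffix list is a small
# made-up sample chosen to match the examples in the text above.
GRAMMATICAL_SUFFIXES = ["ed", "ing", "est", "ly", "ful", "tion"]

def split_morphemes(word):
    """Return (base, suffix) if a known grammatical suffix is found."""
    for suffix in GRAMMATICAL_SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)], "-" + suffix
    return word, None

for w in ["worked", "consulting", "smallest", "likely", "useful"]:
    print(w, "->", split_morphemes(w))
```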
3. Lexical
At the lexical level, humans as well as NLP systems interpret the meaning of individual words.
Several types of processing contribute to word-level understanding, the first of these being the
assignment of a part-of-speech tag to each word. In this processing, words that can act as more
than one part of speech are assigned the most probable part-of-speech tag based on the context
in which they occur. At the lexical level, words that have only one meaning can be replaced
by a semantic representation of that meaning. In an NLP system, the nature of the
representation varies according to the semantic theory deployed.
• Syntactic
This level focuses on examining the words in a sentence so as to uncover the grammatical
structure of the sentence. Both a grammar and a parser are required at this level. The output
of this level of processing is a representation of the sentence that conveys the structural
dependency relationships between the words. There are various grammars that can be utilized,
and not all NLP applications require a full parse of sentences: remaining parsing challenges,
such as prepositional phrase attachment and conjunction scope ambiguity, no longer impede
applications for which phrasal and clausal dependencies are adequate. Syntax conveys
meaning in most languages because order and dependency contribute to meaning. For
example, the two sentences ‘The cat chased the mouse’ and ‘The mouse chased the cat’ differ
only in terms of syntax, yet convey quite different meanings.
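A minimal sketch of recovering these dependency relationships, assuming spaCy and its small English model en_core_web_sm are installed:

```python
# Minimal dependency-parsing sketch using spaCy (assumes the
# en_core_web_sm model has been installed with
# `python -m spacy download en_core_web_sm`).
import spacy

nlp = spacy.load("en_core_web_sm")

for sentence in ["The cat chased the mouse.", "The mouse chased the cat."]:
    doc = nlp(sentence)
    # Print each token with its dependency label and syntactic head;
    # note how the subject/object roles swap between the two sentences.
    print(sentence)
    for token in doc:
        print(f"  {token.text:<7} {token.dep_:<6} head={token.head.text}")
```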
• Semantic
At the semantic level, most people think that meaning is determined; however, it is all the
levels together that contribute to meaning. Semantic processing determines the possible
meanings of a sentence by focusing on the interactions among word-level meanings in the
sentence. This level of processing can incorporate the semantic disambiguation of words with
multiple senses, in an analogous way to how the syntactic disambiguation of words that can
act as multiple parts of speech is handled at the syntactic level. For example, amongst other
meanings, ‘file’ as a noun can mean either a binder for gathering papers, or a tool to shape
one’s fingernails, or a line of individuals in a queue. The semantic level examines words not
only for their dictionary meaning, but also for the meaning they derive from the context of the
sentence.
Semantic context recognizes that most words have more than one meaning, but that we can
identify the appropriate one by looking at the rest of the sentence.
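A minimal word-sense disambiguation sketch for the ‘file’ example, using the Lesk algorithm shipped with NLTK (assumes the wordnet and punkt data packages are downloaded); Lesk is only a simple baseline, used here for illustration rather than as a state-of-the-art disambiguator:

```python
# Minimal word-sense disambiguation sketch using NLTK's Lesk algorithm
# (assumes nltk.download('wordnet') and nltk.download('punkt') were run).
from nltk.wsd import lesk
from nltk.tokenize import word_tokenize

sentences = [
    "Please put the report in the file on my desk.",
    "She smoothed her fingernails with a file.",
]

for s in sentences:
    # Pick the WordNet sense of the noun 'file' that best matches the context.
    sense = lesk(word_tokenize(s), "file", pos="n")
    print(s)
    print(" ", sense, "-", sense.definition() if sense else "no sense found")
```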
• Discourse
While syntax and semantics work with sentence-length units, the discourse level of NLP
works with units of text longer than a sentence; i.e. it does not interpret multi-sentence
texts as just a sequence of sentences, each of which can be interpreted singly. Rather, discourse
focuses on the properties of the text as a whole that convey meaning by making
connections between component sentences. Two of the most common discourse-level tasks
are anaphora resolution and discourse/text structure recognition. Anaphora resolution is the
replacing of words such as pronouns, which are semantically vacant, with the pertinent entity
to which they refer.
• Pragmatic:
Pragmatics is concerned with the purposeful use of language in situations and utilizes context
over and above the content of the text in order to understand the goal and to explain how extra
meaning is read into texts without literally being encoded in them. This requires much world
knowledge, including the understanding of intentions, plans, and goals. For example, the
following two sentences need resolution of the anaphoric term ‘they’, but this resolution
requires pragmatic or world knowledge.
goals may be achieved by evaluating the situation and the available communicative resources
and realizing the plans as a text, as shown in Figure 3. It is the opposite of understanding.
Speaker and Generator – To generate text we need a speaker or an application
and a generator or a program that renders the application’s intentions into a fluent phrase
relevant to the situation.
to choices of particular words, idioms, syntactic constructs, etc. Realization: the selected
and organized resources must be realized as an actual text or voice output.
Application or Speaker – This is only for maintaining the model of the situation. Here
the speaker just initiates the process and doesn’t take part in the language generation. It stores
the history, structures the content that is potentially relevant and deploys a representation
of what is actually known. All of these form the situation, while a subset of the propositions
that the speaker has is selected. The only requirement is that the speaker has to make sense
of the situation.
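A purely illustrative sketch of the generation pipeline described above (content selection, sentence planning and template-based surface realization); the facts and templates are made up for this example and do not come from any particular NLG system:

```python
# Purely illustrative sketch of a simple generation pipeline:
# content planning -> sentence planning -> surface realization.
# The facts and templates below are made up for this example.

facts = {"product": "Model X", "rating": 4.5, "reviews": 1200}

def content_planning(situation):
    # Select the subset of propositions the speaker wants to convey.
    return [("has_rating", situation["product"], situation["rating"]),
            ("has_reviews", situation["product"], situation["reviews"])]

def sentence_planning(propositions):
    # Decide how to group propositions and which constructs to use.
    return [("rating_sentence", propositions[0]),
            ("review_sentence", propositions[1])]

def surface_realization(plan):
    # Realize the plan as actual text using simple templates.
    templates = {
        "rating_sentence": "{1} has an average rating of {2} stars.",
        "review_sentence": "It has been reviewed {2} times.",
    }
    return " ".join(templates[name].format(*prop) for name, prop in plan)

print(surface_realization(sentence_planning(content_planning(facts))))
```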
5. History of NLP
In the late 1940s the term wasn’t even in existence, but work on machine translation (MT)
had already started. Research in this period was not completely localized. Russian and
English were the dominant languages for MT, but other languages, like Chinese, were also
used (Booth, 1967). MT/NLP research almost died out in 1966 following the ALPAC report,
which concluded that MT was going nowhere. But later on, some MT production systems
were providing output to their customers. By this time, work on the use of computers for
literary and linguistic studies had also started.
As early as 1960, signature work influenced by AI began with the BASEBALL question-
answering system. LUNAR and Winograd’s SHRDLU were natural successors of these
systems, and they were seen as a step up in sophistication in terms of their linguistic and task-
processing capabilities. There was a widespread belief that progress could only be made on
two fronts: one was the ARPA Speech Understanding Research (SUR) project, and the other
was major system-development projects building database front ends. The front-end projects
were intended to go beyond LUNAR in interfacing with large databases.
In the early 1980s computational grammar theory became a very active area of research,
linked with logics for meaning and knowledge representation, the ability to deal with the user’s
beliefs and intentions, and functions like emphasis and themes.
By the end of the decade, powerful general-purpose sentence processors like SRI’s
Core Language Engine and Discourse Representation Theory offered a means of
tackling more extended discourse within a grammatico-logical framework. This period
was one of a growing community. Practical resources, grammars, tools and parsers
became available, e.g. the Alvey Natural Language Tools. The (D)ARPA speech
recognition and message understanding (information extraction) conferences were significant
not only for the tasks they addressed but also for their emphasis on heavy evaluation, starting
a trend that became a major feature of the 1990s. Work on user modelling was one strand of
research, as was work on discourse structure. At the same time, as McKeown showed,
rhetorical schemas could be used for producing text that is both linguistically coherent and
communicatively effective. Some research in NLP marked out important topics for the future,
like word sense disambiguation and probabilistic networks; statistically coloured NLP and the
work on the lexicon also pointed in this direction.
Statistical language processing became a major focus in the 1990s, involving far more than
just data analysis. Information extraction and automatic summarisation were also points of focus.
6. Related Work
Many researchers have worked on NLP, building the tools and systems that make NLP what
it is today. Tools like sentiment analysers, part-of-speech (POS) taggers, chunking, named
entity recognition (NER), emotion detection and semantic role labelling have made NLP a
good topic for research.
A sentiment analyser works by extracting sentiments about a given topic. Sentiment analysis
consists of topic-specific feature term extraction, sentiment extraction, and association by
relationship analysis. It utilizes two linguistic resources for the analysis: a sentiment lexicon
and a sentiment pattern database. It analyses documents for positive and negative words and
tries to give ratings on a scale from -5 to +5.
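A minimal lexicon-based sentiment sketch using NLTK’s VADER analyser (assumes nltk.download('vader_lexicon') has been run); note that VADER reports a compound score between -1 and +1 rather than the -5 to +5 scale mentioned above, and the example reviews are made up:

```python
# Minimal lexicon-based sentiment sketch using NLTK's VADER analyser
# (assumes nltk.download('vader_lexicon') has been run).
from nltk.sentiment.vader import SentimentIntensityAnalyzer

analyser = SentimentIntensityAnalyzer()

reviews = [
    "The new phone is absolutely fantastic, I love the camera.",
    "Terrible battery life and the screen cracked after a week.",
]

for review in reviews:
    scores = analyser.polarity_scores(review)
    # 'compound' is a normalized score in [-1, +1]; positive values
    # indicate positive sentiment, negative values negative sentiment.
    print(f"{scores['compound']:+.2f}  {review}")
```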
Part-of-speech taggers exist for languages such as the European languages, and research is
being done on building part-of-speech taggers for other languages like Arabic, Sanskrit and
Hindi. A POS tagger can efficiently tag and classify words as nouns, adjectives, verbs, etc.
Most procedures for part-of-speech tagging work efficiently on European languages, but not
on Asian or Middle Eastern languages. A Sanskrit part-of-speech tagger specifically uses a
treebank technique, while for Arabic a Support Vector Machine (SVM) approach is used to
automatically tokenize, part-of-speech tag and annotate base phrases in Arabic text.
Using named entity recognition on Internet text is a problem, as people don’t use traditional
or standard English, which substantially degrades the performance of standard natural
language processing tools. By annotating phrases or tweets and building tools trained on
unlabelled, in-domain and out-of-domain data, the performance improves compared to
standard natural language processing tools.
Emotion detection is similar to sentiment analysis, but it works on social media platforms
where two languages are mixed, i.e. English plus another Indian language. It categorizes
statements into six groups based on emotions. During this process, the researchers were able
to identify the language of ambiguous words that are common to Hindi and English, and to
tag the lexical category or part of speech in mixed script by identifying the base language of
the speaker.
Semantic Role Labelling – SRL works by assigning semantic roles within a sentence. For
example, in the PropBank formalism, one assigns roles to words that are arguments of a verb in the
sentence. The precise arguments depend on the verb frame, and if there are multiple verbs
in a sentence, words might have multiple tags. State-of-the-art SRL systems comprise
several stages: creating a parse tree, identifying which parse tree nodes represent the
arguments of a given verb, and finally classifying these nodes to compute the
corresponding SRL tags.
Event discovery in social media feeds uses a graphical model to analyse social media
feeds and determine whether they contain the name of a person, the name of a venue, a place,
a time, etc. The model operates on noisy feeds of data to extract records of events by
aggregating information across multiple messages. Despite irrelevant noisy messages and very
irregular message language, the model was able to extract records with high accuracy.
However, there is some scope for improvement by using a broader array of features.
7. Applications of NLP
Natural Language Processing can be applied in various areas like Machine Translation,
Email Spam Detection, Information Extraction, Summarization, Question Answering, etc.
A. Machine Translation
As most of the world is online, the task of making data accessible and available to all is a
great challenge, and a major part of that challenge is the language barrier: there is a multitude
of languages with different sentence structures and grammars. In machine translation, phrases
are translated from one language to another with the help of a statistical engine like Google
Translate. The challenge with machine translation technologies is not directly translating
words but keeping the meaning of sentences intact along with grammar and tenses. Statistical
machine translation systems gather as much data as they can, find text that appears to be
parallel between two languages, and crunch that data to estimate the likelihood that something
in language A corresponds to something in language B. Google, in September 2016,
announced a new machine translation system based on artificial neural networks and deep
learning. In recent years, various methods have been proposed to automatically evaluate
machine translation quality by comparing hypothesis
translations with reference translations. Examples of such methods are word error rate,
position-independent word error rate, generation string accuracy, multi-reference word
error rate, the BLEU score and the NIST score. All these criteria try to approximate human
assessment and often achieve an astonishing degree of correlation to human subjective
evaluation of fluency and adequacy.
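A minimal sketch of computing one of these metrics, the BLEU score, with NLTK’s implementation; the reference and hypothesis sentences are made up, and smoothing is applied because such short toy sentences have very few n-grams:

```python
# Minimal BLEU-score sketch using NLTK (the sentences are made up;
# smoothing is applied because short sentences have few n-grams).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "is", "sitting", "on", "the", "mat"]]
hypothesis = ["the", "cat", "sat", "on", "the", "mat"]

score = sentence_bleu(reference, hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU score: {score:.3f}")
```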
B. Text Categorization
Categorization systems take as input a large flow of data, such as official documents, military
casualty reports, market data and newswires, and assign the items to predefined categories or
indices. For example, the Carnegie Group’s Construe system takes Reuters articles as input
and saves a great deal of time by doing the work that would otherwise be done by staff or
human indexers. Some companies have been using categorization systems to categorize
trouble tickets or complaint requests and route them to the appropriate desks. Another
application of text categorization is email spam filtering. Spam filters are becoming important
as the first line of defense against unwanted emails. The false negative and false positive issues
of spam filters are at the heart of NLP technology, and come down to the challenge of
extracting meaning from strings of text. A filtering solution applied to an email system uses a
set of protocols to determine which of the incoming messages are spam and which are not.
There are several types of spam filters available:
- Content filters: review the content within the message to determine whether it is spam or not.
- Header filters: review the email header looking for fake information.
- General blacklist filters: stop all emails from blacklisted senders.
- Rules-based filters: use user-defined criteria, such as stopping mails from a specific person
or stopping mails that include a specific word.
- Permission filters: require anyone sending a message to be pre-approved by the recipient.
- Challenge-response filters: require anyone sending a message to enter a code in order to gain
permission to send email.
C. Spam Filtering
Spam filtering works using text categorization, and in recent times various machine learning
techniques have been applied to text categorization or anti-spam filtering, like rule learning,
Naïve Bayes, memory-based learning, support vector machines, decision trees and maximum
entropy models, sometimes combining different learners. Using these approaches is better
because the classifier is learned from training data rather than built by hand. Naïve Bayes is
preferred because of its performance despite its simplicity. In text categorization two types of
models have been used; both assume that a fixed vocabulary is present. In the first model, a
document is generated by first choosing a subset of the vocabulary and then using the selected
words any number of times, at least once, irrespective of order. This is called the multi-variate
Bernoulli model. It captures the information of which words are used in a document,
irrespective of the number of occurrences and their order. In the second model, a document is
generated by choosing a set of word occurrences and arranging them in any order. This model
is called the multinomial model; in addition to what the multi-variate Bernoulli model
captures, it also captures information on how many times a word is used in a document.
Most text categorization approaches to anti-spam email filtering have used the multi-variate
Bernoulli model.
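A minimal sketch contrasting the two models using scikit-learn’s BernoulliNB and MultinomialNB classifiers; the tiny training set of emails and labels is made up for illustration:

```python
# Minimal sketch of the multi-variate Bernoulli vs multinomial models
# for spam filtering, using scikit-learn. The tiny training set is made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB, MultinomialNB

emails = [
    "win money now claim your free prize",
    "free prize waiting click now",
    "meeting agenda attached for tomorrow",
    "please review the project report draft",
]
labels = ["spam", "spam", "ham", "ham"]

# Bernoulli model: binary word presence/absence features.
bern_vec = CountVectorizer(binary=True)
bern_clf = BernoulliNB().fit(bern_vec.fit_transform(emails), labels)

# Multinomial model: word counts (how many times each word is used).
multi_vec = CountVectorizer()
multi_clf = MultinomialNB().fit(multi_vec.fit_transform(emails), labels)

test = ["claim your free money prize now"]
print("Bernoulli   :", bern_clf.predict(bern_vec.transform(test))[0])
print("Multinomial :", multi_clf.predict(multi_vec.transform(test))[0])
```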
D. Information Extraction
Information extraction is concerned with identifying phrases of interest in textual data.
For many applications, extracting entities such as names, places, events, dates, times and
prices is a powerful way of summarizing the information relevant to a user’s needs. In the
case of a domain-specific search engine, the automatic identification of important
information can increase the accuracy and efficiency of a directed search. Hidden Markov
models (HMMs) have been used to extract the relevant fields of research papers. These
extracted text segments are used to allow searches over specific fields, to provide effective
presentation of search results and to match references to papers. A familiar example is the
pop-up ads on websites showing the recent items you might have looked at on an online store,
with discounts.
Discovery of knowledge has become an important area of research over recent years.
Knowledge discovery research uses a variety of techniques to extract useful information from
source documents, such as part-of-speech (POS) tagging; chunking or shallow parsing; stop-
word removal, i.e. removing frequently used words that must be discarded before processing
documents; and stemming, i.e. mapping words to some base form. Stemming has two methods,
dictionary-based stemming and Porter-style stemming: the former has higher accuracy but a
higher cost of implementation, while the latter has a lower implementation cost and is usually
insufficient for IR. Compound or statistical phrases index multi-token units instead of single
tokens. Word sense disambiguation is the task of identifying the correct sense of a word in
context; when used for information retrieval, terms are replaced by their senses in the
document vector.
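A minimal sketch of two of these preprocessing steps, stop-word removal and Porter-style stemming, using NLTK (assumes the stopwords and punkt data packages are downloaded); the sample sentence is made up:

```python
# Minimal stop-word removal and Porter-style stemming sketch with NLTK
# (assumes nltk.download('stopwords') and nltk.download('punkt') were run).
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

text = "Knowledge discovery research uses a variety of techniques for extracting useful information"

stop_words = set(stopwords.words("english"))
stemmer = PorterStemmer()

tokens = word_tokenize(text.lower())
# Drop stop words, then map the remaining words to their stemmed base form.
filtered = [t for t in tokens if t.isalpha() and t not in stop_words]
stems = [stemmer.stem(t) for t in filtered]

print(filtered)
print(stems)
```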
The extracted information can be applied for a variety of purposes, for example to prepare a
summary, to build databases, to identify keywords, or to classify text items according to some
pre-defined categories. For example, CONSTRUE was developed for Reuters and is used for
classifying news stories. It has been suggested that while many IE systems can successfully
extract terms from documents, acquiring relations between the terms is still a difficulty.
PROMETHEE is a system that extracts lexico-syntactic patterns relative to a specific
conceptual relation. IE systems should work at many levels, from word
recognition to discourse analysis at the level of the complete document. The Blank Slate
Language Processor (BSLP) approach has been applied to the analysis of a real-life natural
language corpus consisting of responses to open-ended questionnaires in the field of
advertising.
There is a system called MITA (MetLife’s Intelligent Text Analyzer) that extracts
information from life insurance applications. Ahonen et al. suggested a mainstream
framework for text mining that uses pragmatic and discourse-level analyses of text.
E. Summarization
Information overload is a real problem in this digital age, and our reach and access to
knowledge and information already exceed our capacity to understand it. This trend is not
slowing down, so the ability to summarize data while keeping the meaning intact is highly
required. This is important not just for allowing us to recognize and understand the important
information in a large set of data; it is also used to understand deeper emotional meanings.
For example, a company may determine the general sentiment on social media and use it to
improve its latest product offering. This application is useful as a valuable marketing asset.
The type of text summarization depends on the number of documents, and the two important
categories are single-document summarization and multi-document summarization.
Summaries can also be of two types: generic or query-focused. The summarization task can
be either supervised or unsupervised. Training data is required in a supervised system for
selecting relevant material from the documents, and a large amount of annotated data is needed
for the learning techniques. A few techniques are as follows (a minimal extractive sketch is
given after this list):
- Bayesian Sentence-based Topic Model (BSTM) uses both term-sentence and
term-document associations for summarizing multiple documents.
- Factorization with Given Bases (FGB) is a language model where sentence bases
are the given bases and it utilizes document-term and sentence-term matrices.
This approach groups and summarizes the documents simultaneously.
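The models above are specific research systems; as a generic illustration of extractive summarization (not an implementation of BSTM or FGB), here is a minimal frequency-based sketch that scores sentences by the word frequencies they contain and keeps the top-scoring ones:

```python
# Minimal frequency-based extractive summarization sketch (a generic
# illustration, not an implementation of BSTM or FGB). Pure Python.
import re
from collections import Counter

def summarize(text, num_sentences=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)

    # Score each sentence by the total frequency of the words it contains.
    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    ranked = sorted(sentences, key=score, reverse=True)[:num_sentences]
    # Keep the selected sentences in their original order.
    return " ".join(s for s in sentences if s in ranked)

document = (
    "Natural language processing studies how computers handle human language. "
    "Summarization condenses a document while keeping its meaning intact. "
    "Frequency-based methods score sentences by how many frequent words they contain. "
    "The highest scoring sentences are then selected as the summary."
)
print(summarize(document))
```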
F. Dialogue System
Perhaps the most desirable application of the future, in the systems envisioned by large
providers of end-user applications, is dialogue systems. Dialogue systems focus on narrowly
defined applications (like a refrigerator or a home theater system) and currently use the
phonetic and lexical levels of language. It is believed that when these dialogue systems utilize
all levels of language processing, they offer the potential for fully automated dialogue systems,
whether through text or via voice. This could lead to systems that enable robots to interact
with humans in natural languages. Examples like Google’s Assistant, Microsoft’s Cortana,
Apple’s Siri and Amazon’s Alexa are software and devices that follow dialogue systems.
G. Medicine
NLP is applied in the field of medicine as well. The Linguistic String Project – Medical
Language Processor is one of the large-scale NLP projects in the field of medicine. The LSP-
MLP helps enable physicians to extract and summarize information on any signs or symptoms,
drug dosage and response data, with the aim of identifying possible side effects of any
medicine, while highlighting or flagging relevant data items. The National Library of
Medicine is developing the SPECIALIST system. It is expected to function as an information
extraction
tool for biomedical knowledge bases, particularly Medline abstracts. The lexicon was
created using MeSH (Medical Subject Headings), Dorland’s Illustrated Medical
Dictionary and general English dictionaries. The Centre d’Informatique Hospitaliere of
the Hopital Cantonal de Geneve is working on an electronic archiving environment with
NLP features. In the first phase, patient records were archived. At a later stage the LSP-MLP
was adapted for French, and finally a proper NLP system called RECIT was developed using
a method called Proximity Processing. Its task was to implement a robust and multilingual
system able to analyze and comprehend medical sentences, and to convert the knowledge in
free text into a language-independent knowledge representation. Columbia University in New
York has developed an NLP system called MedLEE (Medical Language Extraction and
Encoding System) that identifies clinical information in narrative reports and transforms the
textual information into a structured representation.
• Approaches:
Natural language processing approaches fall roughly into four categories: symbolic,
statistical, connectionist, and hybrid. Symbolic and statistical approaches have coexisted
since the early days of this field. Connectionist NLP work first appeared in the 1960s.
For a long time, symbolic approaches dominated the field. In the 1980s, statistical
approaches regained popularity as a result of the availability of critical computational
resources and the need to deal with broad, real-world contexts. Connectionist approaches
also recovered from earlier criticism by demonstrating the utility of neural networks in
NLP. This section examines each of these approaches in terms of their foundations, typical
techniques, differences in processing and system aspects, and their robustness, flexibility,
and suitability for various tasks.
Symbolic approaches perform deep analysis of linguistic phenomena and are based on
explicit representation of facts about language through well-understood knowledge
representation schemes and associated algorithms. In fact, the description of the levels of
language analysis in the preceding section is given from a symbolic perspective. The
primary source of evidence in symbolic systems comes from human-developed rules and
lexicons.
Statistical approaches employ various mathematical techniques and often use large text
corpora to develop approximate generalized models of linguistic phenomena based on
actual examples of these phenomena provided by the text corpora without adding
significant linguistic or world knowledge. In contrast to symbolic approaches, statistical
approaches use observable data as the primary source of evidence.
A frequently used statistical model is the Hidden Markov Model (HMM) inherited from
the speech community. HMM is a finite state automaton that has a set of states with
probabilities attached to transitions between states. Although outputs are visible, states
themselves are not directly observable, thus “hidden” from external observations. Each
state produces one of the observable outputs with a certain probability.
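As a minimal illustration of how an HMM is used in NLP, here is a tiny Viterbi-decoding sketch for part-of-speech tagging; the states, probabilities and sentence are made-up toy values, not estimates from any real corpus:

```python
# Tiny Viterbi-decoding sketch for an HMM part-of-speech tagger.
# The states, probabilities and sentence are made-up toy values.
states = ["NOUN", "VERB", "DET"]
start_p = {"NOUN": 0.3, "VERB": 0.1, "DET": 0.6}
trans_p = {                      # P(next state | current state)
    "NOUN": {"NOUN": 0.2, "VERB": 0.7, "DET": 0.1},
    "VERB": {"NOUN": 0.3, "VERB": 0.1, "DET": 0.6},
    "DET":  {"NOUN": 0.9, "VERB": 0.05, "DET": 0.05},
}
emit_p = {                       # P(word | state): hidden states emit words
    "NOUN": {"cat": 0.4, "mouse": 0.4, "chased": 0.0, "the": 0.2},
    "VERB": {"cat": 0.0, "mouse": 0.0, "chased": 0.9, "the": 0.1},
    "DET":  {"cat": 0.0, "mouse": 0.0, "chased": 0.0, "the": 1.0},
}

def viterbi(words):
    # V[t][s] = best probability of any state sequence ending in state s at time t.
    V = [{s: start_p[s] * emit_p[s][words[0]] for s in states}]
    back = [{}]
    for t in range(1, len(words)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][words[t]], p)
                for p in states
            )
            V[t][s], back[t][s] = prob, prev
    # Trace back the most probable hidden state (tag) sequence.
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(words) - 1, 0, -1):
        path.insert(0, back[t][path[0]])
    return path

print(viterbi(["the", "cat", "chased", "the", "mouse"]))
```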
Statistical approaches have typically been used in tasks such as speech recognition, lexical
acquisition, parsing, part-of-speech tagging, collocations, statistical machine translation,
statistical grammar learning, and so on.
Some connectionist models are called localist models, assuming that each unit represents
a particular concept. For example, one unit might represent the concept “mammal” while
another unit might represent the concept “whale”. Relations between concepts are encoded
by the weights of connections between those concepts. Knowledge in such models is
spread across the network, and the connectivity between units reflects their structural
relationship. Localist models are quite similar to semantic networks, but the links between
units are not usually labeled as they are in semantic nets. They perform well at tasks such
as word-sense disambiguation, language generation, and limited inference.
Other connectionist models are called distributed models. Unlike in localist models,
a concept in distributed models is represented as a function of the simultaneous activation of
multiple units, and an individual unit only participates in part of a concept’s representation.
These models are well suited for natural language processing tasks such as syntactic parsing,
limited-domain translation tasks, and associative retrieval.
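A minimal sketch of the distributed idea: each concept is a vector of activations spread over many units, and similarity between concepts falls out of the geometry. The vectors here are made-up toy values, not trained embeddings:

```python
# Minimal sketch of distributed representations: each concept is a vector
# of activations spread over many units. The vectors are made-up toy
# values, not trained word embeddings.
import numpy as np

concepts = {
    "mammal": np.array([0.8, 0.1, 0.7, 0.2]),
    "whale":  np.array([0.7, 0.2, 0.9, 0.1]),
    "car":    np.array([0.1, 0.9, 0.0, 0.8]),
}

def cosine(a, b):
    # Cosine similarity between two activation vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print("mammal vs whale:", round(cosine(concepts["mammal"], concepts["whale"]), 3))
print("mammal vs car:  ", round(cosine(concepts["mammal"], concepts["car"]), 3))
```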
From the above section, we have seen that similarities and differences exist between
approaches in terms of their assumptions, philosophical foundations, and source of
evidence. In addition to that, the similarities and differences can also be reflected in the
processes each approach follows, as well as in system aspects, robustness, flexibility, and
suitable tasks.
Process: Research using these different approaches follows a general set of steps, namely,
data collection, data analysis/model building, rule/data construction, and application of
rules/data in system. The data collection stage is critical to all three approaches although
statistical and connectionist approaches typically require much more data than symbolic
approaches. In the data analysis/model building stage, symbolic approaches rely on human
analysis of the data in order to form a theory while statistical approaches manually define
a statistical model that is an approximate generalization of the collected data.
Connectionist approaches build a connectionist model from the data. In the rule / data
construction stage, manual efforts are typical for symbolic approaches and the theory
formed in the previous step may evolve when new cases are encountered. In contrast,
statistical and connectionist approaches use the statistical or connectionist model as
guidance and build rules or data items automatically, usually in relatively large quantity.
After building rules or data items, all approaches then automatically apply them to specific
tasks in the system. For instance, connectionist approaches may apply the rules to train
the weights of links between units.