Unit-2 AI
Logic Reasoning:
- In each case where the agent draws a conclusion from the available information, that conclusion is guaranteed to be correct if the available information is correct.
- This is the fundamental property of logical reasoning.
Logic in General:
- Logics are formal languages for representing information such that conclusions can be drawn.
- Syntax defines the sentences in the language.
- Semantics define the “meaning” of sentences;
a. i.e., define the truth of a sentence in a world.
- E.g., the language of arithmetic:
a. x + 2 ≥ y is a sentence; x2+y > {} is not a sentence
b. x + 2 ≥ y is true iff the number x + 2 is no less than the number y
c. x + 2 ≥ y is true in a world where x = 7, y = 1
d. x + 2 ≥ y is false in a world where x = 0, y = 6
5. Write a note on Entailment, Model, Soundness and
Completeness.
Entailment:
- Now that we have a notion of truth, we are ready to study logical reasoning, which involves logical entailment between sentences.
- Entailment means that one thing follows from another:
KB |= α
The symbol |= is the double turnstile, read as “entails”.
- Knowledge base KB entails sentence α if and only if α is true in all worlds
where KB is true
Models:
- We use the term “model” to represent “possible world”
- We say m is a model of a sentence α if α is true in model m
- M(α) is the set of all models of α
- Then KB |= α iff M(KB) ⊆ M(α)
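As an illustration of checking entailment by enumerating models, a minimal Python sketch is given below (the symbols P, Q and the example KB are made up for illustration; they are not part of the notes):

from itertools import product

def entails(kb, query, symbols):
    # KB |= alpha iff alpha is true in every model in which KB is true,
    # i.e. M(KB) is a subset of M(alpha).
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False            # found a model of KB in which alpha is false
    return True

def kb(model):                      # example KB: P ^ (P -> Q)
    return model["P"] and ((not model["P"]) or model["Q"])

def alpha(model):                   # example query: Q
    return model["Q"]

print(entails(kb, alpha, ["P", "Q"]))   # True, since {P, P -> Q} |= Q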
Soundness and Completeness:
- An inference algorithm that derives only entailed sentences is
called sound.
- An inference algorithm is complete if it can derive any sentence that
is entailed.
- If KB is true in the real world, then any sentence derived from KB by
a sound inference procedure is also true in the real world.
6. List the basic syntax in Propositional Logic.
The syntax of propositional logic defines the allowable sentences for the
knowledge representation. There are two types of propositions:
a. Atomic Propositions
b. Compound Propositions
Atomic Proposition: Atomic propositions are the simplest propositions. Each consists of a single proposition symbol. These are sentences that must be either true or false.
Compound Proposition: Compound propositions are constructed by combining simpler or atomic propositions, using parentheses and logical connectives.
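One possible way to mirror this distinction in code is sketched below (Python; the tuple representation is an assumption made here for illustration, not something defined in the notes):

# Atomic propositions: single proposition symbols, each either true or false.
P = "It is raining"       # atomic proposition
Q = "It is Sunday"        # atomic proposition

# Compound propositions: built from simpler propositions with logical
# connectives and parentheses, represented here as nested tuples.
sentence = ("implies", ("and", P, Q), ("not", P))   # (P ^ Q) -> ~P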
7. List the basic semantics in Propositional Logic.
The semantics defines the rules for determining the truth of a sentence with respect to a particular model (truth assignment). Each connective is defined by a truth table; for example, the biconditional P ⇔ Q is true exactly when P and Q have the same truth value:
P       Q       P ⇔ Q
false   false   true
false   true    false
true    false   false
true    true    true
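The same table can be generated mechanically; a small Python sketch (illustrative only, not part of the original notes):

from itertools import product

# Enumerate every model (truth assignment) of P and Q and evaluate P <=> Q.
for P, Q in product([False, True], repeat=2):
    print(P, Q, P == Q)   # P <=> Q is true exactly when P and Q agree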
And Elimination:
If an And proposition is true, then any atomic proposition within it is true
as well.
Implication Elimination:
An implication is equivalent to an Or relation between the negated
antecedent and the consequent.
However, consider the following truth table:
P       Q       P → Q   ~P v Q
false   false   true    true
false   true    true    true
true    false   false   false
true    true    true    true
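A quick Python check (illustrative sketch) confirms that the two sentences agree in every model, i.e. they are logically equivalent:

from itertools import product

for P, Q in product([False, True], repeat=2):
    implication = Q if P else True      # truth table of P -> Q
    disjunction = (not P) or Q          # truth table of ~P v Q
    assert implication == disjunction   # identical in all four models
print("P -> Q is equivalent to ~P v Q")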
De Morgan’s Law:
It is possible to turn an And connective into an Or connective by moving the negation: ~(α ^ β) is equivalent to ~α v ~β.
Conversion to Conjunctive Normal Form (CNF):
- Eliminate biconditionals
a. Turn (α ⇔ β) into (α → β) ^ (β → α)
- Eliminate implications
a. Turn (α → β) into ~α v β
- Move negation inwards until only literals are being negated, using De Morgan’s Laws.
a. Turn ~(α ^ β) into ~α v ~β
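If the sympy library is available, the whole conversion can be checked mechanically; a minimal sketch assuming sympy is installed (the library is not mentioned in the notes themselves):

from sympy import symbols
from sympy.logic.boolalg import Equivalent, Implies, to_cnf

a, b, g = symbols("a b g")               # example propositions
sentence = Equivalent(a, Implies(b, g))  # a <-> (b -> g)

# to_cnf applies the same steps: eliminate <->, eliminate ->,
# move negations inwards (De Morgan), then distribute v over ^.
print(to_cnf(sentence))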
Occasionally, through the process of inference by resolution, we might end
up in cases where a clause contains the same literal twice. In these cases,
a process called factoring is used, where the duplicate literal is removed.
Resolving a literal and its negation, i.e. ~P and P, gives the empty clause ().
The empty clause is always false, and this makes sense because it is
impossible that both P and ~P are true. This fact is used by the resolution algorithm.
- To determine KB |= α:
a. Check: is (KB ^ ~α) a contradiction?
If so, then KB |= α
Otherwise, no entailment.
Proof by contradiction is a tool used often in computer science. If our
knowledge base is true, and it contradicts ~α, it means that ~α is false, and,
therefore, α must be true. More technically, the algorithm would perform
the following actions:
- To determine if KB |= α:
a. Convert (KB ^ ~α) to conjunctive normal form
b. Keep checking to see if we can use resolution to produce a new clause.
c. If we ever produce the empty clause, congratulations! We have
arrived at a contradiction, thus proving that KB |= α.
d. However, if contradiction is not achieved and no more clauses can be
inferred, there is no entailment.
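A compact Python sketch of this procedure for propositional clauses is shown below (clauses are frozensets of literal strings, with "~" marking negation; this is an illustrative sketch, not the notes' own implementation). Representing clauses as sets also performs factoring automatically, since duplicate literals collapse:

def negate(literal):
    return literal[1:] if literal.startswith("~") else "~" + literal

def resolve(ci, cj):
    # Every clause obtainable by resolving a complementary pair of literals.
    resolvents = set()
    for literal in ci:
        if negate(literal) in cj:
            resolvents.add(frozenset((ci - {literal}) | (cj - {negate(literal)})))
    return resolvents

def pl_resolution(kb_clauses, negated_query_clauses):
    # KB |= alpha iff KB ^ ~alpha is unsatisfiable, i.e. derives the empty clause.
    clauses = set(kb_clauses) | set(negated_query_clauses)
    while True:
        new = set()
        for ci in clauses:
            for cj in clauses:
                if ci != cj:
                    for resolvent in resolve(ci, cj):
                        if not resolvent:       # empty clause: contradiction
                            return True
                        new.add(resolvent)
        if new <= clauses:                      # nothing new: no entailment
            return False
        clauses |= new

# Example: KB = {P -> Q, P} (i.e. {~P v Q, P}), query alpha = Q, so ~alpha = ~Q.
kb = [frozenset({"~P", "Q"}), frozenset({"P"})]
print(pl_resolution(kb, [frozenset({"~Q"})]))   # True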
10. Label the term “Horn Clauses” and list its properties.
- A Horn clause is a disjunction of literals of which at most one is positive.
- Inference with Horn clauses can be done with the Modus Ponens rule: from a1, …, an and a1 ^ … ^ an → β, infer β.
- Definite clause: Horn clause with exactly one positive literal
a. The positive literal is called the head and the negative literals form the body
b. Definite clauses form the basis for logic programming
- Can be used with forward chaining or backward chaining
- These algorithms are very natural and run in linear time.
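The textbook linear-time algorithm keeps a counter of unsatisfied premises per clause; the naive Python loop below is enough to show the idea of forward chaining over definite clauses (the (body, head) encoding is an assumption made for illustration):

def forward_chaining(rules, facts, query):
    # rules: list of (body, head) pairs, e.g. (["A", "B"], "C") means A ^ B -> C
    # facts: set of known atomic symbols; query: the symbol to prove
    inferred = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in inferred and all(b in inferred for b in body):
                inferred.add(head)      # fire the rule: its head becomes a fact
                changed = True
    return query in inferred

rules = [(["A", "B"], "C"), (["C"], "D")]
print(forward_chaining(rules, {"A", "B"}, "D"))   # True: A, B => C => D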
LONG Answers:
Backward Chaining:
Backward chaining algorithm is a form of reasoning, which starts with
the goal and works backward, chaining through rules to find known facts
that support the goal.
Properties of Backward chaining:
- It is known as a top-down approach
- Backward-chaining is based on modus ponens inference rule.
- In backward chaining, the goal is broken down into sub-goals to prove the facts true.
- It is called a goal driven approach, as a list of goals decides which rules
are selected and used.
- The backward-chaining algorithm is used in game theory, automated theorem-proving tools, inference engines, proof assistants, and various other applications.
- The backward-chaining method mostly uses a depth-first search strategy for proofs.
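For comparison with forward chaining, a goal-directed backward-chaining sketch over propositional definite clauses is given below (illustrative Python using the same hypothetical (body, head) rule encoding; the first-order example that follows is worked by hand, not by this code):

def backward_chaining(rules, facts, goal, visited=None):
    # Prove 'goal' by working backwards: either it is a known fact, or some
    # rule concludes it and every sub-goal in that rule's body can be proved.
    visited = visited or set()
    if goal in facts:
        return True
    if goal in visited:                 # avoid looping on cyclic rules
        return False
    visited = visited | {goal}
    for body, head in rules:
        if head == goal and all(backward_chaining(rules, facts, g, visited) for g in body):
            return True
    return False

rules = [(["A", "B"], "C"), (["C"], "D")]
print(backward_chaining(rules, {"A", "B"}, "D"))  # True: D <- C <- A, B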
Example:
o American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) ...(1)
o Owns(A, T1) ...(2)
o Missile(T1) ...(3)
o ∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ...(4)
o Missile(p) → Weapon(p) ...(5)
o Enemy(p, America) → Hostile(p) ...(6)
o Enemy(A, America) ...(7)
o American(Robert) ...(8)
Backward-chaining proof:
Step-1:
Step-2:
Step-3:
Step-4:
Step-5:
2. Summarize the advantages and disadvantages of
Propositional Logic.
Pros and Cons of propositional Logic:
- Propositional logic is declarative: pieces of syntax correspond to facts.
- Propositional logic allows partial/disjunctive/negated information
(unlike most data structures and databases)
- Propositional logic is compositional: meaning of B1,1 ^ P1,2 is derived from
meaning of B1,1 and P1,2
- Meaning in propositional logic is context-independent (unlike natural language, where meaning depends on context)
- Propositional logic has very limited expressive power (unlike
natural language)
3. Label the term FOL and list its elements.
First-Order Logic (FOL) is an extension of propositional logic that can represent objects, their properties, and the relations among them. Its basic elements are:
Constants       1, 2, A, John, Mumbai, cat, ...
Variables       x, y, z, a, b, ...
Predicates      Brother, Father, >, ...
Functions       sqrt, LeftLegOf, ...
Connectives     ∧, ∨, ¬, →, ⇔
Equality        ==
Quantifier      ∀, ∃
Atomic Sentences:
- Atomic sentences are the most basic sentences of first-order
logic. These sentences are formed from a predicate symbol
followed by a parenthesis with a sequence of terms.
- We can represent atomic sentences as Predicate(term1, term2, …, termN).
Complex Sentences:
- Complex sentences are made by combining atomic sentences
using connectives.
First order logic statements can be divided into two parts:
- Subject: It is the main part of the statement.
- Predicate: It can be defined as a relation, which binds two
atoms together in a statement.
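For example (a standard illustration, not taken from these notes): in the statement “Chinky is a cat”, Chinky is the subject and “is a cat” is the predicate, giving the atomic sentence cat(Chinky); combining atomic sentences with connectives, cat(Chinky) ∧ likes(Chinky, milk) is a complex sentence.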
4. Write a note on Universal Quantifier, Existential
Quantifier and Nested Quantifiers.
Quantifiers allow us to determine or identify the range and scope of the variable in a logical expression.
There are two quantifiers:
a. Universal quantifier: for all, everyone, everything.
b. Existential quantifier: for some, at least one.
Universal Quantifier:
Universal quantifiers specify that the statement within their scope is true for everything, or for every instance of a particular thing.
Universal quantifiers are denoted by the symbol ∀, which looks like an inverted A.
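A typical worked example (standard textbook example, not from the notes above): “All men drink coffee” is written with the universal quantifier as ∀x man(x) → drink(x, coffee), read as “for all x, if x is a man, then x drinks coffee”.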
Existential Instantiation:
- Existential instantiation is also known as Existential Elimination, and it
is a legitimate first-order logic inference rule.
- It can only be used to replace the existential sentence once.
- Although the new KB is not conceptually identical to the old KB, it will
be satisfiable if the old KB was.
- This rule states that, for a new constant symbol c, one can deduce P(c) from a formula of the form ∃x P(x).
- The only constraint with this rule is that c must be a new constant symbol for which P(c) is true.
- It is written like this: from ∃x P(x), infer P(c).
Existential Introduction:
- An existential generalization is a valid inference rule in first-order
logic that is also known as an existential introduction.
- This rule argues that if some element c in the universe of discourse
has the property P, we can infer that something in the universe has the
attribute P.
- It is written like this: from P(c), infer ∃x P(x).
Unification:
- GMP (Generalized Modus Ponens) is a lifted version of Modus Ponens: it raises Modus Ponens from propositional to first-order logic.
- Lifted inference rules require finding substitutions that make different logical expressions look identical.
- This process is called unification and is a key component of all first-order
inference algorithms.
- The unify algorithm takes two sentences and returns a unifier for
them if one exists:
UNIFY(p, q) = θ where Subst(θ, p) = Subst(θ, q)
p                 q                        θ
Knows(John, x)    Knows(John, Jane)        {x/Jane}
Knows(John, x)    Knows(y, OJ)             {x/OJ, y/John}
Knows(John, x)    Knows(y, Mother(y))      {y/John, x/Mother(John)}
Knows(John, x)    Knows(x, OJ)             {fail}
Standardizing apart: renaming variables to avoid name clashes
UNIFY(Knows(John, x), Knows(z, OJ)) = {x/OJ, z/John}
- To unify Knows(John, x) and Knows(y, z), θ = { y/John, x/z } or θ =
{ y/John, x/John, z/John}
- The first unifier is more general than the second
- There is a single most general unifier (MGU)
- MGU = {y/John, x/z}
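A simplified version of the UNIFY algorithm can be sketched in Python (variables are written as lowercase strings and compound terms as tuples; this is an illustrative approximation that omits the occurs-check):

def is_var(term):
    return isinstance(term, str) and term[0].islower()   # e.g. "x", "y"

def substitute(term, theta):
    # Apply the substitution theta to a term, following variable bindings.
    if is_var(term):
        return substitute(theta[term], theta) if term in theta else term
    if isinstance(term, tuple):                # compound term: (functor, args...)
        return tuple([term[0]] + [substitute(arg, theta) for arg in term[1:]])
    return term

def unify(p, q, theta=None):
    theta = dict(theta or {})
    p, q = substitute(p, theta), substitute(q, theta)
    if p == q:
        return theta
    if is_var(p):
        theta[p] = q
        return theta
    if is_var(q):
        theta[q] = p
        return theta
    if isinstance(p, tuple) and isinstance(q, tuple) and len(p) == len(q) and p[0] == q[0]:
        for a, b in zip(p[1:], q[1:]):
            theta = unify(a, b, theta)
            if theta is None:
                return None
        return theta
    return None                                # failure

# Knows(John, x) unified with Knows(y, Mother(y))  ->  {y: John, x: Mother(John)}
print(unify(("Knows", "John", "x"), ("Knows", "y", ("Mother", "y"))))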
Forward Chaining vs. Backward Chaining:
- Forward chaining: its goal is to get the conclusion. Backward chaining: its goal is to get the possible facts or the required data.
- Forward chaining is slow, as it has to use all the rules. Backward chaining is fast, as it has to use only a few rules.
- Forward chaining operates in the forward direction, i.e., it works from the initial state to the final decision. Backward chaining operates in the backward direction, i.e., it works from the goal to reach the initial state.
- Forward chaining is used for planning, monitoring, control, and interpretation applications. Backward chaining is used in automated inference engines, theorem-proving tools, proof assistants, and other artificial intelligence applications.
Hence, we have achieved the given goal with the help of Proof by
Contradiction. Thus, it is proved that Gita likes almond.
10. Consider the following statements.
a. John likes all kinds of food.
Step-2: Convert the FOL statements into CNF form
- Eliminate all implications (→) and rewrite
∀x ¬ food(x) V likes(John, x)
food(Apple) Λ food(vegetables)
∀x ∀y ¬ [eats(x, y) Λ ¬ killed(x)] V food(y)
eats (Anil, Peanuts) Λ alive(Anil)
∀x ¬ eats(Anil, x) V eats(Harry, x)
∀x ¬[¬ killed(x)] V alive(x)
∀x ¬ alive(x) V ¬ killed(x)
likes(John, Peanuts).
- Move negation (~) inwards and rewrite
∀x ¬ food(x) V likes(John, x)
food(Apple) Λ food(vegetables)
∀x ∀y ¬ eats(x, y) V killed(x) V food(y)
eats (Anil, Peanuts) Λ alive(Anil)
∀x ¬ eats(Anil, x) V eats(Harry, x)
∀x killed(x) V alive(x)
∀x ¬ alive(x) V ¬ killed(x)
likes(John, Peanuts).
- Rename variables or standardize variables
∀x ¬ food(x) V likes(John, x)
food(Apple) Λ food(vegetables)
∀y ∀z ¬ eats(y, z) V killed(y) V food(z)
eats (Anil, Peanuts) Λ alive(Anil)
∀w¬ eats(Anil, w) V eats(Harry, w)
∀g killed(g) V alive(g)
∀k ¬ alive(k) V ¬ killed(k)
likes(John, Peanuts).
- Eliminate existential quantifiers (Skolemization).
In this step, we would eliminate any existential quantifier ∃; this process is known as Skolemization. Since there is no existential quantifier in this problem, all the statements remain the same in this step.
- Drop universal quantifiers.
In this step we drop all universal quantifiers, since all the statements are implicitly universally quantified and the quantifiers are no longer needed.
¬ food(x) V likes(John, x)
food(Apple)
food(vegetables)
¬ eats(y, z) V killed(y) V food(z)
eats (Anil, Peanuts)
alive(Anil)
¬ eats(Anil, w) V eats(Harry, w)
killed(g) V alive(g)
¬ alive(k) V ¬ killed(k)
likes(John, Peanuts).
- Distribute conjunction ∧ over disjunction V.
This step will not make any change in this problem.
Step-3: Negate the statement to be proved
In this step, we apply negation to the conclusion statement, which will be written as ¬likes(John, Peanuts).
Step-4: Draw Resolution graph
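Since the resolution graph itself is a diagram, one possible sequence of resolution steps it depicts is written out below (one valid ordering; the substitutions shown are the unifiers used at each step):
- ¬likes(John, Peanuts) resolves with ¬food(x) V likes(John, x) using {x/Peanuts}, giving ¬food(Peanuts).
- ¬food(Peanuts) resolves with ¬eats(y, z) V killed(y) V food(z) using {z/Peanuts}, giving ¬eats(y, Peanuts) V killed(y).
- ¬eats(y, Peanuts) V killed(y) resolves with eats(Anil, Peanuts) using {y/Anil}, giving killed(Anil).
- killed(Anil) resolves with ¬alive(k) V ¬killed(k) using {k/Anil}, giving ¬alive(Anil).
- ¬alive(Anil) resolves with alive(Anil), giving the empty clause.
Since the empty clause has been derived, KB ∧ ¬likes(John, Peanuts) is a contradiction, and hence KB |= likes(John, Peanuts), i.e. John likes peanuts.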