Belief Revision for Resource
Bounded Agents
Mark Jago
School of Computer Science
University of Nottingham
Nottingham, UK
[email protected]
Abstract
Since the influential work of Alchourrón, Gärdenfors and
Makinson, accounts of belief revision have assumed that
agents are ideal, in the sense that they know all consequences of their beliefs. However, in some cases, this assumption is not warranted. This note represents preliminary steps taken to find an account of belief revision suitable for resource bounded agents. We concentrate on the
case of time-bounded reasoners and the operation of contraction. Our aim is to find operations that, if given adequate resources, will find a subset of the agent’s beliefs
which does not entail the contraction formula, but which
does so in a way that allows an agent to respect a given
deadline.
1. Introduction
This note represents preliminary steps taken to find an
account of belief revision suitable for resource bounded
agents.
Since the influential work of Alchourrón, Gärdenfors and
Makinson [1], accounts of belief revision have assumed
that agents are ideal, in the sense that they know all consequences of their beliefs. So defined, an agent’s belief set
is assumed to be closed under consequence and so operations on that base are concerned with preserving consistency. However, the requirement of consistency has a high
computational cost: if we add a new belief, for example, we
may have to check every subset of the resulting set in order to be able to best decide which to believe.
It is not uncommon for research in logics for belief,
knowledge and action (see, for example, [4, 5, 7]) to make
the strong assumption that whatever reasoning abilities an
agent may have, the results of applying those abilities to a
given problem are available immediately. As such, one need
not consider the complexity of a revision operation in situations in which these assumptions are warranted.
However, there are cases in which such assumptions cannot be made, for example, where the time taken to do deliberation is of critical importance. One obvious example is
that of planning in a dynamic environment: an agent may be
able to produce a perfectly good plan to reach its goal, but
if planning takes too long the result may be irrelevant, e.g.,
if the problem that the agent is trying to solve has changed.
Such considerations are addressed, for example, in Timed
Reasoning Logics [2] and [3].
The standard AGM approach has been criticised on several fronts, and accordingly modified; here, we argue that any AGM-based approach cannot adequately deal with various cases of resource boundedness. We restrict our account of resource boundedness to time-bounded reasoners, namely agents which are able to produce plans or derive consequences of their beliefs but take time to deliberate. However, the considerations we address apply equally to agents with limited memory resources.
In the sequel we concentrate on the operation of contraction; that is, on finding a subset of the agent's beliefs which does not entail the formula being contracted. One advantage
of the AGM approach is that it is based on several highly intuitive postulates, which govern what kind of operation can
count as a contraction operation. In section 2 below, we argue that, intuitive as these postulates are, they cannot capture the notion of resource boundedness. We suggest operations in which the postulates may turn out to be false, but
which we might nevertheless want to term contraction operations.
2. Belief Revision using AGM Postulates
In this section, we briefly review some of the most common approaches to belief revision and indicate a few of the
problems we foresee in relation to resource bounded agents.
The accounts we look at are based on postulates which are intuitively very appealing. However, when we consider the time and space limitations which real-world agents encounter, the postulates may well lose their appeal. By beginning this note with this critical approach, we also anticipate the approach we present in section 4.
Any function satisfying K1-K6 below is said to be a contraction operation, written −̇ (B −̇ α can be read as "contract B by α").

K1 B −̇ α is a belief set (closure)
K2 B −̇ α ⊆ B (inclusion)
K3 If α ∉ B, then B −̇ α = B (vacuity)
K4 If not ⊢ α, then α ∉ B −̇ α (success)
K5 If α ∈ B, then B ⊆ (B −̇ α) + α (recovery)
K6 If ⊢ α ↔ β, then B −̇ α = B −̇ β (equivalence)
Consider Success (K4): since we are allowed to contract a base by any formula in our language, including ⊥, we can define an operation B! =df B −̇ ⊥, or "make B consistent!" Can we guarantee that the operation will be successful in purging our belief set of inconsistency, as K4 says it will? That may well depend on the resources available to us; the B! operation may simply be too expensive.
In an account of belief revision which centres on resource bounded agents, K4 could be replaced with a postulate to the effect that how successful one is depends on
the resources available for belief revision. Note also that the
qualification that α is not valid may be dropped in the case
of agents whose beliefs are not closed under consequence
(non-omniscient agents); indeed, an agent may well delete
some valid beliefs (i.e. when it doesn’t believe them to be
valid).
Recovery (K5) can also be seen to be false for the majority of resource bounded agents. Suppose an agent believes
α and β and derives α ∧ β, keeping track of the dependency.
If the agent deletes α and also removes its beliefs that depended on α, it will also delete α ∧ β. By re-introducing
α, it would be wrong to insist that the agent also believe
α ∧ β immediately (although, for a class of agents with
logically complete deduction rules we can say that, given
enough space and time, the agent should eventually recover
α ∧ β).
Lastly, equivalence (K6) can be seen to be inaccurate for
the general resource bounded case, for an agent may not
know that α and β are equivalent. Again, we can introduce
a restricted principle for the class of agents with a logically
complete set of deduction rules: given enough memory, any
such agent who deletes α should also delete β, eventually.
It is this notion, that an operation on a belief set may have
different consequences for α than for β, even when α ≡ β,
that motivates the approach we describe below. In section
4, we investigate this idea in more detail; we then present
our approach to resource bounded belief revision and prove
a set of results concerning some particular types of knowledge base.
3. Related Work
Wasserman [9] considers the plight of resource bounded
agents and develops the notion of compartments of a belief
set, based on some notion of relevance between formulas
in a knowledge base. A consequence operator is defined in
terms of compartments, the idea being that, given a formula
φ, we can consider the consequences of only those formulas relevant to φ in the knowledge base (the φ-compartment
of the base). Since relevance is a matter of degree, there
is a family of such operators, each considering the consequences of formulas of relevance degree n to φ in the
knowledge base. Such operations do not guarantee the consistency of a knowledge base after revision; they only guarantee consistency of those formulas in the φ-compartment
of the base.
The suggestion is that, in resource bounded situations,
one can live with the complexity of the classic approach
(i.e., checking entailments in every subset of the base) by
defining revision in terms of small compartments. In other
words, the complexity of the suggested method is exponential in the size of the chosen compartment, rather than in the
size of the agent’s entire knowledge base.
The fewer resources an agent has, the smaller the compartment needs to be. An agent with very limited resources trying to revise its knowledge base by a formula φ should therefore consider only formulas of a very high degree of relevance to φ; an agent with slightly more resources can afford to consider less relevant formulas in its revision.
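To fix intuitions, the compartment idea can be sketched with one simple choice of relevance relation (direct sharing of an atom), although the cited work allows more general relevance relations; the encoding and function names here are our own illustration, not definitions from [9]:

```python
# Illustrative sketch of degree-bounded compartments.
# A formula is represented just by the set of atoms it mentions;
# two formulas count as "directly relevant" when they share an atom.

def compartment(phi_atoms, base, degree):
    """Return the formulas of the base reachable from phi
    in at most `degree` steps of shared-atom relevance."""
    selected, frontier = set(), set(phi_atoms)
    for _ in range(degree):
        newly = [f for f in base if f not in selected and f & frontier]
        if not newly:
            break
        selected.update(newly)
        for f in newly:
            frontier |= f          # relevance spreads through shared atoms
    return selected

base = [frozenset({"p", "q"}), frozenset({"q", "r"}), frozenset({"s"})]
# degree 1: only formulas sharing an atom with {p}
assert compartment({"p"}, base, 1) == {frozenset({"p", "q"})}
# degree 2: relevance spreads through q to {q, r}; {s} stays outside
assert compartment({"p"}, base, 2) == {frozenset({"p", "q"}),
                                       frozenset({"q", "r"})}
```

Revision restricted to such a compartment is exponential only in the compartment's size, as described above.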
However, there may well be cases in which even this approach is not fine-grained enough to account for some gradations of resource boundedness. Because the complexity
of the approach is exponential in the number of relevant
formulas, a relatively small increase in the number of relevant formulas may make a previously possible revision become too expensive; a single formula may be added to an
agent’s knowledge base which shows previously unrelated
formulas to be, in fact, closely related, and this single addition could cause an explosion in the amount of resource
required to perform revision on those related formulas. Alternatively, an agent may receive some information which
it finds to be immediately relevant to a great deal of information in its knowledge base, for which the agent needs to
make a very quick analysis.
One might suggest that an approach to these problems
would be to first consider the amount of resource available,
and then to find a compartment of the knowledge base of
a suitably small size. But how can we guarantee there will
be one, without resorting to picking formulas in an ad hoc
way?
It is these very considerations which motivate the approach we now present (section 4). We discuss how our approach can deal with cases in which we have a known fixed amount of resource, and have to tailor our revision accordingly, in section 4.6.
Some of the problems we consider are similar to those
discussed in [8]; however, unlike that work, we are not concerned with finding a minimal set of premises from which a
contraction formula follows.
4. Resource Bounded Belief Revision Systems
In using the term agent we have in mind rule-based
agents; moreover, we base our approach on agents which
have a fixed set of rules, or knowledge base. Each agent
takes its given knowledge base for granted, and so we assume that no agent has been supplied with undesirable rules
such as ¬p ← p or p ← p, and that the transitive closure of ← (which we write ←∗) is similarly free of such troublesome schemata. By turning a blind eye to any problems that
an agent’s program may have, we gain a key advantage: we
only need concern ourselves with operations involving literals.
One immediate objection may be that we cannot define the operation B!, usually defined as B −̇ ⊥ ("make B consistent!"). We cannot define B! because ⊥ stands for any arbitrary contradiction. However, we may wish to question the use of an operation as general as "make this belief base consistent!" Since our concern is with resource bounded agents, we should always keep an eye on being as economical as we can; we may, therefore, satisfy ourselves by replacing B! with an operation which searches for particular contradictions {p, ¬p} which are of interest to us, deciding which of {p, ¬p} we can live without most easily and then contracting by the latter.
We work with the notion of a knowledge base (see definition 2 below), and assume the rules therein are not questioned when revising KB. In revising an agent’s beliefs, the
only changes that can be made are the addition or deletion
of literals from KB. This being so, literals behave as if they
were assumptions, to be confirmed (and thus kept) or falsified (and thus retracted) at some point in the future.
We should perhaps mention that the kind of agent we
have in mind is only interested in what facts (i.e., what literals) it can derive from its beliefs, with purely implicative
reasoning, using the rules it has been provided with. We
do not consider an agent which can derive new rules from
the rules it already has (by resolution, for example). Future
work may consider a full first-order reasoner, in which case
we must address the issue of rules being given which are
not in minimal form. For the time being, however, we consider a simple agent who is only interested in acquiring and
revising new beliefs qua facts.
One major consideration which, we feel, lends support
to this approach is the notion that beliefs are sentential,
not propositional, objects. That is, in ascribing an attitude
to an agent, we ascribe an attitude towards some particular state of affairs, entertained via some mode of presentation or sense. As a consequence, we should not in general
expect an operation on a sentence σ, whose content is the
proposition that Φ, to affect every sentence σ ′ whose content is also that Φ.
We now give several definitions and results concerning
this notion of resource bounded belief revision.
4.1. Knowledge Bases
Definition 1 A rule R is an implication whose antecedent
is a conjunction of literals and whose consequent is a single
literal.
We write rules using ← with conjunction of antecedents implicit, and refer to the consequent of a rule Ri as its head,
denoted head(Ri ), and to the set of its antecedents as its
body, denoted by body(Ri ). We also write P ∪ P − to denote the set of literals over P.
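In code, a rule in this sense can be represented directly. The sketch below is our own illustrative encoding (literals as strings, negation written "~"), used in the examples throughout; it simply fixes the head/body vocabulary:

```python
from dataclasses import dataclass

# A literal is a string such as "p" or "~p"; a rule has a single
# literal head and a set of literals as its (implicitly conjoined) body.
@dataclass(frozen=True)
class Rule:
    head: str
    body: frozenset

def head(rule):
    return rule.head

def body(rule):
    return rule.body

r = Rule("p", frozenset({"q", "~r"}))   # the rule p <- q, ~r
assert head(r) == "p"
assert body(r) == {"q", "~r"}
```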
Definition 2 A knowledge base KB over P is the union of a set of rules of the form

    λ ← λ1 · · · λk    (1)

with conjunction of λ1 · · · λk implicit, and a set of literals λ1 · · · λm ∈ P ∪ P−.

Any set of propositional formulas over P and the usual Boolean connectives can be written as a knowledge base under this definition. (1) is equivalent to

    λ ∨ ¬(λ1 ∧ · · · ∧ λk)    (2)

Hence, one way to convert an arbitrary formula φ into a knowledge base is to first write φ in conjunctive normal form (CNF), then write each clause i as a rule Ri using (2) and De Morgan's rules.
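As a hedged sketch of this conversion (our own string encoding, negation written "~"; the choice of which clause literal becomes the head is arbitrary, here the first in sorted order):

```python
def negate(lit):
    """Negate a string literal: p <-> ~p."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def clause_to_rule(clause):
    """Turn a CNF clause into a rule: pick one literal as the head and
    negate the remaining literals to form the body (the De Morgan step
    applied to the clause read as head ∨ ¬(body))."""
    lits = sorted(clause)
    head, rest = lits[0], lits[1:]
    return (head, frozenset(negate(l) for l in rest))

# The clause p ∨ ¬q ∨ ¬r becomes the rule p <- q, r.
h, b = clause_to_rule({"p", "~q", "~r"})
assert h == "p" and b == {"q", "r"}
```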
Definition 3 Given a rule R = λ ← λ1 · · · λk ∈ KB, if we
have λi ∈ KB for each 1 ≤ i ≤ k, then we write KB ⊢ λ.
We write KB ⊢ Λ to mean that, for each literal λ ∈ Λ, we
have KB ⊢ λ. We write ⊢∗ to denote the transitive closure
of ⊢.
Definition 4 ←KB is the transitive closure of ← with respect to the rules R ∈ KB:
- If (λ ← Λ) ∈ KB then λ ←KB Λ; and
- if λ1 ←KB Λ, . . . , λn ←KB Λ and
(λ0 ← λ1 , · · · , λn ) ∈ KB, then λ0 ←KB Λ.
Definition 5 We say a literal λ is a consequence of a knowledge base KB whenever λ ←KB Λ and Λ ⊆ KB, or equivalently when KB ⊢∗ λ.
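Definitions 3-5 amount to a forward-chaining fixpoint, which can be sketched as follows (our own encoding: a rule is a (head, body) pair, and KB is split into rules and literal facts):

```python
def consequences(rules, facts):
    """Compute the set of literals λ with KB ⊢* λ by repeatedly firing
    rules whose bodies are already derived, until a fixpoint."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return derived

rules = [("p", frozenset({"q", "r"})), ("q", frozenset({"r"}))]
facts = {"r"}
# r gives q, and then q together with r gives p
assert consequences(rules, facts) == {"p", "q", "r"}
```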
In what follows, we impose the following simplifying
condition on KB:
Condition 1 For any subset Λ of P ∪ P− and any literal λ ∈ P ∪ P−, if λ ←KB Λ and there is no Λ′ such that Λ′ ⊂ Λ and λ ←KB Λ′, then λ ∉ Λ.
This condition says that λ does not depend upon itself
(amongst other literals), i.e., the structure of any derivation
of λ cannot be cyclic. Future work may drop this assumption; we use it here merely to simplify our approach.
We now investigate the effect that various restrictions on
KB have on the complexity of the contraction operation.
4.2. 1-Dependent Rule Bases
Definition 6 A 1-dependent knowledge base KB is a
knowledge base such that, for every literal λ ∈ P ∪ P −
and every rule R ∈ KB, if head(R) = λ, then there exists
no rule R′ ∈ KB distinct from R such that head(R′ ) = λ.
In other words, each literal can appear as the head of at most
one rule in a 1-dependent rule base.
Definition 7 Given a 1-dependent knowledge base KB
over P, a dependency tree for a literal λ ∈ P ∪ P − is a
tree τ λ such that:
1. the nodes of τ λ are literals ξ ∈ P ∪ P − ;
2. the root of τ λ is λ; and
3. A node ν of τ λ is a child of a node µ of τ λ precisely
when there is a rule R ∈ KB such that µ = head(R)
and ν ∈ body(R).
The level of a node of τ λ is such that level(root) = 1
and level(ν) = level(µ) + 1 whenever parent(ν) =
µ.
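For 1-dependent bases each literal has at most one defining rule, so the tree of definition 7 unfolds by a simple recursion. A sketch under the same illustrative string encoding (names ours; condition 1 is assumed, so the recursion terminates):

```python
def dependency_tree(rules, lam):
    """Unfold the dependency tree for literal `lam` over a 1-dependent
    base: each node is a pair (literal, list of child subtrees)."""
    by_head = {head: body for head, body in rules}   # 1-dependent: unique rule per head
    def unfold(lit):
        # Children are the body literals of the (unique) rule for `lit`,
        # if any; leaves are literals heading no rule.
        children = [unfold(b) for b in sorted(by_head.get(lit, ()))]
        return (lit, children)
    return unfold(lam)

rules = [("p", frozenset({"q", "r"})), ("q", frozenset({"s"}))]
tree = dependency_tree(rules, "p")
assert tree == ("p", [("q", [("s", [])]), ("r", [])])
```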
Proposition 2 For any literal λ ∈ P ∪ P− and any 1-dependent knowledge base KB over P, the dependency tree τλ for λ can only contain λ at level 1 (i.e., λ only appears as the root of τλ).
Proof. Suppose, contrary to proposition 2, that λ appears at
level l > 1 in τ λ . By proposition 1, there is a set of literals
Λ appearing at level l in τ λ such that λ ∈ Λ and λ ←KB Λ.
However, this contradicts condition 1 and so there can be no
such tree for a 1-dependent knowledge base. ⊣
Proposition 2 says that no subtree of a dependency tree τ
repeats itself; this corresponds to condition 1.
Definition 8 A path π through a dependency tree τ is a sequence of nodes [ν1 . . . νk] such that each νi (1 ≤ i < k) is the parent of νi+1, ν1 = root and νk is a leaf.
This is unusual terminology, as usually a path is defined
without the latter two restrictions (i.e., usually a path need
not begin with the root or end at a leaf). However, since
these are the only paths we will make use of here, we hope
our abuse of terminology will be forgiven.
We can define a total order ≺ on nodes of a tree τ, for example by ordering the children of each node from left to right and setting ν1 ≺ ν2 whenever level(ν1) < level(ν2), or when level(ν1) = level(ν2) and ν1 appears to the left of ν2 in τ.
Consider a dependency tree τ in which a pair of subtrees share a literal. An example of such a tree is tree ➁ in figure 1. We write simplify(τ) to denote the operation of "glueing together" all such subtrees of τ, resulting in the directed acyclic graph (DAG) ➂ of figure 1 (see footnote 1). This ensures that no literal appears at more than one node of simplify(τ).
Proposition 1 If the literals that appear at level l in a dependency tree τ over a 1-dependent knowledge base KB are precisely λ1 · · · λk, then for any literal λ appearing at any level j ≤ l, there exists a set of literals Λ ⊆ {λ1 · · · λk} such that λ ←KB Λ.

Proof. The claim clearly holds when j = l. So suppose that, for every j < i ≤ l, we have ξ ←KB {λ1 . . . λk} whenever ξ appears at level i in τ. Now consider any literal ζ appearing at level j in τ. By hypothesis, every child ζ′ of ζ is such that ζ′ ←KB {λ1 . . . λk}. For some rule R ∈ KB such that head(R) = ζ, body(R) = children(ζ) (definition 7). Hence, ζ ←KB {λ1 . . . λk}. ⊣
Corollary 1 λ is a consequence of a 1-dependent knowledge base KB if the leaves of the dependency tree for λ, τ λ ,
are λ1 . . . λk and each λ1≤i≤k ∈ KB.
Proof. Immediate from proposition 1 and definitions 4 and
5. ⊣
[Figure 1 shows three diagrams: tree ➀, with root λ, children λ1 and λ2, and a single child ξ below λ1; tree ➁, in which both λ1 and λ2 have a child node ξ; and the DAG ➂, in which the two ξ nodes of ➁ have been glued into a single node shared by λ1 and λ2.]

Figure 1. Tree ➁ simplifies to the DAG ➂
Proposition 3 simplify preserves the structure of trees
in the sense that, for any path π ∈ τ , we have π ∈
simplify(τ ).
Proof. Clearly, π ∈ simplify(τ) when the number of rules in KB, k, is 1. So assume that, for all i < k, the subpath [ν1 · · · νi] of π is in simplify(τ); we show π must be found in simplify(τ) as well. We consider a tree in which there are two nodes µ, µ′ = λ and µ ≺ µ′. Take any path π = [ν1 · · · νk] through τ which marks first µ and then µ′. By hypothesis, [ν1 · · · νi] ∈ simplify(τ) and so we only need consider the case µ′ = νk. Since τ is a simplified DAG, µ′ ∈ children(µ). Since µ = νj for some 1 ≤ j ≤ i, we have [ν1 · · · µ] ∈ simplify(τ) and, given definition 7, [ν1 · · · µ′] = π ∈ simplify(τ). ⊣

Footnote 1: As the examples ➁ and ➂ show, the operation of simplifying a tree τ might not itself result in a tree. By definition, a tree is a DAG in which every node has at most one parent; when we simplify, we can only guarantee a DAG.
How complex is the operation of simplifying a tree? For
a knowledge base KB containing k rules, the longest of
which contains l literals, an unsimplified tree over KB may
contain anything up to k · l nodes. Thus, we have an upper bound of (k · max{|R| : R ∈ KB})2 on the number of
operations required to simplify a given dependency tree τ .
We will soon improve on this upper bound; first, we give a complexity measure for simplified dependency trees over 1-dependent rule bases.
Proposition 4 The maximum number of nodes in a simplified dependency tree simplify(τ ) over a 1-dependent
knowledge base KB containing k rules, the largest of which
contains l literals in its body, is O(k · l).
Proof. We can establish the result by showing that there are at most k internal nodes in simplify(τ) (i.e., nodes which are not leaves). Suppose there are more than k internal nodes in simplify(τ). Since only k literals are the head of some rule R ∈ KB, some literal λ must appear at more than one node in simplify(τ). Consider a node ν1 = λ such that, for every other node ν′ = λ, ν1 ≺ ν′. Suppose for some ν2 = λ we have parent(ν2) = µ; since τ is a simplified DAG, ν1 is the only node ν′′ such that ν′′ ∈ children(µ) and ν′′ = λ. Hence, there can be no such node ν2 and so there can be at most k internal nodes, and therefore at most O(k · l) nodes, in simplify(τ). ⊣
We can thus improve on the upper bound given above
for simplifying a tree τ , by performing the simplification as
we build a tree, node by node. Given a total order ≪ on
literals, we can then denote literals by integers. For example, suppose for the literals found in the dependency tree ➀
above, we have λ ≪ λ1 ≪ λ2 ≪ ξ, and so use the integers 0, 1, 2 and 3 to denote these literals, respectively.
When we add a node ν according to definition 7, we note down the position of ν relative to root in a constant-time lookup table (assume we order the children of a node 1, 2, 3, . . . ).² So suppose we have built tree ➀ (figure 1) and we have recorded the following information:

    literal   position
    0         root
    1         1.root
    2         2.root
    3         1.1.root

Suppose we can apply a rule λ2 ← ξ; since we know that ξ is denoted by the integer 3, we jump to cell 3 in our lookup table, where we find the entry 1.1.root. Therefore, we do
not add a new ξ node to obtain ➁; instead, we only add
an arc from 2.root to 1.1.root, i.e., from λ2 to ξ, to obtain
graph ➂, the simplified graph of tree ➁. Since we have an
upper bound of O(k · l) nodes in the simplified tree (proposition 4), the upper bound for the number of operations required for the simplification of a tree τ is O(k · l).
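The build-and-lookup procedure can be sketched as follows; here the lookup table is simply a hash set of literals already placed (our own encoding; the paper's integer-indexed table is replaced by a dictionary for brevity):

```python
def simplified_dag(rules, root):
    """Build the simplified dependency DAG directly: `arcs` maps each
    literal to the set of its children; `seen` is the constant-time
    lookup table ensuring no literal ever gets a second node."""
    by_head = {h: b for h, b in rules}
    arcs, seen = {}, set()
    def expand(lit):
        if lit in seen:          # table hit: reuse the existing node
            return
        seen.add(lit)
        arcs[lit] = set(by_head.get(lit, ()))
        for child in arcs[lit]:
            expand(child)
    expand(root)
    return arcs

# Tree ➁ of figure 1: both λ1 and λ2 depend on ξ; in the DAG ξ is shared.
rules = [("l", frozenset({"l1", "l2"})),
         ("l1", frozenset({"x"})),
         ("l2", frozenset({"x"}))]
dag = simplified_dag(rules, "l")
assert dag == {"l": {"l1", "l2"}, "l1": {"x"}, "l2": {"x"}, "x": set()}
```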
4.3. Contraction for 1-Dependent Bases
To contract a 1-dependent knowledge base KB by a literal λ, we find a path π in the dependency tree τλ for λ over KB and, for each literal ξ ∈ π, we check whether ξ ∈ KB and, if it is, we delete it. We write KB −̇π λ to denote this operation of contraction of a knowledge base KB by a literal λ using a path π, defined as:

    KB −̇π λ = {φ | φ ∈ KB ∧ φ ∉ π}
We now show that this method using paths is adequate, in
the following sense.
Proposition 5 Given a dependency tree τλ for a literal λ, and any path π ∈ τλ,

    KB −̇π λ ⊬∗ λ
Sketch of the Proof. By induction on the number of rules k in KB. When k = 1, any tree τλ is just the root node λ, and so all paths select λ. We then assume the result holds for all trees built with only j < k rules. We take any tree τ′ built with k − 1 rules and consider what happens when we expand a node using the kth rule R, obtaining the tree τ. We have two cases: either (1) the chosen path π in τ′ also runs to a leaf of τ and then, by hypothesis, KB \ π ⊬∗ λ; or (2) we have to extend π by some ξ in the body of R to obtain a path π′ through τ. Then KB \ π′ ⊬ head(R) and so, by hypothesis, KB \ π′ ⊬∗ λ. ⊣
The idea in this proof is that, given a tree τ λ such that
any path π ∈ τ λ allows us to contract by λ, we expand a
leaf of τ λ and look at any path passing through to one of
the new leaves (see figure 2).
Footnote 2: To look up a value in the table, we do not have to search through until we find the one we want; if we are looking up node n, we simply jump to cell n in the table. The lookup is thus a constant-time operation.

[Figure 2 shows two dependency trees for λ. On the left, λ has children λ1 (reached via path a) and λ2 (via path b). On the right, λ1 has been expanded with children λ3 and λ4, giving paths a1 and a2, while path b to λ2 remains.]

Figure 2. Illustration of proposition 5
In the tree to the left of figure 2, we can choose either path a or path b in order to contract by λ. By adding a new rule λ1 ← λ3, λ4, we can expand λ1 (as seen in the tree to the right of figure 2); path b is still an available option. Otherwise, we have to remove λ1, and we see that we have a free choice between paths a1 and a2.
The precise way in which we select a path π from a dependency tree lies outside the scope of this paper. We do not, for example, guarantee that a minimal set of literals will be removed (the task of finding a minimal set is NP-complete). In general, there are two principles which intuition suggests should guide a revision. The first is the principle of conservativeness, according to which a revision operation should alter one's beliefs as little as possible. The second is the principle of prioritisation, which comes into play when we have (possibly partial) information concerning priority relationships between beliefs. The principle states that we should never delete a belief in favour of another unless the latter is at least as favourable as the former. However, these principles can conflict with one another. Moreover, because of the complexity involved in adhering to such rules, we may view them as heuristics rather than hard-and-fast rules in the case of resource bounded agents. Such considerations are left for further work.
4.4. n-Dependent Rule Bases
We now look at how relaxing definition 6 affects these
results. In this section, we consider rule bases where each
literal can appear at the head of at most n rules.
We extend the notion of a simplified tree simplify(τ) by adding a proviso to the effect that we always distinguish ̺-arcs from ς-arcs whenever ̺ ≠ ς. We can now give a result similar to proposition 4 above for n-dependent rule bases.
Definition 9 An n-dependent knowledge base KB may contain rules of the form (1) above such that, for every literal λ ∈ P ∪ P− and all rules R1, . . . , Rn ∈ KB, if head(R1) = . . . = head(Rn) = λ, then there exists no rule R′ ∈ KB distinct from each of R1, . . . , Rn such that head(R′) = λ.
Since a literal may now depend on up to n rules, we
should distinguish these possibilities in our dependency
trees. We may assume we have a total order ≺ on rules that
enables us to pick a first rule and a second and . . . upon
which a literal depends (if any).
Definition 10 Given an n-dependent knowledge base KB
over P, a dependency tree τ λ for a literal λ is a structure
similar to a 1-dependent tree, except that:
3. a node ν of τλ is a child of a node µ when there is a rule R ∈ KB such that µ = head(R) and ν ∈ body(R). If R is the nth such rule, we label the arc from µ to ν 'n' and say that ν is an n-child of µ.
To visually signal that arcs in a tree have the same label, we draw ⌣ between them. Thus, if there is a rule λ ← λ1, · · · , λk, we draw ⌣ between the arcs from λ to each λi (1 ≤ i ≤ k). Figure 3 illustrates this for the case of a 2-dependent knowledge base KB, with arcs labelled either "a" or "b".
    KB = {λ ← λ1, λ2,   λ ← ξ1, ξ2}

[Figure 3 shows the dependency tree for λ over KB: λ has a-children λ1 and λ2, whose arcs are joined by ⌣, and b-children ξ1 and ξ2, whose arcs are likewise joined by ⌣.]

Figure 3. A 2-dependent Tree
To find the consequences of an n-dependent rule base, given a dependency tree τ, we proceed as follows. We begin by picking a path π through τ; if π visits a ̺-child at level m, we add all paths visiting ̺-children at level m. We do this for all levels. This gives us a set of literals Λ such that, if Λ ⊆ KB, then KB ⊢∗ λ (we state this without proof). In fact, by collecting these paths together, we have built a 1-dependent subtree of τ.
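This procedure can be read as a search for a fully grounded 1-dependent subtree: a literal is a consequence of an n-dependent base precisely when some choice of one rule per literal has an entirely derivable body. A minimal sketch of that reading (our own encoding and names, not an algorithm from the paper):

```python
def derivable(rules_by_head, facts, lit, seen=frozenset()):
    """λ is derivable if it is a fact, or some rule for λ has an
    entirely derivable body (i.e., choosing one rule per literal
    yields a grounded 1-dependent subtree)."""
    if lit in facts:
        return True
    if lit in seen:              # condition 1: no cyclic derivations
        return False
    return any(all(derivable(rules_by_head, facts, b, seen | {lit})
                   for b in body)
               for body in rules_by_head.get(lit, []))

# A 2-dependent base: two rules with head λ.
rules_by_head = {"lam": [frozenset({"lam1", "lam2"}),
                         frozenset({"xi1", "xi2"})]}
# λ is derivable through its second rule even though λ1 is not a fact
assert derivable(rules_by_head, {"xi1", "xi2"}, "lam")
assert not derivable(rules_by_head, {"lam1"}, "lam")
```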
Proposition 6 For any literal λ ∈ P ∪ P− and any n-dependent knowledge base KB over P, the dependency tree τλ for λ over KB can only contain λ at level 1.

Proof. Similar to the proof of proposition 2 above. ⊣
Proposition 7 The maximum number of nodes in a simplified dependency tree simplify(τ ) over an n-dependent
knowledge base KB containing k rules, the largest of which
contains l literals in its body, is O(k · l).
Sketch of the Proof. Again, we proceed by showing there
can be no more than k internal nodes in simplify(τ ).
Suppose a literal ξ appears at two distinct levels in τ λ . By
similar reasoning to the proof of proposition 4, there cannot
be nodes ν1 , ν2 ∈ simplify(τ ) such that ν1 = ν2 = ξ.
Hence, there can only be at most k internal nodes, and therefore at most O(k · l) nodes, in simplify(τ ). ⊣
4.5. Contraction for n-dependent Bases
Definition 11 Given a path π through a dependency tree τ (with at least depth m) over an n-dependent knowledge base KB, we define a level m alternative π∗m = [ν1, · · · , νk≥m] to π = [µ1, · · · , µk′] to be any path through τ such that:

1. at each level i < m of τ, νi = µi;
2. if µm ∈ π is a ̺-child of µm−1 ∈ π, then νm ∈ π∗m is not a ̺-child of νm−1 ∈ π∗m; and
3. π∗m ends at a leaf of τ.
The idea of an alternative to a path π is that it agrees with
π in the choice of arc type (i.e., whether it travels through a
̺-arc or a ς-arc or . . . ) up to a certain level, at which it differs. After that level, the path and its alternative may converge again; the important point is that they diverge at the
specified level. Since our definition of a path (definition 8
above) states that a path must begin at the root, any alternative path must do so also.
It is sometimes convenient to talk of subpaths of
a path rather than of the entire path. A sequence
of nodes π ′ = [µ1 · · · µn ] is a subpath of π when
π = [ν1 · · · µ1 · · · µn · · · νm ]. When a subpath π ′ of π begins at the root node (as π must), we say that π extends
π′ .
Suppose we mark a path π through a dependency tree τ and, for some level l, mark a set P of all level l alternatives to π, such that no two paths in P ∪ {π} travel from level l − 1 to level l through the same type of arc. Since this set depends on our choice of path and level, we write Pπl to indicate which set we intend. We thus mark one path for each arc type from level l − 1 to level l.
We have to ensure that, whenever any path π ′ that we
have marked passes through a node of a tree τ , we also mark
all alternatives to π ′ at the level of that node. We write Pπ
to denote the set of all such marked paths.
Definition 12 Given a path π through a dependency tree τ, Pπ is then any set which minimally satisfies: π ∈ Pπ and, for every level l of τ and every path π′ ∈ Pπ, Pπ′l ⊆ Pπ.
To illustrate how path selection works, consider the 3-dependent knowledge base:

    KB = { λ ← λ1, λ2,   λ ← ξ1, ξ2,   λ ← ζ1, ζ2,
           λ1 ← α1, α2,  λ1 ← β,       ξ2 ← γ1, γ2,
           ξ2 ← δ1, δ2,  α1, α2, λ2, ξ1, δ1, δ2 }
Figure 4 shows a dependency tree τ λ for λ in KB. The arcs
from λ to its children have been labelled as either a, b or c
arcs, such that two children of λ are reached via similarly
labelled arcs only when both literals appear together in the
body of some rule R ∈ KB whose head is λ.
[Figure 4 shows the dependency tree for λ: λ has a-children λ1 and λ2, b-children ξ1 and ξ2, and c-children ζ1 and ζ2, with same-labelled arcs joined by ⌣. Below, λ1 has a-children α1 and α2 and a b-child β, while ξ2 has a-children γ1 and γ2 and b-children δ1 and δ2.]

Figure 4. Dependency tree for λ in KB
We begin by first marking any path through τ λ . Suppose
we mark the path π = [λ, λ1 , α1 ]. There are no level 1 alternatives to π, for only one literal, λ, appears at level 1.
Since π travels from λ at level 1 to λ1 at level 2 through
an a-arc, the level 2 alternatives to π are all paths extending one of the following subpaths:
π1′ = [λ, ξ1 ] π2′ = [λ, ξ2 ] π3′ = [λ, ζ1 ] π4′ = [λ, ζ2 ]
The set Pπ2 must then contain all level 2 alternatives to π
which travel from level 1 to level 2 through a distinct type
of node. Thus, if we choose a path which extends either
π1 or π2 , we then have to choose a path which extends either π3 or π4 and vice versa. We could, for example, mark
the paths π2′ = [λ, ξ2 , γ2 ] (which extends π2 ) and π3 (for π3
marks a leaf and so is a path in its own right).
We then need to consider level 3 alternatives to all paths included in Pπ2: we need a level 3 alternative to π, to [λ, ξ2, γ2] and to π3′. In the case of π, we have only one option: since π travels to α1 via an a-arc, we need to pick a b-child of λ1, and the only such child is β. We thus add the path [λ, λ1, β] to Pπ3. In the case of [λ, ξ2, γ2], we can choose either [λ, ξ2, δ1] or [λ, ξ2, δ2]: suppose we opt for the latter path. Finally, since π3′ has already reached a leaf, ζ1, it has no level 3 alternatives. Since the paths we have selected in Pπ3 all travel to a leaf of τλ, we only need to combine Pπ2 with Pπ3 to form Pπ. These paths are highlighted in figure 5.
[Figure 5 repeats the tree of figure 4 with the selected paths highlighted: [λ, λ1, α1], [λ, λ1, β], [λ, ξ2, γ2], [λ, ξ2, δ2] and [λ, ζ1].]

Figure 5. Selecting alternative paths
Pπ is a set of paths, which are themselves no more than
sequences of literals. To obtain the set of literals marked by
those paths, we only need to remove the structure of Pπ . We
use a function ℓ, which takes a set of sequences of literals
and returns the set of literals featuring in those sequences.
Writing X\Y for {x | x ∈ X ∧ x ∉ Y}, we can now define a contraction operator KB ∸π for some path π as:

KB ∸π λ = KB \ ℓ(Pπ)

In our example, ℓ(Pπ) = {λ, λ1, ξ2, ζ2, α1, β, γ2, δ2}; so

KB ∸π λ = { λ ← λ1, λ2,   λ ← ξ1, ξ2,   λ ← ζ1, ζ2,
            λ1 ← α1, α2,   λ1 ← β,   ξ2 ← γ1, γ2,
            ξ2 ← δ1, δ2,   α2, λ2, ξ1, δ1 }
Notice that whilst λ is a consequence of KB, it is no longer a consequence after contraction. We now show that this holds in the general case.
Proposition 8 For any knowledge base KB and literal λ, KB \ ℓ(Pπ) ⊬∗ λ, where π is any path through a dependency tree τ^λ for λ over KB and Pπ is as above.
Proof. By induction on the number of rules. When KB contains exactly one rule R, KB ⊢∗ λ iff KB ⊢ λ iff λ = head(R) and body(R) ⊂ KB. Since any path π through a tree τ^λ over KB marks some child ξ of the root, we have λ, ξ ∈ ℓ(Pπ) and hence KB \ ℓ(Pπ) ⊬∗ λ.

We then make the following inductive hypothesis: for all knowledge bases KB′ containing 1 < j < k rules, for any path π′ through the tree τ^λ′ over KB′, we have KB′ \ ℓ(Pπ′) ⊬∗ λ′. Consider a tree τ1^λ over some knowledge base KB1 containing k rules, and a tree τ2^λ over KB2 = KB1 \ R, for some rule R ∈ KB1 such that head(R) is some node ν of τ1. τ2 is thus a subtree of τ1; by hypothesis, proposition 8 follows if, for every ξ ∈ ℓ(Pπ2), KB1 \ ℓ(Pπ1) ⊬ ξ, where π1 ⊇ π2. We have two cases to consider:

- ν ∉ ℓ(Pπ2): then ℓ(Pπ1) = ℓ(Pπ2). Since KB1 \ ℓ(Pπ1) ⊬ ξ for any ξ ∈ ℓ(Pπ1) (definition 3), it follows that KB1 \ ℓ(Pπ1) ⊬ ξ for any ξ ∈ ℓ(Pπ2).

- ν ∈ ℓ(Pπ2): since KB1 = KB2 ∪ {R} and head(R) = ν, it is sufficient to show KB1 \ ℓ(Pπ1) ⊬ ν. Suppose level(ν) = l; then µ ∈ ℓ(Pπl) for some µ ∈ body(R) (from definitions 10 and 12) and hence µ ∈ ℓ(Pπ1). Given the definition of ⊢, KB1 \ ℓ(Pπ1) ⊬ ν follows immediately.   ⊣
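The worked example can also be checked mechanically. The following sketch (ours, not the paper's implementation; the helper names `closure` and `ell` are our own, and the facts of KB are reconstructed from the contraction result in the text) confirms that λ is derivable before contraction but not after:

```python
# Path-based contraction on the running example. Rules are read off the
# contraction result in the text; the facts of KB are an assumption
# reconstructed from that result.

rules = [
    ("λ",  {"λ1", "λ2"}), ("λ",  {"ξ1", "ξ2"}), ("λ",  {"ζ1", "ζ2"}),
    ("λ1", {"α1", "α2"}), ("λ1", {"β"}),
    ("ξ2", {"γ1", "γ2"}), ("ξ2", {"δ1", "δ2"}),
]
facts = {"λ1", "λ2", "ξ1", "ξ2", "ζ2", "α1", "α2", "β", "γ2", "δ1", "δ2"}

def closure(rules, facts):
    """Forward chaining: all literals derivable under the rules (the ⊢* relation)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in known and body <= known:
                known.add(head)
                changed = True
    return known

def ell(paths):
    """ℓ: collapse a set of paths (sequences of literals) into a set of literals."""
    return {lit for path in paths for lit in path}

# ℓ(Pπ) as computed in the text; contraction removes these literals
# from KB and leaves the rules untouched.
removed = {"λ", "λ1", "ξ2", "ζ2", "α1", "β", "γ2", "δ2"}

before = "λ" in closure(rules, facts)            # λ is a consequence of KB
after = "λ" in closure(rules, facts - removed)   # ... but not of KB ∸π λ
```

Since the rules themselves are never removed, the operation stays linear in the size of the knowledge base, as the paper's complexity claim requires.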
As mentioned in section 4.2 above, our operation does
not guarantee the removal of a minimal set of literals from
a knowledge base. The problem of deciding which set of
paths to remove is distinct from the issues addressed here,
and is left for future work.
4.6. Resource Bounded Approaches
We have only considered the contraction operation here
in detail; however, we may also wish to consider a query operation ?λ, which asks “does λ follow from the assumptions
in the agent’s knowledge base KB by applying the agent’s
rules?” We can use the idea of a dependency tree τ for λ
that we developed above in answering the query. In the simple case of a 1-dependent rule base, for example, we only
need to check whether all the leaves of τ can be found in
KB.
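As a sketch (our illustration, not the paper's algorithm), the query operation ?λ amounts to backward chaining over the rule base: expand λ's dependency tree and succeed as soon as some rule's body bottoms out entirely in KB. We assume an acyclic rule base so the recursion terminates:

```python
# ?λ as backward chaining: λ is supported if it is a fact in KB, or if some
# rule with head λ has every body literal supported in turn.
# Assumes the rule base is acyclic.

def query(goal, rules, facts):
    if goal in facts:
        return True
    return any(all(query(b, rules, facts) for b in body)
               for head, body in rules
               if head == goal)

# A toy 1-dependent rule base: the tree for "p" has leaves "q" and "r".
rules = [("p", {"q", "r"})]
facts = {"q", "r"}
ok = query("p", rules, facts)   # all leaves of the tree are in KB
```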
Suppose an agent is asked to make a snap decision, for
example, whether it would like to come to the meeting
starting now. Since a near-immediate answer is required, it
would be foolish to spend too long building a dependency
tree for “I’d like to attend the meeting.” One would expect
to briefly consider the matter and try to find either a conclusive reason for accepting or for rejecting the offer (such as
having a prior engagement).
Depending on the disposition of the agent, it may be able
to make a decision without a decisive reason (either way);
for example, an agent who has a natural disposition towards
avoiding meetings may, in the absence of a conclusive reason to attend (such as the meeting being compulsory), wish
to decline the offer. Given a time limit, the agent may start
to build a dependency tree for “I’d like to attend the meeting” and, unless it can find literals {λ1, . . . , λk} ⊆ KB upon
which the root of the tree depends, decline the offer. Note
that this may be the case even if KB entails “I’d like to attend the meeting”!
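This deadline-bounded behaviour can be sketched as a query with an expansion budget (again our own illustration: `bounded_query` and its `budget` parameter are hypothetical names). Once the budget is exhausted, the agent defaults to declining, even when its knowledge base does entail the goal:

```python
# Bounded query: spend at most `budget` expansion steps building the
# dependency tree; answer False (decline) once the budget runs out,
# regardless of what KB actually entails.

def bounded_query(goal, rules, facts, budget):
    if budget[0] <= 0:          # deadline reached: give up on this branch
        return False
    budget[0] -= 1
    if goal in facts:
        return True
    return any(all(bounded_query(b, rules, facts, budget) for b in body)
               for head, body in rules
               if head == goal)

rules = [("attend", {"invited", "free"})]
facts = {"invited", "free"}
hasty = bounded_query("attend", rules, facts, [1])     # out of time: decline
patient = bounded_query("attend", rules, facts, [10])  # enough time: KB entails it
```

The hasty call illustrates the point in the text: the answer diverges from what KB entails precisely because the resource bound cut the search short.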
It is this very approach which is made possible by our notion of resource bounded belief revision; one simply cannot modify AGM-style approaches to this kind of partial dependency checking. Consider the kernel operation B⊥⊥α, which finds all minimal subsets of a knowledge base B (not necessarily defined as restrictively as KB above) which imply a formula α. Contraction can be defined in terms of ⊥⊥ by removing one formula from each set X ∈ B⊥⊥α. Now, clearly any subset of B either does entail α or does not; however, we have seen an example in which an agent’s belief base may well entail “I’d like to go to the meeting” but in which the agent, because of limited time in which to decide, declines the offer.
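By contrast with the linear-time path contraction, a direct computation of the kernel operation must in the worst case test every subset of B. The brute-force sketch below (our illustration; facts are represented as rules with empty bodies) makes that exponential cost visible:

```python
# B ⊥⊥ α by brute force: enumerate every subset of B and keep the
# subset-minimal ones that entail α. Exponential in |B|.

from itertools import combinations

def entails(items, goal):
    """⊢* over a mixed base where facts are rules with empty bodies."""
    known = set()
    changed = True
    while changed:
        changed = False
        for head, body in items:
            if head not in known and body <= known:
                known.add(head)
                changed = True
    return goal in known

def kernel_sets(base, alpha):
    """All subset-minimal X ⊆ base with X ⊢* alpha."""
    kernels = []
    for size in range(len(base) + 1):       # smallest subsets first,
        for combo in combinations(base, size):  # so minimality is easy to check
            s = set(combo)
            if entails(s, alpha) and not any(k < s for k in kernels):
                kernels.append(s)
    return kernels

base = [("q", frozenset()), ("r", frozenset()),
        ("p", frozenset({"q"})), ("p", frozenset({"r"}))]
kernels = kernel_sets(base, "p")   # two kernels: {q, p←q} and {r, p←r}
```

Contracting by p would then remove one element from each kernel; the point of the comparison is that even finding the kernels requires inspecting exponentially many subsets, which a deadline-bound agent cannot afford.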
5. Conclusions and Further Work
We began by suggesting cases in which belief revision
operations based on AGM postulates do not appear suitable
for agents with limited resources, such as limited memory or a deadline imposed on their reasoning. In this
preliminary investigation, we have only considered a contraction operation for resource bounded rule based agents,
but have highlighted how the approach we suggest can overcome these problems, firstly by having favourable complexity results – the complexity of our contraction operator is
linear in the number of rules an agent has – and secondly by
showing how the strategy we adopt for contraction can be
adapted to meet fixed deadlines.
If an agent is required to make a snap decision, or unexpectedly runs out of time whilst reasoning, our notion
of contraction allows agents to respond in a suitable way,
given these limitations imposed upon them. We would not
expect any useful agent, faced with such limited resources,
to maintain a perfectly consistent knowledge base; in fact,
given the complexity of the operations required to do so, it would often be irrational for an agent to make the attempt.
In future work, we aim to extend the approach presented here to include other revision operators; to look at more advanced agents, such as a full first-order reasoner which can derive new rules from its rule base; to implement algorithms along the lines briefly sketched above; and to formalise these notions of resource bounded belief revision within a multi-agent epistemic framework such as Timed Reasoning Logics [2, 3].
References
[1] Carlos E. Alchourrón, Peter Gärdenfors, and David Makinson, “On the Logic of Theory Change: Partial Meet Contraction and Revision Functions.” Journal of Symbolic Logic 50
(1985) pp.510–530
[2] N. Alechina, B. Logan and M. Whitsey, “Epistemic Logic for
Rule-Based Agents,” proc. JELIA 04, Lisbon, Portugal. Lecture Notes on Artificial Intelligence (3229), Springer-Verlag.
[3] N. Alechina, B. Logan and M. Whitsey, “A Complete and Decidable Logic for Resource Bounded Agents,” proc. AAMAS 04 (Columbia University, New York: ACM).
[4] R. Fagin and J.Y. Halpern, “Belief, Awareness and Limited
Reasoning”, proc. Ninth International Joint Conference on
AI (1985) pp.491–50
[5] R. Fagin, J.Y. Halpern, Y. Moses and M.Y. Vardi, Reasoning
about Knowledge, MIT Press, Cambridge, Mass. (1995)
[6] Mark Jago, “Logical Omniscience: A Survey”, University of Nottingham working paper: http://www.cs.nott.ac.uk/~mtw/papers/survey.pdf
[7] H.J. Levesque, “A Logic of Implicit and Explicit Belief”,
proc. National Conference on Artificial Intelligence (AAAI
’84) (1984) pp. 198–202
[8] R. Reiter and J. de Kleer, “Foundations of Assumption Based Truth Maintenance Systems: Preliminary Report”, proc. AAAI’87 (1987) pp. 183–188
[9] R. Wasserman, “Resource-Bounded Belief Revision”, PhD
thesis, ILLC, University of Amsterdam (ILLC Dissertation
Series 2000-01)