Hierarchical Decision Making By Autonomous Agents
Stijn Heymans, Davy Van Nieuwenborgh⋆, and Dirk Vermeir⋆⋆
Dept. of Computer Science
Vrije Universiteit Brussel, VUB
Pleinlaan 2, B1050 Brussels, Belgium
{sheymans,dvnieuwe,dvermeir}@vub.ac.be
Abstract. Often, decision making involves autonomous agents that are structured in a complex hierarchy, representing e.g. authority. Typically the agents
share the same body of knowledge, but each may have its own, possibly conflicting, preferences on the available information.
We model the common knowledge base for such preference agents as a logic
program under the extended answer set semantics, thus allowing for the defeat of
rules to resolve conflicts. An agent can express its preferences on certain aspects
of this information using a partial order relation on either literals or rules. Placing
such agents in a hierarchy according to their position in the decision making
process results in a system where agents cooperate to find solutions that are jointly
preferred.
We show that a hierarchy of agents with either preferences on rules or on literals
can be transformed into an equivalent system with just one type of preferences.
Regarding expressiveness, the formalism essentially covers the polynomial hierarchy, e.g. the membership problem for a hierarchy of depth n is Σ^P_{n+2}-complete. We illustrate an application of the approach by showing how it can
easily express a generalization of weak constraints, i.e. “desirable” constraints
that do not need to be satisfied but where one tries to minimize their violation.
1 Introduction
In answer set programming[16, 2] one uses a logic program to modularly describe the
requirements that must be fulfilled by the solutions to a particular problem, i.e. the answer sets of the program must correspond to the intended solutions of the problem. The
technique has been successfully applied to the area of agents and multi-agent systems[3,
8, 26]. While [3] and [8] use the basic answer set semantics to represent the agents' domain knowledge, [26] applies an extension of the semantics incorporating preferences
among choices in a program.
The idea of extending answer set semantics with some kind of preference relation is
not new. We can identify two directions for these preference relations on programs. On
the one hand, we can equip a logic program with a preference relation on the rules [18,
⋆ Supported by the FWO.
⋆⋆ This work was partially funded by the Information Society Technologies programme of the European Commission, Future and Emerging Technologies under the IST-2001-37004 WASP project.
17, 15, 10, 7, 5, 27, 1, 22], while on the other hand we can consider a preference relation
on the (extended) literals in the program: [21] proposes explicit preferences while [4, 6]
encode dynamic preferences within the program.
The traditional answer set semantics is not universal, i.e. programs may not have any
answer sets at all. This behavior is not always desirable, e.g. a route planner agent may
contain inconsistent information regarding some particular regions in Europe, which
should not stop it from providing travel directions in general. The extended answer set
semantics from [22, 23] allows for the defeat of problematic rules. Take, for example,
the program consisting of a ← b, b ← and ¬a ← . Clearly this program has no answer
sets. It has, however, extended answer sets {a, b}, where the rule ¬a ← is defeated by
the applied a ← b, and {¬a, b}, where a ← b is defeated by ¬a ← .
However, not all extended answer sets may be equally preferred by the involved
parties: users traveling in “error free” regions of Europe do not mind faults in answers
concerning the problematic regions, in contrast to users traveling in these latter regions
that want to get a “best” approximation. Therefore, we extend the above semantics
by equipping programs with a preference relation over either the rules or the literals
in a program. Such a preference relation can be used to induce a partial order on the
extended answer sets, the minimal elements of which will be preferred.
Different agents may exhibit different, possibly contradicting, preferences, that need
to be reconciled into commonly accepted answer sets, while taking into account the
relative authority of each agent.
For example, sending elderly workers on early pension, reducing the wages, or sacking people are some of the measures that an ailing company may consider. On the other
hand, management may be asked to sacrifice expense accounts and/or company cars.
Demanding efforts from the workers without touching the management leads to a bad
image for the company. Negotiations between three parties are planned: shareholders,
management and unions. The measures under consideration, together with the influence
on the company’s image are represented by the extended answer sets
M1 = {bad image, pension}
M2 = {bad image, wages}
M3 = {¬bad image, expense, wages}
M4 = {¬bad image, expense, sack}
M5 = {¬bad image, car, wages} .
The union representative, who is not allowed to reduce the options of the management, has a preference for the pension option over the wages reduction over the sacking option of people, not taking into account the final image of the company, i.e. pension < wages < sack < {bad image, ¬bad image}. This preference strategy will result in M1 being better than M2, while M3 is preferred over M4. Furthermore, M5 is incomparable w.r.t. the other options. Thus M1, M3 and M5 are the
choices to be defended by the union representative. Management, on the other hand,
would rather give up its expense account than its car, regardless of company image, i.e.
expense < car < {bad image, ¬bad image}, yielding M1, M3 and M4 as negotiable
decisions for the management.
Finally, the shareholders take only into account the decisions that are acceptable to
both the management and the unions, i.e. M1 and M3 , on which they apply their own
preference ¬bad image < bad image, i.e. they do not want their company to get a bad
image. As a result, M3 ❁ M1 , yielding that M3 is the preferred way to go to save the
company, taking into account each party’s preferences.
Decision processes like the one above are supported by agent hierarchies, where a
program, representing the shared world of agents, is equipped with a tree of preference
relations on either rules or literals, representing the hierarchy of agents preferences.
Semantically, preferred extended answer sets for such systems result from first optimizing w.r.t. the lowest agents in the hierarchy, then grouping the results according to the hierarchy and letting the agents on the next level optimize these results, etc. Thus,
each agent applies its preferences on a selection of the preferred answers of the agents
immediately below it in the hierarchy, where the lowest agents apply their preferences
directly on the extended answer sets of the shared program.
Reconsidering our example results in the system depicted below, i.e. union and
management prefer directly, and independently, among all possible solutions, while
the shareholders only choose among the solutions preferred by both union and management, obtaining a preferred solution for the complete system.
[Figure: the agent hierarchy for the negotiation example. Starting from the extended answer sets M1, M2, M3, M4, M5, the agent <union selects M1, M3, M5 and the agent <management selects M1, M3, M4; the and-node joining them passes on M1, M3, on which <shareholders finally selects M3.]
Such agent hierarchies turn out to be rather expressive. More specifically, we show
that such systems can solve arbitrary complete problems of the polynomial hierarchy.
We also demonstrate how systems with combined preferences, i.e. either on literals or
on rules, can effectively be reduced to systems with only one kind of preference.
Finally, we introduce a generalization of weak constraints[9], which are constraints
that should be satisfied but may be violated if there are no other options, i.e. violations
of weak constraints should be minimized. Weak constraints have useful applications in
areas like planning, abduction and optimizations from graph theory[13, 11]. We allow
for a hierarchy of agents having their individual preferences on the weak constraints
they wish to satisfy in favor of others. We show that the original semantics of [9] can
be captured by a single preference agent.
The remainder of the paper is organized as follows. In Section 2, we present the extended answer set semantics together with the hierarchy of preference agents, enabling
hierarchical decision making. The complexity of the proposed semantics is discussed
in Section 3. Before concluding and giving directions for further research in Section 5,
we present in Section 4 a generalization of weak constraints and show how the original
semantics can be implemented. Due to lack of space, detailed proofs have been omitted.
2 Agent Hierarchies
We give some preliminaries concerning the extended answer set semantics[22]. A literal
is an atom a or a negated atom ¬a. An extended literal is a literal or a literal preceded
by the negation-as-failure symbol not. A program is a countable set of rules of the form
α ← β with α a set of literals, |α| ≤ 1, and β a set of extended literals. If α = ∅, we
call the rule a constraint. The set α is the head of the rule while β is called the body.
We will often denote rules either as a ← β or, in the case of constraints, as ← β.
For a set X of literals, we take ¬X = {¬l | l ∈ X}, where ¬¬a is a; X is consistent if X ∩ ¬X = ∅. The positive part of the body is β+ = {l | l ∈ β, l a literal}, the negative part is β− = {l | not l ∈ β}, e.g. for β = {a, not ¬b, not c}, we have that β+ = {a} and β− = {¬b, c}. The Herbrand Base B_P of a program P is the set of all atoms that can be formed using the language of P. Let L_P be the set of literals and L*_P the set of extended literals that can be formed with P, i.e. L_P = B_P ∪ ¬B_P and L*_P = L_P ∪ {not l | l ∈ L_P}. An interpretation I of P is any consistent subset of L_P. For a literal l, we write I |= l if l ∈ I, which extends to extended literals not l by I |= not l if I ⊭ l. In general, for a set of extended literals X, I |= X if I |= x for every extended literal x ∈ X. A rule r : a ← β is satisfied w.r.t. I, denoted I |= r, if I |= a whenever I |= β, i.e. r is applied whenever it is applicable. A constraint ← β is satisfied w.r.t. I if I ⊭ β. The set of satisfied rules in P w.r.t. I is the reduct P_I. For a simple program P (i.e. a program without not), an interpretation I is a model of P if I satisfies every rule in P, i.e. P_I = P; it is an answer set of P if it is a minimal model of P, i.e. there is no model J of P such that J ⊂ I. For programs P containing not, we define the GL-reduct of P w.r.t. an interpretation I as P^I, where P^I contains α ← β+ for every α ← β in P with β− ∩ I = ∅. I is an answer set of P if I is an answer set of P^I. A rule a ← β is defeated w.r.t. I if there is a competing rule ¬a ← γ that is applied w.r.t. I, i.e. {¬a} ∪ γ ⊆ I. An extended answer set I of a program P is an answer set of P_I such that all rules in P \ P_I are defeated.
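To make these definitions concrete, here is a small Python sketch (our illustration, not part of the original paper; all names are ours). A literal is a string, classical negation is a leading "-", and a rule is a triple (head, pos, naf) with head a literal or None for a constraint, pos the positive body literals, and naf the literals occurring under not.

def neg(l):
    # classical complement of a literal
    return l[1:] if l.startswith("-") else "-" + l

def holds(I, pos, naf):
    # I |= body: positive part true, negation-as-failure part false
    return pos <= I and not (naf & I)

def satisfied(rule, I):
    head, pos, naf = rule
    if head is None:                               # constraint: body must fail
        return not holds(I, pos, naf)
    return (not holds(I, pos, naf)) or head in I   # applied whenever applicable

def defeated(rule, I, P):
    # a rule a <- beta is defeated if a competing rule neg(a) <- gamma is applied w.r.t. I
    head, pos, naf = rule
    return head is not None and any(
        h == neg(head) and h in I and holds(I, p, n) for (h, p, n) in P)

def is_extended_answer_set(I, P):
    I = set(I)
    if any(neg(l) in I for l in I):
        return False                               # interpretations must be consistent
    if any(not defeated(r, I, P) for r in P if not satisfied(r, I)):
        return False                               # every unsatisfied rule must be defeated
    PI = [r for r in P if satisfied(r, I)]         # the reduct P_I
    # GL-reduct of P_I w.r.t. I; constraints in P_I are already satisfied w.r.t. I
    gl = [(h, p) for (h, p, n) in PI if not (n & I) and h is not None]
    M, changed = set(), True                       # least model of the remaining positive rules
    while changed:
        changed = False
        for h, p in gl:
            if p <= M and h not in M:
                M.add(h)
                changed = True
    return M == I                                  # I must coincide with that least model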
Example 1. Take a program P expressing an intention to vote for either the Democrats
or the Greens. Voting for the Greens will, however, weaken the Democrats, possibly
resulting in a Republican victory. Furthermore, you have a Republican friend who may
benefit from a Republican victory.
¬dem vote ←
dem vote ←
rep win ← green vote
green vote ← not dem vote
fr benefit ← rep win
¬fr benefit ← rep win
This program results in 3 different extended answer sets M1 = {dem vote}, M2 =
{¬dem vote, green vote, rep win, fr benefit}, and M3 = {¬dem vote, green vote,
rep win, ¬fr benefit}.
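Using the sketch above, Example 1 can be encoded as follows (the underscored atom names are our rendering of the paper's atoms); all three interpretations pass the test.

P = [("-dem_vote", set(), set()),
     ("dem_vote", set(), set()),
     ("rep_win", {"green_vote"}, set()),
     ("green_vote", set(), {"dem_vote"}),
     ("fr_benefit", {"rep_win"}, set()),
     ("-fr_benefit", {"rep_win"}, set())]

for M in ({"dem_vote"},
          {"-dem_vote", "green_vote", "rep_win", "fr_benefit"},
          {"-dem_vote", "green_vote", "rep_win", "-fr_benefit"}):
    assert is_extended_answer_set(M, P)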
As mentioned in the introduction, the background knowledge for agents will be
described by a program P . Agents can express individual preferences either on extended
literals or on rules of P , corresponding to literal and rule agents respectively.
Definition 1. Let P be a program. A rule agent (RA) A for P is a well-founded strict partial order¹ < on the rules of P. The order < induces a relation ⊑ among interpretations M and N of P, such that M ⊑ N iff ∀r2 ∈ PN \PM · ∃r1 ∈ PM \PN · r1 < r2.
¹ A strict partial order on X is an anti-reflexive and transitive relation on X. A strict partial order on a finite X is well-founded, i.e. every non-empty subset of X has a minimal element w.r.t. <.
A literal agent (LA) for P is a strict well-founded partial order < on L*_P, and M ⊑ N iff ∀n ∈ {l ∈ L*_P | N |= l ∧ M ⊭ l} · ∃m ∈ {l ∈ L*_P | M |= l ∧ N ⊭ l} · m < n.
The extended answer sets of an agent for P correspond to the extended answer sets
for P . As usual, we have M ❁ N iff M ⊑ N and not N ⊑ M . A preferred answer set
M is an extended answer set that is minimal w.r.t. ❁ among the extended answer sets.
Note that a RA < for P corresponds to an ordered logic program (OLP) ⟨P, <⟩ from
[22].
We refer to the order of an agent A with <_A and ❁_A. Intuitively, for rule agents, an extended answer set M is "better" than N if each rule that is satisfied by N but not by M is countered by a better rule satisfied by M and not by N. Similarly, for literal
agents we have that M ⊑ N if every extended literal that is true in N , but not in M , is
countered by a better one true in M but not in N .
E.g., define a rule agent (fr benefit ← rep win) < (¬fr benefit ← rep win) for the program P in Example 1, indicating that one would rather satisfy the former rule than the latter. We have, with P_M1 = P \ {¬dem vote ←}, P_M2 = P \ {dem vote ←, ¬fr benefit ← rep win} and P_M3 = P \ {dem vote ←, fr benefit ← rep win}, that M2 ❁ M3, yielding that M1 and M2 are the only preferred answer sets.
A literal agent might insist on voting for the Democrats: dem vote < L*_P \ {dem vote}, making M1 its only preferred answer set.
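The induced relation ⊑ of Definition 1 and the selection of preferred answer sets can be written down directly; the following is our sketch (less(r1, r2) is assumed to decide the agent's strict order r1 < r2 on rules), reusing satisfied from the earlier sketch.

def ra_sqleq(M, N, P, less):
    # M is at least as good as N: every rule satisfied by N but not by M is
    # countered by a smaller (better) rule satisfied by M but not by N
    PM = [r for r in P if satisfied(r, M)]
    PN = [r for r in P if satisfied(r, N)]
    only_M = [r for r in PM if r not in PN]
    only_N = [r for r in PN if r not in PM]
    return all(any(less(r1, r2) for r1 in only_M) for r2 in only_N)

def preferred_answer_sets(candidates, sqleq):
    # minimal elements w.r.t. the strict order induced by the relation sqleq
    better = lambda M, N: sqleq(M, N) and not sqleq(N, M)
    return [M for M in candidates
            if not any(better(N, M) for N in candidates if N is not M)]

A literal agent is handled analogously, comparing the extended literals true in N but not in M against those true in M but not in N.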
The cooperation of agents for a program P is established by arranging them in a tree-structure², such that decisions are made bottom-up, starting with agents that have no successors, all the way up in the hierarchy to the root agent, each agent processing the results of its successor agents. Formally, an agent hierarchy (AH) is a pair ⟨P, T⟩ where P is a program and T is a finite and/or-tree of agents A for P.
We will denote the root agent A of the tree T with Aε. The m successors of an agent Ax are denoted as Ax·1, ..., Ax·m. An agent without successors is called an independent agent; other agents are dependent. An agent associated with an and-node (or-node) will be called an and-agent (or-agent). We define what it means for an extended answer
set to be preferable by a certain agent in the hierarchy.
Definition 2. Let ⟨P, T⟩ be an AH. An extended answer set M of P is preferable by an
independent agent A of T if M is a preferred answer set of A for P . An extended answer
set M of P is preferable by a dependent and-agent (or-agent) Ax , with m successors,
if
– M is preferable by every (some) Ax·i , 1 ≤ i ≤ m, and
– there is no N , preferable by every (some) Ax·j , 1 ≤ j ≤ m, such that N ❁Ax M .
An extended answer set M of P is preferred if it is preferable by Aε .
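Definition 2 can be read as a bottom-up recursion over the tree; the following sketch uses our own Agent representation (field names and the "and"/"or" flag are ours) and the preferred_answer_sets helper from the previous sketch.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Agent:
    sqleq: Callable                       # the agent's induced relation on interpretations
    kind: str = "and"                     # "and" or "or" node
    children: List["Agent"] = field(default_factory=list)

def preferable(agent, answer_sets):
    # the answer sets preferable by `agent`, as in Definition 2
    if not agent.children:                           # independent agent
        pool = answer_sets
    else:
        results = [preferable(c, answer_sets) for c in agent.children]
        keep = all if agent.kind == "and" else any   # preferable by every/some successor
        pool = [M for M in answer_sets if keep(M in r for r in results)]
    return preferred_answer_sets(pool, agent.sqleq)  # minimal w.r.t. the agent's own order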
Rule agents (OLPs) are rather convenient for formulating diagnostic problems[24, 25], using "normal" and "fault" model rules to describe the system under consideration, where the former are preferred over the latter. Examples of this approach can be found in [24, 25], where it has also been shown that the OLP semantics yields minimal possible explanations. However, to decide which explanations to check first, an engineer
² For simplicity we restrict ourselves to trees; however, the results remain valid for any well-founded strict partial order of agents that has a unique maximal agent.
typically uses another preference order, preferring e.g. explanations that are cheaper to
verify. Such a situation can be modelled by an agent hierarchy containing an extra agent
“above” the diagnostic RA. More generally, one may imagine situations where multiple engineers each have their own experience (expressed by preference) and where the
head of the group has to take the final decision on which possible explanations to check first, taking into account the proposals of her colleagues. Such systems can easily be
expressed using the proposed framework.
Reasoning w.r.t. agent hierarchies, containing both rule and literal agents, can be
reduced to reasoning w.r.t. rule agent hierarchies (RAHs) or literal agent hierarchies
(LAHs), i.e. hierarchies containing only rule or literal agents.
We show the reduction from RAs to LAs and vice versa. For the reduction of RAs
for P to LAs, we introduce for every rule r in P a corresponding atom that is in an
answer set iff r is satisfied. Intuitively, the newly introduced atoms will be ordered
according to the original order on the rules they correspond with.
Theorem 1. Let P be a program and R = {ri ← not b | ri : α ← β ∈ P, b ∈
β + } ∪ {ri ← b | ri : α ← β ∈ P, b ∈ β − } ∪ {ri ← a | ri : a ← β ∈ P } with a
new atom ri for each rule ri in P . M is a preferred answer set of a RA Ar for P iff
M ′ = M ∪ {ri | ri ∈ PM } is a preferred answer set of the LA Al for P ∪ R where
{ri } <Al L∗P ∪ not({ri }) with additionally ri <Al rj iff ri <Ar rj .
Moreover, preferred answer sets of a LA for P are in one to one correspondence
with the preferred answer sets of a LA for P ∪ R by simply ignoring the newly added
atoms ri .
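In the running Python notation, the set R of Theorem 1 can be generated mechanically; the fresh atom names r0, r1, ... below are our choice.

def reduction_R(P):
    R = []
    for i, (head, pos, naf) in enumerate(P):
        ri = "r%d" % i                        # fresh atom for rule i
        for b in pos:
            R.append((ri, set(), {b}))        # ri <- not b,  for b in beta+
        for b in naf:
            R.append((ri, {b}, set()))        # ri <- b,      for b in beta-
        if head is not None:
            R.append((ri, {head}, set()))     # ri <- a,      for a rule with head a
    return R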
Theorem 2. Let P be a program and R as in Theorem 1. M is a preferred answer set
of a LA A for P iff M ′ = M ∪ {ri | ri ∈ PM } is a preferred answer set of the LA
A′ for P ∪ R where <A′ is equal to <A with additionally k <A′ L∗P ∪R \L∗P for every
extended literal k appearing in <A .
The opposite simulation of a LA by a RA can be done by introducing for each
literal l and its extended version not l rules l ′ ← and ¬l ′ ← and ordering those rules
according to the order on the extended literals.
Theorem 3. Let P be a program and L = {l ′ ← ; ¬l ′ ← | l ∈ LP }∪{ ← l ′ , not l ; ←
¬l ′ , l | l ∈ LP }. M is a preferred answer set of a LA Al for P iff M ′ = M ∪ {(¬)l ′ |
l ∈ LP , M |= (not)l} is a preferred answer set of the RA Ar for P ∪ L with L <Ar P
and additionally (¬)l ′ ← <Ar (¬)k ′ ← iff (not)l <Al (not)k .
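Dually, the set L of Theorem 3 can be generated as follows (a sketch; we spell the primed atom l′ as p(l) and its classical negation as -p(l)).

def reduction_L(literals):
    L = []
    for l in literals:                         # the literals of L_P, e.g. "a", "-a"
        lp = "p(%s)" % l                       # stands for l'
        L.append((lp, set(), set()))           # l' <-
        L.append(("-" + lp, set(), set()))     # neg l' <-
        L.append((None, {lp}, {l}))            # <- l', not l
        L.append((None, {"-" + lp, l}, set())) # <- neg l', l
    return L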
Example 2. Take a LA Al for P where P consists of the rules b ← a, a ← , and
¬a ← , and ¬a <Al {a, b, not ¬a}. This agent has two extended answer sets {¬a} and
{a, b}, of which the first one is preferred. The corresponding RA Ar is defined by the following program³:
b ← a              a ←                ¬a ←
------------------------------------------------------------------------------
a′ ←               b′ ←               (¬a)′ ←             (¬b)′ ←
¬a′ ←              ¬b′ ←              ¬(¬a)′ ←            ¬(¬b)′ ←
← a′, not a        ← b′, not b        ← (¬a)′, not ¬a     ← (¬b)′, not ¬b
← ¬a′, a           ← ¬b′, b           ← ¬(¬a)′, ¬a        ← ¬(¬b)′, ¬b
³ Rules below the line are smaller than the ones above w.r.t. <Ar.
and (¬a)′ ← <Ar {a′ ←, b′ ←, ¬(¬a)′ ←}. This RA has the preferred answer set {¬a, (¬a)′, ¬a′, ¬b′, ¬(¬b)′} = {¬a} ∪ {(¬)l′ | {¬a} |= (not) l}.
Similarly to Theorem 2, we have that RAs for P can be simulated by RAs for P ∪L.
Theorem 4. Let P be a program and L as in Theorem 3. M is a preferred answer set
of a RA A for P iff M ′ = M ∪ {(¬)l ′ | l ∈ LP , M |= (not)l} is a preferred answer set
of a RA A′ for P ∪ L where <A′ is equal to <A with additionally r <A′ L for every r
in P .
Theorems 1 and 2 allow the simulation of an arbitrary AH by a LAH. This is done
by extending the program P with the set of rules R as in Theorem 1, and by transforming the rule agents to literal agents (Theorem 1), while the literal agents are adapted
according to Theorem 2.
Theorem 5. Let ⟨P, T⟩ be an AH. M is a preferred answer set of ⟨P, T⟩ iff M′ = M ∪ {ri | ri ∈ PM} is a preferred answer set of the LAH ⟨P ∪ R, T′⟩, with T′ defined
as T but with every rule or literal agent replaced by a literal agent as in Theorems 1
and 2.
Similarly, but now with Theorems 3 and 4, we can reduce arbitrary AHs to RAHs.
Theorem 6. Let ⟨P, T⟩ be an AH. M is a preferred answer set of ⟨P, T⟩ iff M′ = M ∪ {(¬)l′ | l ∈ LP, M |= (not) l} is a preferred answer set of the RAH ⟨P ∪ L, T′⟩,
with T ′ defined as T but with every rule or literal agent replaced by a rule agent as in
Theorems 3 and 4.
3 Complexity
We briefly recall some relevant notions of complexity theory (see e.g. [20, 2] for a nice
introduction). The class P (NP) represents the problems that are deterministically (nondeterministically) decidable in polynomial time, while coNP contains the problems
whose complement is in NP.
The polynomial hierarchy, denoted PH, is made up of three classes of problems, i.e. ∆^P_k, Σ^P_k and Π^P_k, k ≥ 0, which are defined as ∆^P_0 = Σ^P_0 = Π^P_0 = P, ∆^P_{k+1} = P^{Σ^P_k}, Σ^P_{k+1} = NP^{Σ^P_k}, and Π^P_{k+1} = coΣ^P_{k+1}. The class P^{Σ^P_k} (NP^{Σ^P_k}) represents the problems decidable in deterministic (nondeterministic) polynomial time using an oracle for problems in Σ^P_k, where an oracle is a subroutine capable of solving Σ^P_k problems in unit time. The class PH is defined by PH = ∪_{k≥0} Σ^P_k. Note that Σ^P_k ⊆ Σ^P_k ∪ Π^P_k ⊆ ∆^P_{k+1} ⊆ Σ^P_{k+1}. In the following, we will usually omit the P-superscript to avoid cluttering up the notation. A language L is called complete for a complexity class C if
both L is in C and L is hard for C. Showing that L is hard is normally done by reducing
a known complete decision problem to a decision problem in L.
First of all, checking whether an interpretation I is an extended answer set of a program P is in P, because (a) checking if each rule in P is either satisfied or defeated w.r.t. I, (b) applying the GL-reduct on P_I w.r.t. I, i.e. computing (P_I)^I, and (c) checking whether the positive program (P_I)^I has I as its unique minimal model, can all be
done in polynomial time.
For an agent A and a program P , checking whether M is not a preferred answer set
is in NP , because one can guess a set N ❁A M in polynomial time, and subsequently
verify that N is an extended answer set of P , which can also be done in P .
On the other hand, the complexity of checking whether an extended answer set M
is not preferable by a certain agent Ax in a hierarchy ⟨P, T⟩ depends on the location of the agent in the tree T. For an agent Ax in T, we denote with d(Ax) the length of the longest path from Ax to an independent agent Ax·y, y ∈ N*, i.e. d(Ax) = max |y| over independent agents Ax·y, where |y| is the length of the string y. We define the depth of T as the length of the longest path from the root, i.e. d(T) = d(Aε).
Lemma 1. Let ⟨P, T⟩ be an AH⁴, and let M be an extended answer set of P. Checking whether M is not preferable by Ax is in Σd(Ax)+1.
Proof. The proof is by induction. In the base case, i.e. Ax is an independent agent, we
have that d(Ax ) = 0. Checking whether M is not preferable by Ax means checking
whether M is not a preferred answer set of the agent Ax for P , which is in NP = Σ1 =
Σd(Ax )+1 .
For the induction step, checking that M is not preferable by a dependent and-agent
(or-agent) Ax with m successors can be done by (a) checking that M is (or is not)
preferable by every (some) Ax·i , 1 ≤ i ≤ m. Since checking whether M is (or is not)
preferable by an Ax·i can be done, by the induction hypothesis, in Σd(Ax·i )+1 , we have
that checking whether M is preferable by an Ax·i is also in C ≡ Σmax1≤i≤m d(Ax·i )+1 ,
and (b) guessing, if M is preferable by every (some) Ax·i , 1 ≤ i ≤ m, an interpretation
N ❁Ax M and checking that it is not the case that N is not preferable by every (some)
Ax·i , 1 ≤ i ≤ m, which is again in C due to the induction hypothesis.
As a result, at most 2m calls are made to a C-oracle and at most one guess is made, yielding that the problem itself is in NP^C = Σ(max1≤i≤m d(Ax·i))+1+1 = Σd(Ax)+1. ⊓⊔
Using the above yields the following theorem about the complexity of AHs.
Theorem 7. Let ⟨P, T⟩ be an AH and l a literal. Deciding whether there is a preferred answer set containing l is in Σd(T)+2. Deciding whether every preferred answer set contains l is in Πd(T)+2.
Proof. The first task can be performed by an NP-algorithm that guesses an interpretation M ∋ l and checks that it is not the case that M is not preferable up to the root agent Aε. Due to Lemma 1, the latter is in Σd(Aε)+1 = Σd(T)+1, so the former is in NP^Σd(T)+1 = Σd(T)+2.
By the previous, finding a preferred answer set M not containing l, i.e. l ∉ M, is in Σd(T)+2. Hence, the complement of the problem is in Πd(T)+2. ⊓⊔
Consider a LAH ⟨P, T⟩ where the tree T is a linear order containing n literal agents {Aε, A1, A11, ..., A11...1}, i.e. a linear LAH. Deciding whether there is a preferred answer set of a linear LAH containing a literal is Σn+1-complete, i.e. Σd(T)+2-complete, as is shown in [19]. Furthermore, deciding whether every preferred answer set of a linear LAH contains a literal is Πd(T)+2-complete [19]. Hardness for AHs then follows immediately from the hardness of linear LAHs.
⁴ The depth of the tree is assumed to be bounded by a constant.
Theorem 8. The problem of deciding, given an AH ⟨P, T⟩ and a literal l, whether there exists a preferred answer set containing l is Σd(T)+2-hard. Deciding whether every preferred answer set contains l is Πd(T)+2-hard.
Proof. Checking whether there is a preferred answer set containing l for a linear LAH ⟨P, T⟩ is Σd(T)+2-complete, and since a linear LAH is an AH, the result follows.
The second problem can be similarly shown to be Πd(T)+2-hard. ⊓⊔
The following is immediate from Theorems 7 and 8.
Corollary 1. The problem of deciding, given an arbitrary AH ⟨P, T⟩ and a literal l, whether there is a preferred answer set containing l is Σd(T)+2-complete. On the other hand, deciding whether every preferred answer set contains l is Πd(T)+2-complete.
4 Relationship with Weak Constraints
Weak constraints were introduced in [9] as a relaxation of the concept of a constraint.
Intuitively, a weak constraint is allowed to be violated, but only as a last resort, meaning
that one tries to minimize the number of violated constraints. Additionally, weak constraints may be hierarchically layered by means of a totally ordered set of sets of weak
constraints W = {W1 , W2 , . . . , Wn }, where it is assumed that Wi < Wi+1 , 1 ≤ i < n,
if the weak constraints in Wi are more important than the ones in Wi+1 . Intuitively, one
first chooses the answer sets that minimize the number of violated constraints in the
most important W1 , and then, among those, one chooses the extended answer sets that
minimize the number of violated constraints in W2 , etc.
Formally, a weak logic program (WLP) is a pair ⟨P, W⟩ where P is a program, and W is a totally ordered set of sets of weak constraints, specified syntactically as constraints ← β.
Definition 3. Let ⟨P, W = {W1, ..., Wn}⟩ be a WLP. An extended answer set M of P is preferable up to W1 if no extended answer set N of P exists such that |V^N_{W1}| < |V^M_{W1}|, where V^X_{Wi} are the weak constraints in Wi that are violated by an interpretation X. An extended answer set M of P is preferable up to Wi, 1 < i ≤ n, if
– M is preferable up to Wi−1, and
– there is no N, preferable up to Wi−1, such that |V^N_{Wi}| < |V^M_{Wi}|.
An extended answer set M of P is preferred if it is preferable up to Wn.
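Operationally, Definition 3 filters the extended answer sets layer by layer; a compact sketch in our running notation, where weak constraints are (None, pos, naf) triples as before:

def violated(M, Wi):
    # the weak constraints of Wi whose body holds in M, i.e. V^M_{Wi}
    return [(h, pos, naf) for (h, pos, naf) in Wi if pos <= M and not (naf & M)]

def wlp_preferred(answer_sets, layers):
    candidates = list(answer_sets)
    for Wi in layers:                       # W1 first: the most important layer
        best = min(len(violated(M, Wi)) for M in candidates)
        candidates = [M for M in candidates if len(violated(M, Wi)) == best]
    return candidates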
In [9] a Datalog^not program LP is used⁵, disallowing empty heads and classical negation, but allowing for a set of strong constraints S. Clearly, this is subsumed by Definition 3, by taking P = LP ∪ S, and noting that the extended answer set semantics reduces to the answer set semantics, due to the absence of classical negation. Although the preferred models are defined in [9] by means of an objective function that has to be minimized, they are equivalent[12] to the ones resulting from Definition 3.
⁵ The general mechanism is introduced with Datalog^{∨,not}-programs, which allow for disjunction in the head.
The semantics of weak constraints, with preferability up to certain levels, appears
very similar to our preferability notion in an agent hierarchy. However, due to the use
of cardinality, deciding whether a literal l is contained in some preferred answer set of a WLP is ∆^P_2-complete. As ∆^P_2 ⊆ Σ^P_2, agent hierarchies of depth 0 suffice to capture WLPs. More specifically, we show that a single agent can solve the problem.
Example 3. Take the weak logic program ⟨P, {W1, W2}⟩, with the program P consisting of the rules a ←, ¬a ←, b ←, and ¬b ←, and W1 = {← a}, W2 = {← ¬a, ← ¬b}. We have 4 extended answer sets M1 = {a, b}, M2 = {a, ¬b}, M3 = {¬a, b}, and M4 = {¬a, ¬b}, of which M3 and M4 are preferable up to W1, and only M3 is preferable up to W2. Indeed, |V^{M1}_{W1}| = |V^{M2}_{W1}| = 1, |V^{M3}_{W1}| = |V^{M4}_{W1}| = 0, |V^{M1}_{W2}| = 0, |V^{M2}_{W2}| = |V^{M3}_{W2}| = 1, and |V^{M4}_{W2}| = 2. We define the set WC as the rules c^1_1 ← a, c^2_1 ← ¬a, and c^2_2 ← ¬b, identifying the weak constraints and the level on which they appear, and rules counting the number of violated constraints in a Wi, for 0 ≤ l ≤ k:

co(1, 0, wi) ← not c^i_1          co(2, l, w2) ← co(1, l, w2), not c^2_2
co(1, 1, wi) ← c^i_1              co(2, l+1, w2) ← co(1, l, w2), c^2_2

Intuitively, the third argument in a co/3 literal identifies the particular Wi we are looking at, the first argument shows the number of constraints in Wi that have already been considered, and the second argument effectively counts the number of violated constraints in Wi. Further, WC also contains the rules defining the number of violated constraints in each set of weak constraints, i ∈ {0, 1}, j ∈ {0, 1, 2}: co(i, w1) ← co(1, i, w1) and co(j, w2) ← co(2, j, w2).
The order < on literals is defined as follows: co(0, w1) < co(1, w1) < co(0, w2) < co(1, w2) < co(2, w2) < R, with R the extended literals in L*_{P∪WC} other than the co/2 atoms. Intuitively, the w1 constraints are more important than the w2 constraints, and hence appear below them in the order, and, for each wi, one would rather have a low count than a high count, since this means fewer violated constraints. One can check that the preferred answer set of the literal agent A = < for P ∪ WC is M3′ = M3 ∪ {c^2_1, co(1, 0, w1), co(1, 1, w2), co(2, 1, w2), co(0, w1), co(1, w2)}.
Formally, we have the following result, where the weak constraints in a Wj are assumed to be numbered and explicitly tagged with a superscript identifying Wj, i.e. Wj = {← β^j_1, ..., ← β^j_n}.

Theorem 9. Let ⟨P, W = {W1, ..., Wn}⟩ be a weak logic program. M is a preferred answer set of ⟨P, W⟩ iff, for all 1 ≤ j ≤ n,

M′ = M ∪ {c^j_i | M |= β^j_i} ∪ {co(1, α, wj) | c^j_1 ∉ M′ ⇒ α = 0, c^j_1 ∈ M′ ⇒ α = 1}
       ∪ {co(k+1, α, wj) | co(k, l, wj) ∈ M′, 0 ≤ l ≤ k < |Wj| ∧ [c^j_{k+1} ∉ M′ ⇒ α = l, c^j_{k+1} ∈ M′ ⇒ α = l+1]}
       ∪ {co(m, wj) | co(|Wj|, m, wj) ∈ M′}

is a preferred answer set of the literal agent A for P ∪ WC, where WC, for all 1 ≤ j ≤ n, consists of the rules c^j_i ← β^j_i, co(1, 0, wj) ← not c^j_1, co(1, 1, wj) ← c^j_1, and

co(k+1, l, wj) ← co(k, l, wj), not c^j_{k+1}          with 0 ≤ l ≤ k < |Wj|
co(k+1, l+1, wj) ← co(k, l, wj), c^j_{k+1}
together with the rules co(m, wj) ← co(|Wj|, m, wj), 0 ≤ m ≤ |Wj|, and the order <A defined as co(0, w1) <A co(1, w1) <A ... <A co(|W1|, w1) <A ... <A co(0, wn) <A co(1, wn) <A ... <A co(|Wn|, wn).
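The counting rules in WC are easy to generate; here is our sketch for one layer Wj (the atom spellings "ci^j" and "co(...)" are our flattening of the paper's notation, and Wj is a list of weak constraint bodies (pos, naf)).

def counting_rules(j, Wj):
    WC, n = [], len(Wj)
    for i, (pos, naf) in enumerate(Wj, start=1):
        WC.append(("c%d^%d" % (i, j), pos, naf))              # ci^j <- beta_i^j
    WC.append(("co(1,0,w%d)" % j, set(), {"c1^%d" % j}))      # co(1,0,wj) <- not c1^j
    WC.append(("co(1,1,w%d)" % j, {"c1^%d" % j}, set()))      # co(1,1,wj) <- c1^j
    for k in range(1, n):
        c_next = "c%d^%d" % (k + 1, j)
        for l in range(0, k + 1):
            prev = "co(%d,%d,w%d)" % (k, l, j)
            # constraint k+1 not violated: count stays at l; violated: count becomes l+1
            WC.append(("co(%d,%d,w%d)" % (k + 1, l, j), {prev}, {c_next}))
            WC.append(("co(%d,%d,w%d)" % (k + 1, l + 1, j), {prev, c_next}, set()))
    for m in range(0, n + 1):                                 # co(m,wj) <- co(|Wj|,m,wj)
        WC.append(("co(%d,w%d)" % (m, j), {"co(%d,%d,w%d)" % (n, m, j)}, set()))
    return WC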
A more general approach, in the spirit of rule and literal agents, is to allow agents
to prefer the satisfaction of certain weak constraints over the satisfaction of other ones.
A weak logic program then becomes a pair ⟨P, W⟩, where P is a program and W is a set of weak constraints. A weak agent for ⟨P, W⟩ corresponds to a well-founded strict partial order on W, which induces an order ⊑ among interpretations M and N of P such that M ⊑ N iff ∀w2 ∈ WN \WM · ∃w1 ∈ WM \WN · w1 < w2, where WX are the weak constraints in W that are satisfied by X, mirroring Definition 1 for rule agents (note that the latter are different from weak agents since RAs require the satisfaction of all constraints in all extended answer sets).
The extended answer sets of P are, by definition, the extended answer sets of a weak agent A for ⟨P, W⟩, and preferred answer sets are defined as the minimal extended answer sets w.r.t. ❁. Note that a preferred answer set M of a weak agent A for ⟨P, W⟩ has a minimal set of violated constraints, i.e. there is no extended answer set N of A such that W \WN ⊂ W \WM.
For a program P, define the extended program E(P) as P with the rules a ← β
replaced by a ← β, not ¬a. From Theorem 4 in [22] we have that the extended answer
sets of P are exactly the answer sets of E(P ). We can then rewrite a weak agent as a
rule agent by introducing for each weak constraint w : ← β rules w ← β and ¬w ← β
such that w is in an answer set if the constraint is violated.
Theorem 10. Let Aw be a weak agent for a WLP ⟨P, W⟩. M is a preferred answer set of Aw for ⟨P, W⟩ iff M′ = M ∪ {w | w ∈ W, M ⊭ w} is a preferred answer set of the RA Ar for E(P) ∪ WC with WC = {w ← β; ¬w ← β; ← β, not w | w : ← β ∈ W} and ¬w1 ← β1 <Ar ¬w2 ← β2 iff w1 <Aw w2.
Moreover, weak agents are as expressive as rule agents.
Theorem 11. Let P be a program and Ar a RA for P . M is a preferred answer set of
Ar for P iff M is a preferred answer set of the weak agent Aw for the WLP ⟨P, W⟩,
with W = { ← β, not α | α ← β ∈ P } and ← β1 , not α1 <Aw ← β2 , not α2 iff
α1 ← β1 <Ar α2 ← β2 .
Weak agents, placed in a hierarchy, then allow for an intuitive decision making
process based on satisfaction and violation of weak constraints. The complexity of weak
agent hierarchies can easily be deduced from the reductions from and to rule agent
hierarchies, with Theorems 10 and 11 and their extensions for hierarchies.
Theorem 12. The problem of deciding, given a weak agent hierarchy ⟨⟨P, W⟩, T⟩ and a literal l, whether there is a preferred answer set containing l is Σd(T)+2-complete. On the other hand, deciding whether every preferred answer set contains l is Πd(T)+2-complete.
5 Conclusions and Directions for Further Research
In this paper, we introduced a system suitable to model hierarchical decision making.
We equip agents with a preference relation on the available knowledge and allow them
to cooperate with each other in a hierarchical fashion. Preferred solutions of these systems naturally correspond to preferred decisions regarding the problem.
Initially, we defined two types of preference agents: rule agents express a preference over rules, while literal agents express a preference over extended literals, indicating which ones they would rather have in a solution. We showed that mixed AHs, containing both types of agents,
can be reduced to hierarchies consisting only of rule or literal agents. It turns out that
these AHs cover the polynomial hierarchy.
Finally, we showed that layered weak constraints can be easily simulated by a single
agent. Furthermore, we generalized the concept of layered weak constraints to weak
agent hierarchies, which are equivalent to rule agent hierarchies.
Future work comprises a dedicated implementation of the approach, using existing
answer set solvers. E.g., we could generate an extended answer set which is then improved recursively by a set of augmented programs, corresponding to the agents in the
hierarchy, generating strictly better solutions. A fixpoint of this procedure then corresponds to a preferred answer set of the system.
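One possible shape for such an improvement loop, purely as an illustration of the idea (both helper functions are hypothetical and would wrap an existing answer set solver):

def improve_to_fixpoint(initial_answer_set, strictly_better):
    # initial_answer_set() returns some extended answer set of the shared program;
    # strictly_better(M) asks the solver for a solution the hierarchy prefers to M,
    # or None if there is none.
    M = initial_answer_set()
    while M is not None:
        N = strictly_better(M)
        if N is None:
            return M                 # fixpoint reached: M is a preferred answer set
        M = N
    return None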
References
1. José Júlio Alferes and Luís Moniz Pereira. Updates plus preferences. In Manuel Ojeda-Aciego, Inma P. de Guzmán, Gerhard Brewka, and Luís Moniz Pereira, editors, European Workshop, JELIA 2000, volume 1919 of Lecture Notes in Artificial Intelligence, pages 345–
360, Malaga, Spain, September–October 2000. Springer Verlag.
2. Chitta Baral. Knowledge Representation, Reasoning and Declarative Problem Solving. Cambridge University Press, 2003.
3. Chitta Baral and Michael Gelfond. Reasoning agents in dynamic domains. In Logic-based
artificial intelligence, pages 257–279. Kluwer Academic Publishers, 2000.
4. G. Brewka. Logic programming with ordered disjunction. In Proceedings of the 18th National Conference on Artificial Intelligence and Fourteenth Conference on Innovative Applications of Artificial Intelligence, pages 100–105, Edmonton, Canada, July 2002. AAAI
Press.
5. Gerhard Brewka and Thomas Eiter. Preferred answer sets for extended logic programs.
Artificial Intelligence, 109(1-2):297–356, April 1999.
6. Gerhard Brewka, Ilkka Niemela, and Tommi Syrjanen. Implementing ordered disjunction
using answer set solvers for normal programs. In Flesca et al. [14], pages 444–455.
7. Francesco Buccafurri, Wolfgang Faber, and Nicola Leone. Disjunctive logic programs with
inheritance. In Danny De Schreye, editor, Logic Programming: The 1999 International Conference, pages 79–93, Las Cruces, New Mexico, December 1999. MIT Press.
8. Francesco Buccafurri and Georg Gottlob. Multiagent compromises, joint fixpoints, and stable models. In Antonis C. Kakas and Fariba Sadri, editors, Computational Logic: Logic
Programming and Beyond, Essays in Honour of Robert A. Kowalski, Part I, volume 2407 of
Lecture Notes in Computer Science, pages 561–585. Springer, 2002.
9. Francesco Buccafurri, Nicola Leone, and Pasquale Rullo. Strong and weak constraints in disjunctive datalog. In Proceedings of the 4th International Conference on Logic Programming
(LPNMR ’97), pages 2–17, 1997.
10. Francesco Buccafurri, Nicola Leone, and Pasquale Rullo. Disjunctive ordered logic: Semantics and expressiveness. In Anthony G. Cohn, Lenhard K. Schubert, and Stuart C. Shapiro,
editors, Proceedings of the 6th International Conference on Principles of Knowledge Representation and Reasoning, pages 418–431, Trento, June 1998. Morgan Kaufmann.
11. Francesco Buccafurri, Nicola Leone, and Pasquale Rullo. Enhancing disjunctive datalog by
constraints. Knowledge and Data Engineering, 12(5):845–860, 2000.
12. Wolfgang Faber. Disjunctive datalog with strong and weak constraints: Representational and
computational issues. Master's thesis, Institut für Informationssysteme, Technische Universität Wien, 1998.
13. Wolfgang Faber, Nicola Leone, and Gerald Pfeifer. Representing school timetabling in a
disjunctive logic programming language. In Proceedings of the 13th Workshop on Logic
Programming (WLP ’98), 1998.
14. Sergio Flesca, Sergio Greco, Nicola Leone, and Giovambattista Ianni, editors. European
Conference on Logics in Artificial Intelligence (JELIA '02), volume 2424 of Lecture Notes in Artificial Intelligence, Cosenza, Italy, September 2002. Springer Verlag.
15. D. Gabbay, E. Laenens, and D. Vermeir. Credulous vs. Sceptical Semantics for Ordered
Logic Programs. In J. Allen, R. Fikes, and E. Sandewall, editors, Proceedings of the 2nd
International Conference on Principles of Knowledge Representation and Reasoning, pages
208–217, Cambridge, Mass, 1991. Morgan Kaufmann.
16. Michael Gelfond and Vladimir Lifschitz. The stable model semantics for logic programming.
In Robert A. Kowalski and Kenneth A. Bowen, editors, Logic Programming, Proceedings of
the Fifth International Conference and Symposium, pages 1070–1080, Seattle, Washington,
August 1988. The MIT Press.
17. Robert A. Kowalski and Fariba Sadri. Logic programs with exceptions. In David H. D.
Warren and Peter Szeredi, editors, Proceedings of the 7th International Conference on Logic
Programming, pages 598–613, Jerusalem, 1990. The MIT Press.
18. Els Laenens and Dirk Vermeir. A logical basis for object oriented programming. In Jan
van Eijck, editor, European Workshop, JELIA 90, volume 478 of Lecture Notes in Artificial Intelligence, pages 317–332, Amsterdam, The Netherlands, September 1990. Springer
Verlag.
19. Davy Van Nieuwenborgh, Stijn Heymans, and Dirk Vermeir. On programs with linearly
ordered multiple preferences, 2004. Accepted at ICLP ’04.
20. Christos H. Papadimitriou. Computational Complexity. Addison Wesley, 1994.
21. Chiaki Sakama and Katsumi Inoue. Representing priorities in logic programs. In Michael J.
Maher, editor, Proceedings of the 1996 Joint International Conference and Symposium on
Logic Programming, pages 82–96, Bonn, September 1996. MIT Press.
22. Davy Van Nieuwenborgh and Dirk Vermeir. Preferred answer sets for ordered logic programs. In Flesca et al. [14], pages 432–443.
23. Davy Van Nieuwenborgh and Dirk Vermeir. Order and negation as failure. In Catuscia
Palamidessi, editor, ICLP, volume 2916 of Lecture Notes in Computer Science, pages 194–
208. Springer, 2003.
24. Davy Van Nieuwenborgh and Dirk Vermeir. Ordered diagnosis. In Proceedings of the 10th
International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR2003), volume 2850 of Lecture Notes in Artificial Intelligence, pages 244–258, Almaty, Kazakhstan, 2003. Springer Verlag.
25. Davy Van Nieuwenborgh and Dirk Vermeir. Ordered programs as abductive systems.
In Proceedings of the APPIA-GULP-PRODE Conference on Declarative Programming
(AGP2003), pages 374–385, Regio di Calabria, Italy, 2003.
26. Marina De Vos and Dirk Vermeir. Logic programming agents playing games. In Research
and Development in Intelligent Systems XIX (ES2002), BCS Conference Series, pages 323–
336. Springer-Verlag, 2002.
27. Kewen Wang, Lizhu Zhou, and Fangzhen Lin. Alternating fixpoint theory for logic programs
with priority. In Proceedings of the First International Conference on Computational Logic
(CL2000), volume 1861 of Lecture Notes in Computer Science, pages 164–178, London,
UK, July 2000. Springer.