Group Conflict as Social Contradiction
Daniele Porello, Emanuele Bottazzi, Roberta Ferrario
Institute of Cognitive Sciences and Technologies, CNR
{daniele.porello, emanuele.bottazzi, roberta.ferrario}@loa.istc.cnr.it
Abstract. This paper is a contribution to the development of an ontology of conflict. In particular, we single out and study a peculiar notion of group conflict, which we suggestively label ‘social contradiction’. In order to do so, we shall introduce and discuss the methodology of social choice theory, since it allows for defining the notion of collective attitude that may emerge from a number of possibly divergent individual attitudes. We shall see how collective attitudes lead to a specific notion of group and, therefore, to a specific notion of group conflict. As a conclusion, we shall present our abstract analysis of group conflicts and position social contradiction with respect to other types of conflict.
1 Introduction
This paper provides a number of fundamental elements for developing an ontologically grounded classification of group conflicts. Understanding groups’ behavior is a challenging task that involves a number of disciplines, such as game theory, sociology, and the behavioral sciences. We are here interested in the perspective provided by logic and computational disciplines; in particular, our methodology is related to the analysis of the interaction of heterogeneous agents and groups that has been developed in the multiagent systems community [28]. The model of agency that is presupposed in this approach is the belief-desire-intention (BDI) model, which allows for developing a mathematical representation of individual actions, plans, goals, etc. in their relationship with other agents [7, 12].
We shall introduce the methodology of social choice theory in order to formally
grasp the relationship between the beliefs, desires, intentions, preferences, and goals of the
individuals belonging to the group and the corresponding attitudes that we may want to
ascribe to the group itself. Social choice theory (SCT) is a branch of welfare economics
that emerged at the beginning of the past century and that studies how a collective
choice can be derived from individual possibly conflicting choices, by means of fair
aggregation procedures [26, 11]. SCT has been successfully applied in economics, political science, and recently in computer science and AI. In particular, in the area of
multiagent systems, SCT has provided the key concepts for understanding and defining
notions such as group information, group choice, group intention, etc. [6].
SCT views groups as collectives of individuals who may have different preferences, opinions, and desires, and who have to decide on and agree upon a single collective stance. An example of such a group is a parliament, in which the representatives may express a number of divergent positions and settle possible disagreements by voting.
Moreover, the board of stakeholders of a corporation deciding among possible courses of action can be analyzed by means of social choice theory. More generally, any assembly of individuals who agree on the procedure to settle disagreement can
be studied by means of social choice theoretic methods. An important difference that
we want to stress is that SCT takes a different perspective on groups with respect to
Game Theory [18, 20]. SCT is interested in the behavior of the group as a single entity,
whereas the focus of game theory is on the interaction of a number of self-interested
agents. In this sense, the notion of group that social choice theory defines and investigates imposes a strict form of social cohesion. Although SCT presupposes at least
the implicit agreement on the procedure to settle disagreement, due to the variety of
aggregation procedures that can be defined and discussed, SCT methods can deal with
a wide spectrum of groups, such as parliaments, organizations, corporations, assemblies, associations, etc. SCT can be considered as a general theory of the aggregation of
propositional attitudes in the philosophical sense, i.e. beliefs, desires, intentions, preferences, judgments. Once we are capable of modelling such attitudes in a clear formal
language, we can define and evaluate the proper aggregation procedures by means of
SCT techniques [8]. Moreover, SCT has been applied to model groups’ intentions, for example in [4].
It is important to make the level of our analysis explicit: We are interested in knowledge representation and in particular we propose a formal and general methodology to
represent conflict. We assume that conflicts are always about something. Thus, we shall
introduce a formal language to represent possible matters of conflicts, such as preferences, beliefs, judgments, desires, goals, intentions. We shall then propose an abstract
notion of conflict between matters by using the formal concept of contradiction between the formal representations of the matter of conflict. For example, a conflict of
opinions is represented by the contradiction between a proposition A and a proposition not A; an actual conflict that emerges between two agents concerning their opinions can then be described by assuming that one agent is claiming A whereas the other agent is claiming not A.
The motivation for using SCT in analyzing conflict is that it allows for singling out
a peculiar notion of conflict of groups. Although it may seem at first that the agreement
forced by SCT on the procedure to settle possible conflicts is sufficient for guaranteeing
that every conflict among the members of the group can be settled, quite surprisingly,
this is not the case. There are situation such that, although the individuals that are members of the group agree on the norms or procedures that settle possible conflicts between
individuals, nevertheless the actions, beliefs, judgments, preferences of the group turn
out to be in a peculiar situation of conflict, namely they turn out to be contradictory.
Situations like these actually occurred in the deliberative practice of the US Supreme
Court [13] and it is important to understand what type of conflict is peculiar to those
situations. In such cases, being members of the court, the judges agreed to settle possible divergences by majority voting. Hence, the procedure to settle possible conflicts
of opinions was clearly accepted by every member of the group. Nevertheless, the outcome of the procedure turned out to be in contrast with some very basic principles of
rationality. This is the specific type of conflict that our model aims to capture.
The remainder of this paper is organized as follows. In Section 2, we shall introduce some basic elements of SCT and present the cases of group conflict that we want
to treat. In Section 3, we focus on judgment aggregation, a recent area in SCT, and we
present it as a general theory for aggregating propositional attitudes. Judgment aggregation will provide the formal basis for presenting the peculiar notion of group conflict
that we are going to analyze. In Section 4, we present the elements of our analysis of
groups and conflicts. Section 5 capitalizes on the conceptual methodology of the previous sections and presents a taxonomy of group conflicts. Our conceptual analysis can
be considered a preliminary step towards the integration of a taxonomy of conflict into
a foundational ontology such as DOLCE [16]. Section 6 concludes and points at some
future applications. In particular, we believe that our approach is particularly useful if
implemented in complex socio-technical systems [9], as conflicts may show up between
various types of heterogeneous information, possibly originating both from humans and
artificial devices. The abstract level of representation that we pursue in this paper can
therefore deal with information coming from heterogeneous sources, thus it can be applied to model the rich informational entanglement that characterizes socio-technical
systems.
2 Social choice theory: informal presentation
The seminal result in SCT has been provided by Kenneth Arrow’s investigation of paradoxes in preference aggregation, namely the problem of aggregating a number of individual conflicting preferences into a social preference. Suppose that three parties in
a parliament (label them 1, 2 and 3) have conflicting preferences over three possible
alternative policies: a: “promote workers’ salaries” , b: “decrease entrepreneurs’ taxation”, and c: “increase unemployment benefits”. Suppose agents’ preferences can be
represented by the following rankings of the options. Mathematically, preferences are
assumed to be linear orders, thus individual preferences are supposed to be transitive:
if an agent prefers x to y and y to z, then she/he should prefer x to z; irreflexive: an agent does not prefer x over x; and complete: for any pair of alternatives, agents know how to rank them, x is preferred to y or y is preferred to x.¹ Profiles are lists of the
divergent points of view of the three individuals, as in the following example:
1: a > b > c
2: b > a > c
3: a > c > b
In the scenario above, the agents have conflicting preferences and there is no agreement on which is the best policy to be implemented. Since the policies are alternative,
1 and 3 would pursue a, whereas 2 would pursue b. The example is supposed to model
a parliament, thus the possible conflicts have to be solved, as we assume that the parliament as a whole should pursue one of the alternative policies. Thus, we have to ask
¹ These conditions are to be taken in a normative way. They are of course not descriptively adequate, as several results in behavioral game theory show. However, the point of this approach is to show that even when individuals are fully rational, i.e. they conform to the rationality criteria that we have just introduced, the aggregation of their preferences is problematic.
what the preference of the group is, namely the preference that we can ascribe to the parliament composed of 1, 2 and 3. However, at this point, we cannot ascribe a single
preference to the group without assuming a rule to settle disagreement. Suppose now
that the individuals agree on a procedure to settle their differences; for example, they
agree on voting by majority on pairs of options. Thus, agents elect the collective option
by pairwise comparisons of alternatives. In our example, a over b gets two votes (by 1
and 3), b over c gets two votes (by 1 and 2) and a over c gets three votes. The majority
rule defines then a social preference a > b > c that can be ascribed to the group as the
group preference.
The famous Condorcet’s paradox shows that it is not always the case that individual
preferences can be aggregated into a collective preference. Take the following example.
1: a > b > c
2: b > c > a
3: c > a > b
Suppose agents again vote by majority on pairwise comparisons. In this case, a
is preferred to b because of 1 and 3, b is preferred to c because of 1 and 2, thus, by
transitivity, a has to be preferred to c. However, by majority also c is preferred to a.
Thus, the social preference is not “rational”, according to our definition of rationality,
as it violates transitivity.
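To make the mechanics of the paradox concrete, the following minimal sketch (in Python; the encoding and function names are ours and purely illustrative) computes the pairwise majority relation of a profile of strict rankings and checks whether it yields a transitive social preference.

from itertools import combinations, permutations

def pairwise_majority(profile):
    # A profile is a list of strict rankings, each a tuple ordered from
    # most to least preferred; (x, y) in the result means a majority
    # prefers x to y.
    alts = profile[0]
    beats = set()
    for x, y in combinations(alts, 2):
        votes_for_x = sum(1 for r in profile if r.index(x) < r.index(y))
        if 2 * votes_for_x > len(profile):
            beats.add((x, y))
        elif 2 * votes_for_x < len(profile):
            beats.add((y, x))
    return beats

def is_transitive(beats, alts):
    # The majority relation yields a rational social preference only if
    # it contains no cycle x > y > z > x.
    return all(not ((x, y) in beats and (y, z) in beats and (z, x) in beats)
               for x, y, z in permutations(alts, 3))

# Condorcet's example from the text:
profile = [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]
beats = pairwise_majority(profile)
print(beats)                        # {('a','b'), ('b','c'), ('c','a')}: a cycle
print(is_transitive(beats, "abc"))  # False: no rational social preference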
Kenneth Arrow’s famous impossibility theorem states that Condorcet’s paradoxes
are not an unfortunate accident of majority aggregation; rather, they may occur for any aggregation procedure that respects some intuitive fairness constraints [2]. In the next
section, we shall discuss in more detail the formal treatment of the intuitions concerning
fairness and we shall define a number of properties that provide normative desiderata
for the aggregation procedure.
A recent branch of SCT, Judgment Aggregation (JA) [15] studies the aggregation
of logically connected propositions provided by heterogeneous agents into collective
information. The difference with preference aggregation is that in this case agents argue
and provide reasons for their choices instead of simply reporting their preferences. For
example, take a committee composed of three members, who have to decide whether
to implement a policy B: “we should increase workers’ salaries” and the considerations
that may support such conclusion, such as A: “low salaries cause crisis” and the material
implication A → B: “if low salaries cause crisis, then we should increase workers’
salaries”. Now suppose members hold different opinions, as follows.
     A     A → B   B
1    yes   yes     yes
2    yes   no      no
3    no    yes     yes
In this case, the conflict may emerge from the fact that individuals have divergent opinions on what is the best thing to do and no shared rule to settle such conflicts of opinions. If one asks what the opinion of the group is, one may simply answer that, due to such divergences, there is no group opinion. However, and this is the claim of our paper, any statement concerning collective information depends on the procedure that is assumed
to settle disagreement. If we do not assume any procedure to solve conflicts, we simply
say that individual conflicts may possibly arise, but we leave them as they are. If individuals agree that unanimity is the rule to elect a collective opinion, in the example
above, neither A, B nor A → B is elected as the opinion of the group. If the majority
rule is used, then the collective opinion is given by A (voted by 1 and 2), A → B (voted
by 1 and 3), and B (voted by 1 and 3).
Analogously to the case of Condorcet’s paradox in preference aggregation, situations of inconsistent aggregation of judgments have been identified. These paradoxical situations have been labelled in the literature doctrinal paradoxes or discursive
dilemmas. It is important to notice that such paradoxical situations actually occurred
in the deliberative practice of the US Supreme Court [13]. This problem has been perceived as a serious threat to the legitimacy of group deliberation and it has been considered a seminal result in the recent debate on the rationality of democratic decisions
[21, 14].
We show an example of such paradox by slightly modifying the previous example.
Suppose agent 3 rejects B because she/he rejects the premise A.
     A     A → B   B
1    yes   yes     yes
2    yes   no      no
3    no    yes     no
By majority, the group accepts A, because of 1 and 2, and A → B, because of 1 and
3, but it rejects B. Thus, the group collectively accepts the premises of modus ponens
while rejecting the consequence. If we assume that rejecting a proposition is equivalent to accepting its negation ¬B, then, even if individual opinions are each logically
coherent, the collective set A, A → B, and ¬B is inconsistent.
Again, doctrinal paradoxes apply to any aggregation procedure that respects some basic fairness desiderata; this is the meaning of the theorem proved by Christian List and Philip Pettit [14]. It is important to stress once again that doctrinal paradoxes, far from being curious examples envisaged by means of some thought experiment, have actually occurred in the deliberative practice of judicial courts. In particular, the paradox
has been perceived as a serious threat to the legitimacy of the decision of the Court by
the judges of the Court themselves. A contradictory outcome, in that case, amounts to
providing an inconsistent sentence that can be contested by the defendant who is being
charged on that ground. Thus, it is important to provide a conceptual characterization
of what type of conflict the doctrinal paradox exhibits, as the problem of understanding
to which agent the conflict can be ascribed is not of immediate solution.
Summing up the content of this section, we have seen how SCT allows for identifying and formalizing an important form of group conflict that applies in normative
settings and that is the specific notion of conflict that we want to analyze in this paper.
3 A model of judgment aggregation
We present the main elements of the formal approach of judgment aggregation (JA). The
reason why we focus on JA is twofold: on the one hand, it has been taken to be more
general than preference aggregation [15], on the other hand, it has been claimed that JA
can provide a general theory of aggregation of propositional attitudes [8]. Therefore, JA
provides the proper level of abstraction for our abstract model of types of conflict. The
content of this section is based on [15] and [10] and builds upon them.
Let P be a set of propositional variables that represent the contents of the matter
under discussion by a number of agents. The language LP is the set of propositional
formulas built from P by using the usual logical connectives ¬, ∧, ∨, →, ↔.
Definition 1. An agenda is a finite nonempty set Φ ⊆ LP that is closed under (non-double) negations: if A ∈ Φ, then ¬A ∈ Φ.
An agenda is the set of propositions that are evaluated by the agent in a given situation. In the examples of the previous section, the agenda is given by A, B, A → B,
¬A, ¬B, ¬(A → B). The fact that, given a proposition, the agenda must contain also
its negation aims to model the fact that agents may approve or reject a given matter.
The rejection of a matter A is then modeled by an agent accepting ¬A. In this model,
for the sake of simplicity, we do not present the case of abstention; however, it is possible to account for such cases by slightly generalizing our framework.
We define individual judgment sets as follows.
Definition 2. A judgment set J on an agenda Φ is a subset of the agenda J ⊆ Φ.
We call a judgment set J complete if A ∈ J or ¬A ∈ J, for all formulas A in
the agenda Φ, and consistent if there exists an assignment that makes all formulas in
J true, namely we assume the notion of consistency that is familiar from propositional
logic.
These constraints model a notion of rationality of individuals, i.e. individuals express judgment sets that are rational in the sense that they respect the rules of (classical)
logic.²
Denote with J(Φ) the set of all complete and consistent subsets of the agenda Φ, namely J(Φ) denotes the set of all possible rational judgment sets on the agenda Φ. Given a set N = {1, . . . , n} of individuals, denote with J = (J1, . . . , Jn) a profile of judgment sets, one for each individual. A profile is intuitively a list of all the judgments of the agents involved in the collective decision at issue. For example, the profile involved in the paradoxical example of the previous section is the following: ({A, A → B, B}, {A, ¬(A → B), ¬B}, {¬A, A → B, ¬B}).
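To illustrate these definitions, the following toy encoding (Python; the representation of formulas as truth functions is our illustrative choice, not part of the formal framework) enumerates J(Φ) for the agenda of the running example by brute force over truth assignments.

from itertools import product

# Each agenda formula is paired with its truth function over an
# assignment v of the atoms A and B.
agenda = {
    "A":       lambda v: v["A"],
    "~A":      lambda v: not v["A"],
    "A->B":    lambda v: (not v["A"]) or v["B"],
    "~(A->B)": lambda v: v["A"] and not v["B"],
    "B":       lambda v: v["B"],
    "~B":      lambda v: not v["B"],
}

def consistent(J):
    # A set of formulas is consistent iff some truth assignment
    # satisfies all of its members.
    return any(all(agenda[f](dict(zip("AB", bits))) for f in J)
               for bits in product([True, False], repeat=2))

def rational_judgment_sets():
    # J(Phi): pick exactly one member of each formula/negation pair
    # (completeness) and keep the consistent choices.
    pairs = [("A", "~A"), ("A->B", "~(A->B)"), ("B", "~B")]
    return [set(c) for c in product(*pairs) if consistent(set(c))]

print(rational_judgment_sets())  # the 4 rational judgment sets on this agenda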
We can now introduce the concept of an aggregation procedure, which is, mathematically,
a function. The domain of the aggregation procedure is given by J(Φ)n , namely, the set
of all possible profiles of individual judgments.
Definition 3. An aggregation procedure for agenda Φ and a set of n individuals is a
function F : J(Φ)n → P(Φ).
² Of course this may be a descriptively inadequate assumption. However, these requirements are to be understood in a normative way, e.g. we exclude that a representative would vote for a proposal A and a proposal ¬A at the same time. Moreover, the agenda may contain very simple logical propositions: as we shall see, it is sufficient to assume a very minimal reasoning capacity to get the paradoxical outcomes.
An aggregation procedure maps any profile of individual judgment sets to a single collective judgment set (an element of the powerset of Φ). Given the definition of the
domain of the aggregation procedure, the framework presupposes individual rationality: all individual judgment sets are complete and consistent. Note that we have not yet
put any constraint on the collective judgment set, i.e. the result of aggregation, so that
at this point the procedure may return an inconsistent set of judgments. This is motivated by our intention to study both consistent and inconsistent collective outcomes.
For example, in the doctrinal paradox of the previous section, the majority rule maps
the profile of individual judgments into an inconsistent set:
({A, A → B, B}, {A, ¬(A → B), ¬B}, {¬A, A → B, ¬B}) ↦ {A, A → B, ¬B}
The consistency of the output of the aggregation is defined by the following properties. An aggregation procedure F, defined on an agenda Φ, is said to be collectively
rational iff F is:
– complete if F (J) is complete for every J ∈ J(Φ)n ;
– consistent if F (J) is consistent for every J ∈ J(Φ)n ;
That is, collective rationality forces the outcome of the procedure to be rational in the same sense as individual rationality. Of course, the case of the doctrinal paradox
violates collective rationality.
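Continuing the toy encoding above (and reusing its agenda and consistent helpers), the majority rule can be written as an aggregation procedure F : J(Φ)n → P(Φ); on the discursive dilemma profile of Section 2 it visibly violates collective rationality.

def majority(profile):
    # F: J(Phi)^n -> P(Phi): elect every formula accepted by more than
    # half of the individual judgment sets.
    n = len(profile)
    return {f for f in agenda if 2 * sum(f in J for J in profile) > n}

# The discursive dilemma profile of Section 2:
profile = [{"A", "A->B", "B"},
           {"A", "~(A->B)", "~B"},
           {"~A", "A->B", "~B"}]
outcome = majority(profile)
print(outcome)               # {'A', 'A->B', '~B'}
print(consistent(outcome))   # False: collective rationality is violated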
We now introduce a number of axioms that provide a mathematical counterpart of our
intuition on what a fair aggregation procedure is. The following are the most important
axioms for JA discussed in the literature [14, 15]:
– Unanimity (U): If φ ∈ Ji for all i then φ ∈ F (J).
– Anonymity (A): For any profile J and any permutation σ : N → N we have
F (J1 , . . . , Jn ) = F (Jσ(1) , . . . , Jσ(n) ).
– Neutrality (N): For any φ, ψ in the agenda Φ and profile J ∈ J(Φ)n , if for all i we
have that φ ∈ Ji ⇔ ψ ∈ Ji , then φ ∈ F (J) ⇔ ψ ∈ F (J).
– Independence (I): For any φ in the agenda Φ and profiles J and J′ in J(Φ)n , if
φ ∈ Ji ⇔ φ ∈ Ji′ for all i, then φ ∈ F (J) ⇔ φ ∈ F (J′ ).
– Systematicity (S): For any φ, ψ in the agenda Φ and profiles J and J′ in J(Φ)n , if
φ ∈ Ji ⇔ ψ ∈ Ji′ for all i, then φ ∈ F (J) ⇔ ψ ∈ F (J′ ).
Unanimity entails that if all individuals accept a given judgment, then so should the collective. Anonymity states that all individuals should be treated equally by the aggregation procedure. Neutrality is a symmetry requirement for propositions, entailing that all the issues in the agenda have to be treated equally. Independence says that the collective acceptance of a proposition depends only on the pattern of individual acceptances: if a proposition is accepted by exactly the same individuals under two distinct profiles, then that proposition is collectively accepted either under both or under neither profile. These axioms express our intuitions concerning the fairness of the procedure; for example, (A) forces the procedure not to discriminate between individuals. This fairness condition may be used to model
the arguments of an agent for accepting to solve conflicts by means of such a procedure. Systematicity is simply the conjunction of Independence and Neutrality and has
been introduced separately as it is the condition used to prove the impossibility theorem
in judgment aggregation. The impossibility theorem of List and Pettit [14] is stated as
follows.
Theorem 1 (List & Pettit, 2002). There are agendas Φ such that there is no aggregation procedure F : J(Φ)n → P(Φ) that satisfies (A), (S), and collective rationality.
In particular, for any aggregation procedure that satisfies (A) and (S), there is a
profile of judgment sets that returns an inconsistent outcome. The majority rule, which we
have seen in the examples of Section 2, satisfies (A) and (S); accordingly, the discursive
dilemma shows a case of inconsistent aggregation.
Very simple agendas may trigger inconsistent outcomes, one example being the
agenda of the doctrinal paradox that we have presented in Section 2: {A, A → B, B,
¬A, ¬(A → B), ¬B}. Technically, any agenda that contains a minimal inconsistent set
of cardinality greater than 2, such as {A, A → B, ¬B}, may trigger a paradox. Thus, an
agenda with respect to which the majority rule always returns consistent outcomes is a
very simple agenda that contains, for example, only unconnected pairs of propositional
atoms and their negations. Hence, paradoxical outcomes are very likely to occur in any
complex social decision.
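The notion of a minimal inconsistent set can also be checked mechanically. The following fragment (again reusing consistent and agenda from the toy encoding of this section) searches an agenda for minimally inconsistent subsets; finding one of cardinality greater than 2 signals a paradox-prone agenda.

from itertools import combinations

def minimally_inconsistent_subsets(formulas):
    # A subset is minimally inconsistent if it is inconsistent while all
    # of its proper subsets are consistent.
    hits = []
    for k in range(2, len(formulas) + 1):
        for subset in combinations(formulas, k):
            s = set(subset)
            if not consistent(s) and all(consistent(s - {f}) for f in s):
                hits.append(s)
    return hits

big = [s for s in minimally_inconsistent_subsets(list(agenda)) if len(s) > 2]
print(big)   # includes {'A', 'A->B', '~B'}: the agenda is paradox-prone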
The methodology of JA can be extended to treat many voting procedures and characterize whether they may return inconsistent outcomes. Moreover, since the notion
of aggregation procedure is very abstract, one can in principle model more complex
procedures or norms, such as those that define decision making in organizations and
corporations.
4 Social attitudes and conflict as contradiction
We have seen that JA provides a precise mathematical modeling of the relationship
between individual judgments and collective judgments. The relationship is formalized
by means of an aggregation procedure and several properties of such aggregation can be
discussed and analyzed. Moreover, it is possible to characterize the situations that lead
to inconsistent outcomes. In this section, we introduce three notions that ground our
ontological analysis of conflicts, that is, the notion of propositional attitude, the notion
of conflict as contradiction, and the notion of social attitude.
4.1 Propositional attitudes in JA
Propositional attitudes have been widely discussed in the philosophical literature and,
roughly speaking, they express a relationship between an agent i and a propositional
content p. For example, an agent can believe, judge, desire, prefer, ought, ... p, where p
represents the content of the attitude. For our purposes, since the point of view of this work
is knowledge representation, propositional attitudes are important as they allow for distinguishing a sharable propositional content of an attitude from the agent to whom the
attitude is ascribed. Thus, we view individual propositional attitudes as sentences that
are publicly expressed and communicated to other agents. Moreover, by using propositional attitudes, we are assuming that the matter of conflict between two agents can be
in principle described by a third person in a sharable way.
We can introduce a formal language to represent how agents can communicate and
reason about their attitudes, by building upon the rich logical tradition in the representation of propositional attitudes. For example, beliefs can be represented in epistemic
modal logic [27]. Intentions can be modeled by using a number of techniques from multiagent systems. Moreover, ought sentences are widely studied in deontic logics. Preferences can be represented by means of a fragment of first-order logic: we introduce predicates Pab that represent the information “a is preferred to b”. The rationality constraints on preferences, i.e. transitivity, irreflexivity, and completeness, can be expressed by means of first-order formulas [22].
Therefore, general propositional attitudes can be in principle taken into account in
the framework of JA [8]. We briefly sketch how. It is enough to extend the logical
language that is used to model individual attitudes. For example, if we want to deal
with beliefs, we extend the agenda Φ that we have introduced in the previous section,
by adding individual belief operators in epistemic modal logic Bi A, standing for “The
agent i believes that A”.
Let A be a type of propositional attitude; we label LA the logical system for representing propositional attitudes of type A. That is, LA refers to the language used to represent propositional attitudes A and to the logical rules to reason about such attitudes, e.g. an axiomatic system for that logic. Accordingly, we define an agenda ΦA as a
subset of the language of LA . In the previous section, we have defined the possible sets
of individual judgments by means of J(Φ), namely we assumed that individual judgment sets are consistent and complete with respect to (classical) propositional logic. In
the general case, it is possible to define judgment sets that are rational with respect to
different logical systems [23]. We define JA (ΦA ) as the set of possible sets of attitudes
that satisfy the rationality constraints that are specific to A. For instance, in case A are
preferences, sets of preference attitudes have to respect transitivity. In case A are beliefs, they should be consistent, in the sense that an agent is not supposed to believe A
and ¬A at the same time, therefore we exclude sets containing both Bi A and Bi ¬A.
The general form of an aggregation procedure is a slight generalization of the one
introduced in the previous section. An aggregation procedure is a function from profiles
of individual attitudes to sets of collective attitudes: F : J(ΦA )n → P(ΦA ). The notion
of collective rationality again may change as we may add more specific constraints on
the type of attitudes at issue. For example, in preference aggregation we add the constraints on preference orders. Since each one of these extensions includes propositional
logic, the impossibility theorem still holds for the larger fragment. Thus, it is at least
in principle possible to extend the map of consistent/inconsistent aggregation to richer
languages.
4.2 Conflict as contradiction
Once we represent agents’ attitudes, we can introduce a general definition of the notion
of conflict. The notion of conflict that we define is placed at the level of the representation of propositional attitudes.
Given two sets of attitudes A and A′ of the same type A, we say that A is in conflict with A′ iff the set A ∪ A′ entails a contradiction in the formal system LA that
represents those attitudes. That is, the two sets of attitudes are inconsistent with respect
to LA . For instance, two conflicting judgment sets in the sense of the previous section
are simply two sets of propositions that are inconsistent with respect to propositional
logic, e.g. {A, B, C} and {¬A, B, C}. Moreover, two conflicting preferences are two sets of preferences that together entail a contradiction, such as {Pab, Pac, Pbc} and {Pba, Pac, Pbc}: Pab and Pba entail by transitivity Paa, which contradicts irreflexivity. Conflicting preferences or goals entail that they cannot be satisfied at the same time.
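This preference example can be verified mechanically. The sketch below (Python; an illustrative encoding of ours that represents the atoms Pxy as pairs) closes the union of two preference sets under transitivity and tests for a violation of irreflexivity.

def transitive_closure(prefs):
    # Close a set of pairs (x, y), read "x is preferred to y",
    # under transitivity.
    closure = set(prefs)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(closure):
            for (w, z) in list(closure):
                if y == w and (x, z) not in closure:
                    closure.add((x, z))
                    changed = True
    return closure

def in_conflict(attitudes1, attitudes2):
    # Two preference sets conflict iff their union entails some P(x, x),
    # i.e. a violation of irreflexivity.
    return any(x == y for (x, y) in transitive_closure(attitudes1 | attitudes2))

A1 = {("a", "b"), ("a", "c"), ("b", "c")}   # {Pab, Pac, Pbc}
A2 = {("b", "a"), ("a", "c"), ("b", "c")}   # {Pba, Pac, Pbc}
print(in_conflict(A1, A2))   # True: Pab and Pba yield Paa by transitivity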
Note that our notion of conflict applies to sets of attitudes of the same type A. Thus,
we do not say for example that an intention is inconsistent with a belief. This is so because, in our view, a belief can contradict an intention, or an ought, only with respect
to a reasoning system that includes both attitudes and makes the relationships between
them explicit. Such a reasoning framework has to contain a principle that links the different types of propositional attitudes that are matter of discussion. An example of such
a principle is (one version of) the means-end principle of instrumental rationality:³ “if
I intend to A and I believe that B is a sufficient means for achieving A, then I intend to
B”. By means of such a principle, we can see how a belief may contradict an intention
as follows: suppose I intend to A, my belief that B is a sufficient means to get A would
be inconsistent with the fact that I do not intend to B. Our approach is motivated by
the fact that in general we do not want to be committed to the philosophically onerous
claim that a belief per se can contradict a preference or a desire or an ought.
In our modeling, the notion of contradiction has the following intuitive interpretation: two inconsistent sets of attitudes cannot be satisfied at the same time, e.g. two
conflicting preferences entail that either one or the other can be accepted. We can define
the conflict between two agents by simply saying that agent i is in conflict with agent
j if the set of attitudes Ai of i is inconsistent with the set of attitudes Aj of j, namely
Ai ∪ Aj is inconsistent with respect to the formal system LA . This can be easily generalized to conflicts involving m agents: A1 ∪ · · · ∪ Am is inconsistent with respect to
LA .
Note that our definition allows for an agent being in conflict with him/herself, in case she/he maintains a set of inconsistent attitudes, e.g. an agent i who has a set of judgments such as {A, A → B, ¬B}. We shall use this fact in the next paragraph. The
abstract notion of conflict that we have defined can be instantiated in order to provide
a representation of actual conflicts. For example, if we want to view a chess match as a
situation of conflict between two agents, we can represent the conflicting aspect of the
match by describing the agents’ opposing goals of winning by beating the other.
It is important to stress that, in order to talk about a contradiction, we need to make
the reasoning system LA explicit. For instance, the set of preferences {Pab, Pba} is not
inconsistent with respect to a reasoning system LA that does not impose irreflexivity.
Thus, in order to claim that some attitudes are inconsistent, the individuals have to agree
on the reasoning framework that grounds the inconsistency claims.
³ For a discussion on the status of instrumental rationality, see [19].
The point is that any contradiction depends on the reasoning system that is adopted
to evaluate the matter at issue. Imagine two agents that have apparently conflicting
preferences but that do not share the common reasoning rules that define what a contradiction is. For example, the preferences of agent 1 and agent 2 may be incompatible
from the point of view of agent 1 but not from the point of view of agent 2. Agent
1’s most preferred option may be a, whereas agent 2 may have two equally most preferred options a and b. That is, 1 is reasoning according to the rules of preferences that
we have presented before, namely she/he linearly orders alternatives, whereas 2 has a
partial order on alternatives. In that case, 1 believes that the policy a has to be implemented, whereas 2 believes that both a and b have to be implemented. In such a case,
the disagreement is on the nature of the alternatives, and that is reflected in the rules that govern reasoning about such matters. Thus, the conflict is at a more abstract level: it is about the reasoning principle that governs the matter at issue. It is important to stress that endorsing principles is itself a form of propositional attitude; thus, a conflict of principles is a type of conflict that fits the definition that we have presented, provided the agents agree on the reasoning framework that judges conflicting principles. By iterating this
argument, we could imagine situations of infinite regress: in order to acknowledge that we are in conflict on a certain matter, we need to agree on the principles
that establish such conflict, but if we are in conflict on such principles, we need other
principles that establish the conflict about principles and so on. However, it is not clear
whether such a situation can be classified as a conflict, namely it is not clear on what
ground agents in such a scenario can claim to have conflicting attitudes. Although such
a situation is theoretically possible and interesting to investigate, in this paper we want
to focus on types of conflicts that are actually recognizable by the agents involved, and
that require an agreement on what conflicting attitudes are. Hence, we shall not discuss this type of situation further. In this work we shall assume that the blame for inconsistency is shared among the individuals, namely that they agree on a common reasoning
system that specifies what is a contradiction between sets of propositional attitudes, and
we leave cases of asymmetric blame for future work.
4.3 Social agentive groups and social contradiction
We have seen that social choice theory defines how to aggregate the propositional attitudes of a number of possibly conflicting heterogenous agents into a single set of
attitudes. In particular, SCT and JA provide a way to view the group as a single agent
and to ascribe propositional attitudes to the group itself. We present some elements of
an ontological treatment of conflicts, by placing our treatment within the foundational
ontology DOLCE. The categories that we use are summarized in Table 1 at the end of the paper; boldface categories are new with respect to DOLCE. It is easy to define individual attitudes as propositional attitudes that are ascribed to an individual agent i. We introduce
a relation ASC(a, i) between propositional attitudes of a certain type and individuals.
In order to define ascription, we need a category ATT(x) for propositional attitudes and
a category IND(x) for individual agents: ASC(a, i) → ATT(a) ∧ IND(i). We shall
also ascribe sets of attitudes Ai to individuals; we write ASC(Ai, i) as a shorthand for ⋀j ASC(aj, i), for all aj ∈ Ai.
We are going to define the notion of social propositional attitude as a propositional
attitude that is ascribed to the group of agents itself and not to any of the individuals
belonging to the group. Of course, it does not seem meaningful to ascribe propositional
attitudes to any set of individuals, or to any type of group. For instance, if we talk about
the beliefs of fifteen individuals randomly chosen from the phone book, we are simply
talking about the sum of all individual beliefs, and not about a belief that is ascribed to
the group itself. We need to be careful when defining attitudes ascribed to groups since
propositional attitudes are usually properly intended as ascribed to agents. Thus, we
would make a category mistake in applying an attitude to something non-agentive, in
the same way as we would make a category mistake in attributing beliefs or intentions
to a time interval.
For example, take a strategic setting described by game theory, such as a market. We
claim that it is a category mistake to ascribe attitudes to the outcome of the interaction
of agents, e.g. “the market believes, decides, intends, ... to p”. The point is that such
ascription may be metaphorically effective, however it is not grounded in a definition
of any agent who is entitled to carry the social or collective belief, decision, intention.
Namely, the market is not constructed as an agent.
On the other hand, there are cases in which it is meaningful, and sometimes even
necessary, to ascribe attitudes to parliaments, representatives assemblies, corporations,
organizations. For example, ascribing attitudes to the group is required, in case we want
to ascribe responsibility to the group itself.
The point is that, whenever we want to ascribe attitudes to a group, we need to show
that the group is some type of agent. We are going to define this specific type of group
that we label social agentive group. We will show that this notion of social group is
required in order to understand the type of conflict of social choice theory paradoxes.
A social agentive group depends on a set of individuals N and on an aggregation
procedure in the sense of Section 3. The social agentive group is defined by those agents
that agree to be subject to a particular aggregation procedure. The fact that such individuals acknowledge an aggregation procedure means simply that they agree on the rule to
settle their conflicts. For example, the group of representatives in a parliament and the
majority rule: a single representative may disagree with a collective decision, however
she/he implicitly has to acknowledge it and be subject to the consequences of that decision. Note that any set of individuals and any of the aggregation procedures in the sense
of Section 3 define a social agentive group. We view the agreement on the aggregation
procedure as baptizing a new type of object, namely a new agent, the social agentive
group, SAG(g).⁴

⁴ We are assuming that the social agentive group is a distinct object with respect to the group as a set of individuals. The reason is that we want to attribute to the social agentive group properties of a different kind with respect to those that we can attribute to the group. In this sense, the social agentive group is a qua-object.
We need to introduce the following categories. Let AGG be the class of aggregation
procedures, GRP the class of groups (i.e. sets of individuals), IND the class of individual agents. We represent the membership of an individual i in a group N by means of
the relation MEMB(i, N). Moreover, we introduce ACK(i, f) to represent the acknowledgment relation that holds between an individual i and an aggregation procedure f.⁵
Firstly, we define a social agentive group as a subclass of agentive social objects ASO
defined in [17, 5]. That is, a social agentive group is a social object that is assumed to
have agency: SAG(x) → ASO(x).
Moreover, the existence of a social agentive group depends on an aggregation procedure in the following sense. We assume as a necessary condition that an agentive group is correlated to a group of individuals as well as to an aggregation procedure.⁶
SAG(g) → ∃f ∃N (AGG(f) ∧ GRP(N) ∧ ∀i (MEMB(i, N) → ACK(i, f)))    (1)
Definition (1) means that an agentive group g depends on a group of individuals N
and an aggregation procedure f such that every individual in the group acknowledges
f . Since the category of groups GRP and the category of agentive social objects ASO
are disjoint, we are assuming that the set of individuals and the agentive social group
are distinct objects of our ontology.
There are further conditions on social agentive groups, for example, given a social
agentive group g, there is a unique aggregation procedure for g at a given time.
In order to simplify the presentation of social agentive groups and to focus on conflict, we abstract here from issues related to time and change [24].⁷
The acknowledgment relation is here intentionally designed to be abstract because
it may be subject to different interpretations depending on the type of group and individual agents. For example, members of an organization subscribe to the rules of the
organization, employees sign the employment contract, representatives of the parliament are bound by oath to the constitution, and so on. The properties of aggregation
procedures that we have introduced in Section 3 may be used in order to define under
which conditions an agent is willing to accept an aggregation procedure, in less institutionalized cases; for example an anonymous aggregation procedure can be accepted on
the ground that it ensures a form of impartiality.
⁵ Here we present the definitions in a semi-formal fashion. Our analysis can be incorporated in the ontological treatment of DOLCE [16]. Note that, although the definition seems to be in second-order logic, it is possible to flatten the hierarchy of concepts by typing them. This is the so-called reification strategy of DOLCE. We leave a precise presentation in DOLCE for future work.
⁶ For a precise ontological treatment of the agency of groups, we refer to [24].
⁷ For example, we may discuss whether a social agentive group remains the same when adding or removing members of the set of individuals or when reforming the aggregation procedure. For this reason, we did not put a uniqueness constraint on N and f in Definition (1). Moreover, by viewing social agentive groups with respect to time, the acknowledgment relation has to be parametrized with respect to times as well. One application of a time-dependent acknowledgment relation is that, in order to reform the aggregation procedure at a certain moment, a new acknowledgment may be required. However, at a time slice, the group and the procedure are supposed to be unique. This is motivated by the simple observation that if we were to allow for two different aggregation procedures at a given time, with possibly divergent outcomes, the attitudes of the social agentive group would always be indeterminate.

Here we assume, according to our previous analysis, that the acknowledgment of the aggregation procedure entails the individual agreement on the reasoning framework
LA that is used to judge conflicts. This is because an aggregation procedure is defined
on a specific input, namely profiles of propositional attitudes that are rational according to LA. Hence, in order to accept an aggregation procedure, agents have to accept that only propositional attitudes that are rational with respect to LA can be submitted, and that amounts to endorsing LA.
We can now define social attitudes (SATT) as propositional attitudes that are ascribed to the social agentive group. Our definition does not entail that the social attitude
is ascribed to any of the individuals of the group, although the attitude of the group may
coincide with the attitude that is ascribed to some of its members. Since a social agentive group is defined by an aggregation procedure at a time, we can define the relation
of dependence of the social agentive group on the aggregation procedure and denote it
by DEP(g, f, t). Moreover, we define the dependence of the social agentive group on
the set of individuals at a given time by DEP(g, N, t).
A social attitude is a propositional attitude (a) that is obtained by means of the
aggregation procedure f . By using the notation of Section 3, a ∈ f (A1 , . . . , An ),
meaning that a belongs to the output of the aggregation procedure when given as input
the profile of individual attitudes A1 , . . . , An . An exhaustive ontological treatment of
a ∈ f (A1 , . . . , An ) entails for example that the individual propositional attitudes Aj
are ascribed to individual j.
SATT(a) → ∃x ∃t (SAG(x) ∧ DEP(x, f, t) ∧ DEP(x, N, t) ∧ a ∈ f(A1, . . . , An))    (2)
Definition (2) means that a social attitude depends on the group and the aggregation procedure that define the social agentive group at a given time. Thus, we can now legitimate the ascription of a social attitude to the social agentive group by slightly modifying our previous definition of ascription: ASC(x, y) → (ATT(x) ∧ IND(y)) ∨ (SATT(x) ∧ SAG(y)). Note that, by Definition (2), a social attitude is necessarily ascribed to some
social agentive group. This is motivated by the fact that we want to exclude that taking
for example the beliefs of a number of randomly chosen individuals and aggregating
them by majority is sufficient to define a social attitude.
We can finally introduce the notion of social contradiction in order to analyze the
paradoxical outcomes of social choice theory. Firstly, we introduce a relation for making
the notion of contradictory set of attitudes explicit in our ontology. We identify sets of
attitudes with conjunctions of formulas and we express that the formula a1 ∧ · · · ∧ am
is inconsistent with respect to the reasoning principles of LA by means of the relation
CTR(a1 ∧ · · · ∧ am, LA). According to our previous analysis, the notion of contradiction has to depend on the reasoning system that is adopted. For example, CTR(Pab ∧ Pba, {irreflexivity, transitivity, completeness}) holds, whereas CTR(Pab ∧ Pba, {transitivity}) does not. A social contradiction is just
an inconsistent set of social attitudes. This definition entails that there exists a social
agentive group who maintains those inconsistent attitudes.
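The relation CTR can be given a toy operational reading by making the reasoning principles explicit parameters of the consistency test. The sketch below (Python; reusing transitive_closure from the sketch in Section 4.2; note that completeness is accepted as a parameter but plays no role in this particular check) reproduces the example above.

def ctr(prefs, principles):
    # CTR(a1 ∧ ... ∧ am, L_A): the attitudes are contradictory only
    # relative to the chosen reasoning principles.
    closure = set(prefs)
    if "transitivity" in principles:
        closure = transitive_closure(closure)
    if "irreflexivity" in principles:
        return any(x == y for (x, y) in closure)
    return False   # completeness plays no role in this particular check

pab_pba = {("a", "b"), ("b", "a")}
print(ctr(pab_pba, {"irreflexivity", "transitivity", "completeness"}))  # True
print(ctr(pab_pba, {"transitivity"}))                                   # False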
We can now stress the difference between the notion of social agentive group defined by means of social choice theory and other notions of groups that may be treated
for example by means of game theory. The fundamental difference is the agency that
is ascribed to the social agentive group: the notion of social contradiction defines the
contradiction of the social agentive group with itself viewed as a single agent.
This analysis of social contradiction precisely represents situations such as Condorcet’s paradox and the discursive dilemma. Note that, without the concepts that we have introduced, it is hard to identify what type of conflict such cases exhibit. Social contradictions are not conflicts between individuals that belong to the group, since
the group is defined by the agreement on the procedure that settles individual possible conflicts. Social contradictions are not conflicts between different groups, as in the
paradoxical case only one group is involved. Moreover, social contradictions do not
apply to general groups of individuals; they are specific to social agentive groups. It comes as no surprise that a number of individuals may have conflicting attitudes and
that there is no way to solve their conflicts. The point of social contradiction is that,
although individuals agree on the rule to settle conflicts, this peculiar type of conflict
can still occur. Therefore, the type of conflict of the social choice theory paradoxes is a
specific type of conflict that applies only to groups insofar as they are viewed as social
agentive groups and that is not reducible to any individual conflict. The non-reducibility
of social contradictions to individual conflicts can be argued by simply noticing that we
cannot say which conflict between individuals is responsible for the social contradiction.
For example, by reducing the social contradiction to conflicts between individuals, we
would not be able to distinguish the opposition between the majority and the minority in a paradoxical case from the opposition between the majority and the minority in a coherent and unproblematic majority vote. It can be argued that it is the procedure that
is responsible for the paradoxical outcome, e.g. the majority rule. However, the majority rule is reliable in many other cases, and social choice theory results show that the only
procedures that ensure consistency are the dictatorships of some individual. Therefore,
social contradictions are something we have to live with, as they may occur in any possible actual solution of individual conflicts that ascribes agency to the group. Without
the notion of social agentive group, we could not ascribe propositional attitudes to the
group itself, and we could only interpret social choice theory paradoxes as conflicts
between individuals. Thus, our specific treatment of conflict as social contradiction is
needed as social contradictions are non-reducible to other forms of conflict.
5 A taxonomy of conflicts
We present a taxonomy of conflicts along the lines of the conceptual analysis that we have outlined.
We distinguish types of conflicts that depend on two parameters: the type of agents
involved (individual agents or social agentive groups) and the matter of conflict (namely,
the type of propositional attitude at issue).
      Agents                          About (Propositional attitudes)    Type
I     IND: i vs j                     beliefs, desires, judgments, ...   Contradiction
II    GRP: ∃i1, . . . , im in G,      beliefs, desires, judgments, ...   Contradiction
      i1 vs . . . vs im
III   SAG: sag vs sag′                beliefs, desires, judgments, ...   Contradiction
IV    SAG: sag vs sag                 beliefs, desires, judgments, ...   Social Contradiction
(I) classifies conflicts between individuals (including the conflict of an individual with
him/herself) that may be about any propositional attitude. (II) classifies conflicts within
groups that are reducible to conflicts among members of the group. In this case, the
group is not viewed as a social agentive group and the conflict within the group can be
reduced to conflicts between subsets of individuals. As an example, take an auction in
which a number of agents make their bids for getting a certain item and only one of
them can win the item. (III) classifies conflicts between different groups, each of them viewed as a social agentive group, for example two parliaments of different states voting incompatible policies. Finally, (IV) classifies the case of social contradictions that are exemplified by social choice theory paradoxes. From the point of
view of our ontological analysis, (III) can be reduced to (I): namely conflict between
two different social agentive groups can be modeled as conflict between different individual agents, that is, it can be modeled by using the notion of contradiction between
propositional attitudes of two different agents. Moreover, our modeling shows that the
type of conflict that is defined in (II) is actually a conflict between individuals: again, it
can be modeled by means of the notion of contradiction between a number of individual
attitudes. The notion of social contradiction is required only to model the conflict of the
social agentive group with itself, namely the group conflict that is non-reducible to any
conflict between any member of the group.
6 Conclusion and future work
We have developed the first conceptual elements to provide an ontological analysis of
group conflicts. We have used the methodology of SCT in order to mathematically understand a number of types of conflicts and to define the concept of social contradiction.
We have introduced some fundamental elements of an ontological analysis of conflicts
by spelling out the required concepts of propositional attitude, conflict as contradiction,
and social agentive group. In particular, we have argued that the concept of social agentive group is necessary in order to understand the type of group conflict that is involved
in social paradoxes. We plan to provide a fine-grained ontological representation of
aggregation procedures that would enable modeling the dynamics of group formation
and change, possibly motivated by conflicts, besides allowing us to distinguish between
types of groups in terms of the properties of the aggregation procedure that is endorsed.
A close examination of the norms that specifically apply to groups is then compelling
[1]. The next step is to integrate our analysis within the general framework of a foundational ontology such as DOLCE. Complex aggregation procedures can be applied to
treat the rich internal structure of organizations [3, 5], for example by defining the notion
of sub-organization and by formalizing the relationship between the different modules.
To that end, we have started developing a module for ascribing agency to groups and organizations in [24]. This leads towards a generalization of our model to provide an understanding of the ascription of agency to complex social systems and socio-technical
systems and to apply our treatment of conflicts in such complex social constructions. In
particular, modeling socio-technical systems requires to integrate information coming
from heterogeneous agents, human and artificial, and it is important to deploy conceptual tools, such as those that we have discussed, that provide a precise description of the
concept of aggregate information. We have presented a number of applications of the
methodology of social choice theory to model systemic information in socio-technical
systems in [25], we plan to integrate that analysis with the present investigation of conflict and social contradictions in order to grasp situation of crisis in socio-technical
systems.
Acknowledgments: D. Porello and R. Ferrario are supported by the VisCoSo project,
financed by the Autonomous Province of Trento, “Team 2011” funding programme. E.
Bottazzi is supported by the STACCO project, financed by the Autonomous Province
of Trento, “Postdoc 2011” funding programme.
Table 1. Ontology of group agency in DOLCE
Bibliography
[1] H. Aldewereld, V. Dignum, and W. Vasconcelos. We ought to; they do; blame the
management! In Coordination, Organizations, Institutions, and Norms in Agent
Systems IX, pages 195–210. Springer, 2014.
[2] K. Arrow. Social Choice and Individual Values. Cowles Foundation for Research
in Economics at Yale University, Monograph 12. Yale University Press, 1963.
[3] G. Boella, L. Lesmo, and R. Damiano. On the ontological status of plans and
norms. Artif. Intell. Law, 12(4):317–357, 2004.
[4] G. Boella, G. Pigozzi, M. Slavkovik, and L. van der Torre. Group intention is
social choice with commitment. In Proceedings of the 6th International Conference on Coordination, Organizations, Institutions, and Norms in Agent Systems,
COIN@AAMAS’10, pages 152–171, Berlin, Heidelberg, 2011. Springer-Verlag.
[5] E. Bottazzi and R. Ferrario. Preliminaries to a DOLCE ontology of organizations.
International Journal of Business Process Integration and Management, Special
Issue on Vocabularies, Ontologies and Business Rules for Enterprise Modeling,
4(4):225–238, 2009.
[6] F. Brandt, V. Conitzer, and U. Endriss. Computational social choice. In G. Weiss,
editor, Multiagent Systems. MIT Press, 2013.
[7] M. E. Bratman. Intention, Plans, and Practical Reason. Harvard University Press,
Nov. 1987.
[8] F. Dietrich and C. List. The aggregation of propositional attitudes: towards a
general theory. Technical report, 2009.
[9] F. E. Emery and E. Trist. Socio-technical systems. In Management Sciences:
Models and Techniques, volume 2, pages 83–97. Pergamon, 1960.
[10] U. Endriss, U. Grandi, and D. Porello. Complexity of judgment aggregation. Journal of Artificial Intelligence Research, 45:481–514, 2012.
[11] W. Gaertner. A Primer in Social Choice Theory. Oxford University Press, 2006.
[12] M. P. Georgeff, B. Pell, M. E. Pollack, M. Tambe, and M. Wooldridge. The belief-desire-intention model of agency. In Proceedings of the 5th International Workshop on Intelligent Agents V, Agent Theories, Architectures, and Languages, ATAL ’98, pages 1–10, London, UK, 1999. Springer-Verlag.
[13] L. A. Kornhauser and L. G. Sager. The one and the many: Adjudication in collegial
courts. California Law Review, 81(1):1–59, 1993.
[14] C. List and P. Pettit. Aggregating Sets of Judgments: An Impossibility Result.
Economics and Philosophy, 18:89–110, 2002.
[15] C. List and C. Puppe. Judgment aggregation: A survey. In Handbook of Rational
and Social Choice. Oxford University Press, 2009.
[16] C. Masolo, S. Borgo, A. Gangemi, N. Guarino, and A. Oltramari. WonderWeb Deliverable D18. Technical report, CNR, 2003.
[17] C. Masolo, L. Vieu, E. Bottazzi, C. Catenacci, R. Ferrario, A. Gangemi, and
N. Guarino. Social roles and their descriptions. In Proc. of the 6th Int. Conf.
on the Principles of Knowledge Representation and Reasoning (KR-2004), pages
267–277, 2004.
[18] J. V. Neumann and O. Morgenstern. Theory of Games and Economic Behavior.
Princeton University Press, 1944.
[19] R. Nozick. The Nature of Rationality. Princeton University Press, 1993.
[20] M. J. Osborne and A. Rubinstein. A Course in Game Theory. MIT Press Books.
The MIT Press, June 1994.
[21] P. Pettit. Deliberative democracy and the discursive dilemma. Philosophical Issues, 11(1):268–299, 2001.
[22] D. Porello. Ranking judgments in Arrow’s setting. Synthese, 173(2):199–210,
2010.
[23] D. Porello. A proof-theoretical view of collective rationality. In IJCAI 2013,
Proceedings of the 23rd International Joint Conference on Artificial Intelligence,
Beijing, China, August 3-9, 2013, 2013.
[24] D. Porello, E. Bottazzi, and R. Ferrario. The ontology of group agency. In 8th International Conference on Formal Ontology in Information Systems, FOIS 2014.
IOS Press, 2014. In press.
[25] D. Porello, F. Setti, R. Ferrario, and M. Cristani. Multiagent socio-technical systems: An ontological approach. In Coordination, Organizations, Institutions, and
Norms in Agent Systems IX, pages 42–62. Springer, 2014.
[26] A. D. Taylor. Social choice and the mathematics of manipulation. Cambridge
University Press, 2005.
[27] J. van Benthem. Logical Dynamics of Information and Interaction. Cambridge
University Press, 2011.
[28] M. Wooldridge. An Introduction to Multiagent Systems. John Wiley & Sons, Inc., New York, NY, USA, 2008.