MANAGEMENT SCIENCE
informs
Vol. 56, No. 1, January 2010, pp. 71–89
issn 0025-1909 eissn 1526-5501 10 5601 0071
doi 10.1287/mnsc.1090.1096
© 2010 INFORMS
Design of Decision-Making Organizations
Michael Christensen, Thorbjørn Knudsen
Strategic Organization Design Unit, Department of Marketing and Management, University of Southern Denmark,
DK-5230 Odense M, Denmark {[email protected], [email protected]}
Starting from the premise that individuals within an organization are fallible, this paper advances the study of
relationships between the organization’s decision-making structure and its performance. We offer a general
treatment that allows one to analyze the full range of organizational architectures between extreme centralized
and decentralized forms (often referred to as hierarchies and polyarchies). Our approach furthermore allows
designers to examine the change in the overall reliability of the organizational structure as the number of actors
within the organization changes. We provide general proofs that show how decision-making structures can be
constructed so they maximize reliability for a given number of agents. Our model can be used directly for a
qualitative assessment of decision-making structures. It is thereby useful for assessment of the many complicated
hybrid structures that we see in actual decision-making organizations, such as banks, purchasing departments,
and military intelligence. An application from a bank illustrates how our framework can be used in practice.
Key words: organizational architecture; organizational design; decision making; evaluation
History: Received June 15, 2007; accepted April 28, 2009, by Olav Sorenson, organizations and social networks.
Published online in Articles in Advance November 6, 2009.
1. Introduction
How does organizational design affect decision making? Decision-making organizations—such as boards, corporate headquarters, and purchasing departments—generally exist with various structures (Colombo and Delmastro 2008). At one pole, organizations are designed as hierarchies, with lower-level managers reporting to their immediate superiors. But within hierarchical layers, there can be considerable variation in the number of people that make independent, parallel decisions. As we approach the other extreme pole, managers reside in a flat structure, where they make decisions in parallel. Our aim here is to advance a general treatment of decision-making organizations that allows one to consider the full range of organizational architectures that lie between these two extreme forms.
At a basic level of analysis, decision making can suffer from two possible errors: Type I errors of rejecting a superior alternative and Type II errors of accepting an inferior alternative. As shown in prior work (Sah and Stiglitz 1985, 1986, 1988), different organizational structures vary in their proclivity to make one type of error or the other. In particular, hierarchical structures, in which a proposal needs to be validated by successive ranks of the hierarchy to be approved, tend to reduce the likelihood that an inferior alternative will be adopted—i.e., hierarchy reduces Type II errors. In contrast, what Sah and Stiglitz (1986) term polyarchies—flat organizational structures in which approval by any one actor in a parallel series of decision makers is sufficient for an alternative to be approved—will tend to minimize the probability of rejecting a superior alternative—i.e., polyarchy reduces Type I errors. The essence of designing decision-making organizations is to choose a structure that most effectively reduces Type I and/or Type II errors as required by the organization’s task environment. We offer a general treatment of this problem. That is, we extend the decision structures considered by Sah and Stiglitz (1986, 1988) to include all possible organizations spanned by the hierarchy and polyarchy. We then show how organization designers can identify the structure that most effectively reduces Type I and/or Type II errors (given any number of available decision makers).
Another literature, in information theory, has discussed a similar problem. An important theorem, the Moore and Shannon (1956a, b) theorem, has shown that perfect reliability of electrical circuits can be accomplished with imperfect components. It might seem that this theorem could be directly applied to organizations. However, it cannot, because individuals at any level of a human organization can, in principle, be assigned the final decision-making authority. This violates a fundamental assumption of the original Moore-Shannon results, where such delegation is ruled out. This limitation shows up as the Moore-Shannon restriction, which rules out that lower-level members of an organization can be assigned the final decision-making authority (on behalf of the entire organization) to accept or reject a project. We therefore redo the proofs with different topological structures that are consistent with the way Sah and Stiglitz (1985, 1986, 1988) and others have characterized human organizations—as opposed to systems of relays and similar electric components.
Our approach builds on the information processing
perspective in economics. The properties that drive
the design of decision-making structures are multifaceted, so it is not surprising that they have been
investigated from many perspectives. Prior work on
decentralization of information has provided robust
results regarding the way organizational structures
can be designed to optimize efficiency under capacity constraints (Marschak and Radner 1972, Radner
1993, Bolton and Dewatripont 1994, Van Zandt 1999).
By contrast, Sah and Stiglitz (1985, 1986, 1988) and
followers (Ioannides 1987, Koh 1992) have mainly
focussed on human evaluation. We directly extend
prior work initiated by Sah and Stiglitz (1985, 1986).
That work laid the foundations for a systematic
approach to designing reliable decision-making structures by comparing the screening properties of simple hierarchies against decentralized polyarchies (Koh
1992, 1994; Sah and Stiglitz 1988; Sah 1991; Visser
2000). The study of sequential decision making in
hierarchies and polyarchies has been complemented
by analysis of voting and other forms of decision
making that happen simultaneously, such as committee decision making (Ben-Yashar and Nitzan 1997, Li
et al. 2001, Sah and Stiglitz 1988). From a broader
perspective, the many facets of organizational design
have been studied in economics, management, and
related fields. There is a huge body of work on
organizational design, spanning multiple streams of
literature and including the information processing
stream (to which our work contributes), transaction cost economics, the decentralization of incentives
stream (including principal-agent theory), the evolutionary and behavioral stream, and contingency theory (Colombo and Delmastro 2008 provide a useful
review).
By and large, however, the existing literature does
not analyze the broad range of structures between
hierarchies and polyarchies. Although the study of
committee decision making included a broader range
of decision-making structures, it is limited to voting and other forms of decision making that happen
simultaneously (Sah and Stiglitz 1988).
This paper advances the economic and managerial
literature by offering a general treatment that allows
one to consider the full range of organizational architectures between hierarchies and polyarchies. Thereby
one can design decision-making structures that trade
off Type I errors (of rejecting a superior alternative)
and Type II errors (of accepting an inferior alternative) as the relative degree of hierarchy and polyarchy
shifts. Our approach furthermore allows designers
to examine the change in the overall reliability of
the organizational structure as the number of actors
within the organization changes. We provide general
proofs that show how decision-making structures can
be constructed so they maximize reliability for a given
number of agents. However, it is beyond the scope of
the present work, and would be too hasty, to engage
in a detailed treatment of costs and benefits that
derive from alternative decision-making structures.
The conclusion in §5 considers these limitations and
how they can be lifted. Despite limitations, our model
can be used directly for a qualitative assessment of
decision-making structures, because error rates are
correlated with costs and benefits. It is thereby useful for assessment of the many complicated structures
that we see in actual decision-making organizations.
This paper is organized as follows. In §2, we specify the basic model, including a very broad range
of decision-making structures spanned by the hierarchy and polyarchy of Sah and Stiglitz (1985, 1986).
Section 3 introduces the extremity theorem (Theorem 1), which identifies hierarchies and polyarchies
of any size, n, as evaluation structures that bound
all other structures of size n with respect to screening. Together with Theorem 2, it provides a building
block that is necessary to establish the main result of
the present paper. Two further results to be used in
the main proof are established in §3. The first result
(Theorem 3) identifies the decision structures that
most effectively increase judgmental discrimination.
The second result (Theorem 4) identifies the structures that most effectively remove bias. These results
provide the components for a constructive proof that
makes our analytical results readily available for practical applications. Our main result, the theorem of perfection (Theorem 5), is established in §4. A case study
of credit evaluation in a bank illustrates how our analytical framework can be used in practice. Section 5
concludes.
2.
The Model
Following the approach developed by Sah and Stiglitz
(1985, 1986, 1988), we study organizations whose
members are decision makers (agents) that evaluate a
set of alternatives (projects) and make a binary choice
(screening). We refer to such organizations as decision-making organizations or, more briefly, as evaluation
structures. The decision-making organizations under
study face the problem of choosing which projects to
accept and which to reject. It is a notable feature of the
model that final acceptance is equivalent to making a
commitment of unavoidable economic consequence.
Figure 1   Example of Evaluation Structures
(Panels: Hierarchy; Polyarchy; Self-dual hybrid, G∗.)
Notes. A three-member hierarchy, a three-member polyarchy, and a self-dual graph G∗. Full lines are acceptance edges and dashed lines are rejection edges.

Figure 1 shows three decision-making structures: a three-member hierarchy, a three-member polyarchy,
and finally a special five-member structure, G∗ ,
known as a self-dual structure. The decision-making
structures consider projects from an initial distribution (I). Individuals either accept or reject projects as
they pass through the decision-making organization.
Solid lines indicate that accepted projects are passed
on, and dashed lines show rejected projects that are
passed on. The projects can be passed on to other
agents, or a decision can be made on the part of the
whole organization to either accept or reject a project.
The contrast between the hierarchy and the polyarchy is clear. The hierarchy is a conservative structure that preserves the strictest possible filtering
as projects move to the top of the organizational
hierarchy. In contrast, the polyarchy is a permissive
structure that preserves the loosest possible filtering. Theorem 1 shows that the hierarchy and polyarchy are indeed the extreme structures with respect
to reducing either Type I or Type II errors. It is also
interesting to note that the self-dual graph G∗ is composed of a single agent with an acceptance edge to a
two-member polyarchy and a rejection edge to a two-member hierarchy. That is, optimistic evaluations are checked by the polyarchy, whereas pessimistic evaluations are checked by the hierarchy. This construction
serves to promote the symmetric reduction of Type
I and Type II errors that is characteristic of self-dual
structures (as explained in more detail below).
Projects are independent and identically distributed
as a random variable x . Each project is represented
by a vector of signals, x, about the true quality of the
project. These signals include all elements that are relevant if the decision-making organization accepts a
project. There is a map, V , from the vector of perfect
signals onto a scalar economic value, a net income
that is obtained by the decision-making organization
if the project is accepted. This scalar valuation is a
measure of the true net income of a project, including all the relevant benefits and costs associated with
undertaking a project. The costs of making the decision are not included in the income because they are
endogenous to the evaluation structure.1
The task of the individual decision maker within the
decision-making organization is to evaluate projects.
An evaluation is a binary choice—whether to accept
or reject a project—made on the basis of the vector
of perfect signals about true project quality. The individual decision maker is characterized by an ability to
evaluate projects. This ability is captured in the agent
screening function, f , which maps each project onto a
probability that the decision maker accepts the project.
An omniscient decision maker would not make a
single error of judgment. Such a decision maker would
process all the signals about the true project quality
without error. The omniscient decision maker therefore accepts projects with economic value V ≥ 0 (V < 0) with probability f = 1 (f = 0). That is to say, the agent screening function of the omniscient decision maker, f, is the functional composition of the Heaviside step function and the economic value, f ≡ Θ ∘ V.
All human decision makers are fallible; they make
errors of judgment. Errors in judgment come from a
variety of sources, including noisy signals, incomplete
processing of information, and defective information
processing. Errors in judgment reduce the ability to
make choices in the sense of discriminating between
projects with positive and negative economic value.
In the case of maximal error, the agent has no discriminating ability; the decision maker simply processes the signals about project quality by flipping
a coin, f = 1/2. At the opposite pole, the agent has
perfect discriminating ability; the decision maker is
omniscient and assigns signals about project quality to acceptance or rejection in a deterministic way
(projects with positive value are accepted with probability 1, and projects with negative value are rejected
with probability 1). In the general case, the level of
error in the agent’s processing of signals is the measure ∫ Φ(x) |f(x) − (Θ ∘ V)(x)| dx.
There are n members in an evaluation structure.
The task of the evaluation structure is to decide which
projects to accept and which to reject. Its objective is
to maximize income V net of evaluation costs or, in
some cases, to minimize the incidence of Type I and
Type II errors. The present effort is focussed on the design of reliable decision-making organizations in the sense that they minimize the incidence of Type I and Type II errors subject to the constraints of the number of available evaluators and their ability.²
¹ The costs of making the decision depend on the size of the organization (the number of agents), the levels of pay, the choice of compensation method (e.g., fixed salary or pay per evaluation), and the possibility of economies of scale with respect to evaluation.
We show how this problem can be solved with
respect to internal structure and system size as choice
variables, holding constant the decision-making ability. Consider a system of fallible agents that are
homogeneous in their decision-making ability.3 Each
individual evaluator has access to two distinct types
of communication channels; one is used when a
project is accepted and the other in case of rejection.
It is the availability of both of these communication
channels that allows the evaluators to make independent deliberate choices that are characteristic of
human agents. It is possible for an evaluator to individually accept or reject a project on behalf of the
organization without interference by other members.
The human agent can further choose how projects are
sent to other agents and who the possible receivers
are.
An important background matter here is as follows.
An individual at any level of a human organization
can, in principle, be assigned the final decision-making authority (on behalf of the entire organization) to accept or reject a project. This is ruled out
in the classical Moore-Shannon formulation of how
to improve reliability in electrical circuits (Moore and
Shannon 1956a, b) and more recent work on network
reliability (Lynn et al. 1998; Balakrishnan and Rao
2001). This is because the Moore-Shannon formulation
is limited to relays and similar electric components
that cannot individually block an electric current.
For brevity, we refer to the preceding restriction as
the Moore-Shannon restriction. Formally, the Moore-Shannon restriction excludes edges from lower-level
individuals to external rejection nodes. This has the
immediate implication that there will be inherent dissimilarities between actual human organizations and
organizations of electrical components. Apart from
being a useful concept with which to contrast human
organizations, our approach reuses and upgrades the
techniques used in Moore and Shannon’s (1956a, b)
proofs. Because of the Moore-Shannon restriction,
however, we build on the very different topological
structures that are consistent with the way Sah and
Stiglitz (1985, 1986, 1988) and others have characterized human organizations.
² In the case of perfect decision-making ability, the organization has no effect because there is no single error; all projects with positive income would be accepted and all projects with negative income rejected.
³ As mentioned in the appendix, some issues of heterogeneity in screening ability can be handled within the current framework.

The decision-making organization is modeled as a graph. Each node represents a decision maker, and each edge represents a channel of communication. We
study homogeneous graphs (one type of agent, A)
with two types of edges (accept/reject). The entry and
the exit of the projects are determined by the way the
internal structure is connected to three external nodes:
(1) the initial portfolio (I) containing the distribution of
projects Φ(x); (2) the final portfolio (F), where the accepted
projects are implemented; and (3) the termination node
(T), where the rejected projects are dumped (see Figure 1). The design of the evaluation structure involves
specification of edges that connect members, specification of edges that connect the internal structure
to external nodes (I, F, T), and specification of rules
determining how many times a decision maker can
evaluate the same project (truncation rule).
The generalization of the agent screening function f
to the level of a specific architecture G is the graph
screening function fG . If the organization contains only
a single agent A, then obviously fG = fA = f , as G = A.
The graph screening function is an aggregation rule
that assigns individual decisions of acceptance and
rejection to any structure. It can be viewed as a generalization of the aggregation rules that have previously
been used to model decision making by committees
(Ben-Yashar and Nitzan 1997, Sah and Stiglitz 1988).
In mathematical reliability theory, the graph screening
function is known as the system reliability (Lomnicki
1973) or the reliability function of a network (Carlsson
and Grenander 1966).
It is often useful to express the graph screening fG in terms of the reduced graph screening function FG that operates not on the vector of signals regarding the project but solely on the scalar acceptance probability, ρ = f(x). In the case of homogeneous ability, the graph screening function is a polynomial in ρ, commonly known as the reliability polynomial.⁴ The reliability polynomial is the reduced graph screening function, and the relation between the screenings is again that of functional composition, fG = FG ∘ f.
⁴ In the case of j levels of heterogeneous ability, ρj = fj(x), the graph screening function is a multinomial in the ρj.
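To make the mapping from structure to screening concrete, the following minimal sketch (ours, not code from the paper; the dictionary encoding of accept/reject edges is an assumption made purely for illustration) computes the reduced graph screening FG(ρ) of a loop-free evaluation structure by recursion over its accept and reject edges, and checks it against FHn(ρ) = ρ^n, FPn(ρ) = 1 − (1 − ρ)^n, and the self-dual hybrid G∗ of Figure 1.

```python
# Minimal sketch (not from the paper): loop-free evaluation structures encoded as
# dictionaries mapping each agent to its (accept_target, reject_target), where a
# target is another agent or one of the external nodes 'F' (final acceptance)
# and 'T' (termination).

def reduced_screening(graph, entry, rho):
    """Probability that the structure finally accepts a project when every
    (homogeneous) agent accepts it with probability rho."""
    def value(node):
        if node == 'F':
            return 1.0
        if node == 'T':
            return 0.0
        accept_to, reject_to = graph[node]
        return rho * value(accept_to) + (1.0 - rho) * value(reject_to)
    return value(entry)

def hierarchy(n):
    # each acceptance is passed upward (the top accepts to F); any rejection terminates
    return {i: (i + 1 if i < n else 'F', 'T') for i in range(1, n + 1)}

def polyarchy(n):
    # any acceptance is final; each rejection is passed on (the last rejects to T)
    return {i: ('F', i + 1 if i < n else 'T') for i in range(1, n + 1)}

# Self-dual hybrid G* of Figure 1: one agent whose acceptance is checked by a
# two-member polyarchy and whose rejection is checked by a two-member hierarchy.
g_star = {
    'a': ('p1', 'h1'),
    'p1': ('F', 'p2'), 'p2': ('F', 'T'),   # two-member polyarchy
    'h1': ('h2', 'T'), 'h2': ('F', 'T'),   # two-member hierarchy
}

if __name__ == '__main__':
    rho = 0.7
    assert abs(reduced_screening(hierarchy(3), 1, rho) - rho**3) < 1e-12
    assert abs(reduced_screening(polyarchy(3), 1, rho) - (1 - (1 - rho)**3)) < 1e-12
    F = lambda r: reduced_screening(g_star, 'a', r)
    print(F(0.5), F(rho) + F(1 - rho))   # self-duality: 0.5 and 1.0
```

The recursion presumes that a project visits each agent at most once; truncation rules and structures with loops are not handled in this sketch.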
3.
Fundamental Organization
Structures
Some organization structures have a fundamental role
in reducing error and thereby improving the quality of decision making. Sah and Stiglitz (1985, 1986)
identified two archetypal evaluation structures: hierarchies and polyarchies. These structures also play a
fundamental role in our generalization of the Moore-Shannon result. Their primary role is to move a
screening function to the desired part of the project
distribution—hierarchies move the screening function
to the “right,” whereas polyarchies move the screening function to the “left” (see Figure 2). The purpose
of moving a screening function is to remove bias.
However, pure hierarchies (polyarchies) can overshoot the desired move, so it is sometimes necessary
to use a slightly modified version of these graphs.
This introduces a new set of graphs, called stair
graphs. Stair graphs actually resemble stairs in that
they are constructed with a few polyarchical (hierarchical) “interruptions” of longer hierarchical (polyarchical) sequences. Stair graphs serve the same purpose as the so-called ladder graphs in the original
proof (Moore and Shannon 1956a, b).
To improve reliability at a desired point in the
project distribution, the graph must be steepened.
This is achieved by graphs that are similar to self-dual
committees, i.e., committees with an odd number of members that use a simple majority vote (Sah
and Stiglitz did not consider the self-dual property of
committees). The notion of a self-dual graph refers to
a decision-making structure that will induce a symmetrical graph screening function, i.e., a symmetrical
reduction of Type I and II errors. We are considering sequential decision making, so we identify a set
of self-dual structures that represent a sequential processing of projects (in the same way self-dual committees do for voting procedures). Self-dual graphs serve
the same purpose as the so-called hammock graphs in
the original proof.
The theorems of this paper are proved in the
appendix, and all the proofs are constructive, not only to guarantee the existence of almost-perfect organizations but also to show exactly how such organizations
are obtained. These construction methods are directly
applicable in real business cases, as explained after
Theorem 2, and further illustrated in §4.3. The theorems of this section therefore serve two purposes.
They are building blocks for the main perfection theorem and its extension (presented in §4). And they are
guides on how to improve discriminating ability and
remove bias (illustrated in §4.3).
3.1. Extremity of Hierarchies and Polyarchies
This section introduces a theorem that identifies hierarchies and polyarchies of any size, n, as evaluation
structures that bound all other structures of size n
with respect to screening. As described above, hierarchies are extreme because they are the structures that
filter out most information on the way to the top (at
the expense of losing good projects in the process).
In contrast, polyarchies are the extreme permissive
structures that allow the most information through
(at the expense of promoting low-quality projects).
The extremity theorem provides a necessary building
block that enables us to provide a general tool for the
design of reliable decision-making organizations (§4).
A simple version of the theorem is presented here.
The proof is in §A.1 of the appendix, together with a
number of extensions.
Theorem 1. Let 𝒢n, for any positive integer n, be the set of all homogeneous graphs that can be constructed from agents of type A, such that the maximal evaluation count is no more than n. Let Pn ∈ 𝒢n be the polyarchy of n members with reduced graph screening function FPn, and let Hn ∈ 𝒢n be the hierarchy of n members with reduced graph screening function FHn. Any graph G ∈ 𝒢n with reduced graph screening FG satisfies

FHn ≤ FG ≤ FPn.   (1)
The proof, which appears in the appendix, is inductive on maximal evaluation count, i.e., the maximal
number of evaluations that is necessary for the organization to finally accept a project. For intuition, note
that the addition of a new member to a hierarchy can
either soften or enhance extreme sceptical evaluation
(i.e., minimize Type II error). If the new member is
put in a position that extends the hierarchy, extreme
sceptical evaluation is enhanced. Otherwise, it is softened (i.e., becomes less extreme). Similar reasoning
captures the addition of new members to a polyarchy.
In extreme situations, when there are only projects
with positive (V > 0) or negative (V < 0) income, the
n member polyarchy (hierarchy) will therefore dominate any other structure. Provided the costs of making
the decision in an evaluation structure are not prohibitive, the n member polyarchy (hierarchy) will also
dominate the individual agent.5 Even if this result
is rather trivial, it holds with remarkable generality
(see §A.1), even to the extent of lifting the homogeneity assumption and letting the agents creatively
manipulate the projects.
According to the extremity theorem, finite hierarchies (polyarchies) map the agent screening of any
project closer to 0 (1) than any other structure with
the same number of agents. The minimal number
of agents required to reduce the incidence of Type I
and Type II errors to some minimal desired level
is provided in Theorem 2. The remainder of the
paper shows how decision-making organizations that
include both hierarchical and polyarchical elements
can be designed to minimize both Type I and Type II
errors.
Theorem 2. Given any threshold 0 < α < 1 and a point ρ0 ∈ (0, 1), the number of agents n in a hierarchy such that FHn(ρ) ≤ α ∀ρ ∈ [0, ρ0] is

n ≥ log α / log ρ0,   (2)

and the number of agents n in a polyarchy such that FPn(ρ) ≥ 1 − α ∀ρ ∈ [ρ0, 1] is

n ≥ log α / log(1 − ρ0).   (3)

Proof. The result follows trivially from the graph screening functions of the n-member hierarchy Hn and polyarchy Pn:

FHn(ρ) = ρ^n and FPn(ρ) = 1 − (1 − ρ)^n.   (4)

⁵ In situations with more realistic project distributions that include both positive and negative income, the optimal evaluation structures are hybrids that include both hierarchies and polyarchies.
To illustrate, consider a CEO who wants to hire a new manager, knowing that the available managers are too optimistic. The CEO’s reservation value is set to zero (as shown in Figure 2). Further, assume that available managers have a screening function like Christie’s in Figure 2. The CEO knows he must use a hierarchical structure to reduce or perhaps even remove the optimistic bias. How large should the hierarchy be? In Christie’s case, the reservation value is zero and ρ0 = f(0) ≈ 0.88. In contrast, for an unbiased (imperfect) screening function, ρ0 = 0.50. Assume that the CEO’s target is to build a structure that achieves FHn(ρ0) ≤ α = 0.55; i.e., the CEO accepts a tolerance of 10% from the target of the unbiased screening function of 0.50.
According to Theorem 2, the CEO finds that a hierarchy of n ≥ log α / log ρ0 = log 0.55 / log 0.88 ≈ 4.68 will meet his target. A comparison of the four- and five-member hierarchy leads to the conclusion that the resulting unbiased evaluation structure is a hierarchy employing five managers, all with a bias similar to Christie’s. The CEO has now designed an unbiased evaluation structure from biased members. Its screening function is shown in Figure 2. Note that if Christie’s bias had been pessimistic, the CEO should have chosen a flat, polyarchical structure to remove the bias.

Figure 2   Removal of Bias and Further Improvement of Judgmental Ability
(Curves: Christie; Christie’s bias removed; Christie’s screening further improved. Axes: F(ρ), with ρ = f(x), against x.)
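A minimal sketch (ours) of the CEO’s calculation: Equation (2) gives the required depth of the hierarchy for Christie’s acceptance probability ρ0 ≈ 0.88 at the reservation value, and the screening functions in (4) confirm that five managers meet the α = 0.55 target while four do not.

```python
import math

rho0 = 0.88    # Christie's probability of accepting a project at the reservation value
alpha = 0.55   # tolerated acceptance probability for such a project

# Equation (2): smallest hierarchy with F_Hn(rho0) = rho0**n <= alpha
n = math.ceil(math.log(alpha) / math.log(rho0))
print(n)                          # -> 5 (log 0.55 / log 0.88 is about 4.68)

# Equation (4): check the four- and five-member hierarchies directly
for k in (4, 5):
    print(k, round(rho0**k, 4))   # 0.88**4 ~ 0.5997 > 0.55; 0.88**5 ~ 0.5277 <= 0.55
```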
We have illustrated how a decision-making bias can
be removed such that the incidence of Type I and
Type II errors is balanced around the CEO’s reservation value. By choosing an appropriate design,
including both hierarchical and polyarchical elements,
it is possible to achieve further improvements that
reduce both Type I and Type II errors. To achieve
this, the CEO must use a self-dual evaluation structure (a formal characterization and proof for optimality are provided in §3.2). Such an evaluation structure
achieves a symmetric improvement of a screening
function. The self-dual graph used here will also
appear in our general proof. To use this self-dual
graph, the CEO hires five unbiased managers and
places them in the structure, G∗ , as shown in Figure 1.
If the only available managers are of Christie’s type,
another possibility is to design a structure where each
of the nodes in the self-dual graph is a five-member
hierarchy. As shown in Figure 2, Christie’s unbiased
screening function is further steepened. As a result,
such decision-making organizations make better evaluations than organizations using managers of similar
ability on an individual basis.
Our general proof shows how many agents are
required to achieve a given level of reliability, given
their inherent ability. In principle, the self-dual graph
could be used repeatedly to approach perfect screening. For each repeated use, the decision-making organization increases by a factor 5; very quickly, this
organization would grow to unrealistic proportions.
The important point, however, is that significant
incremental improvements can be achieved with a
small number of agents. Even if we abstract from
decision-making costs here, it should be clear how
our approach identifies cost components and tradeoffs. The relevant question from the point of view of
a practical application is what the marginal costs and
gains from a redesign of the decision-making organization (gains versus personnel costs and reorganization costs) are. We provide further illustration from a
real-world example from a bank.
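The effect of repeated composition can be made concrete with a short sketch (ours). The closed form FG∗(ρ) = 3ρ² − 2ρ³ is our own derivation from the wiring of G∗ described with Figure 1; each recursive expansion replaces every agent with a copy of the current structure, so the member count grows by a factor of five while the screening steepens around ρ = 1/2.

```python
# Sketch: repeated recursive expansion with the self-dual graph G* of Figure 1.
# F_Gstar(rho) = 3*rho**2 - 2*rho**3 is derived (by us) from its wiring: one agent
# whose acceptance is checked by a two-member polyarchy and whose rejection is
# checked by a two-member hierarchy.

def f_gstar(rho):
    return 3 * rho**2 - 2 * rho**3

def expanded_screening(expansions, rho):
    """Reduced screening after repeatedly replacing each agent by a copy of G*."""
    for _ in range(expansions):
        rho = f_gstar(rho)
    return rho

if __name__ == '__main__':
    for s in range(4):
        agents = 5 ** s
        print(f"{agents:4d} agents: F(0.40) = {expanded_screening(s, 0.40):.4f}, "
              f"F(0.60) = {expanded_screening(s, 0.60):.4f}")
```

Three expansions (125 agents) already push F(0.40) below 0.20 and F(0.60) above 0.80, which is the sense in which significant incremental improvements are available at modest size.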
Christie’s
bias removed
0.6
0.4
Christie’s
screening
further
improved
0.2
0
–3
–2
–1
0
1
2
3
X
3.2. Optimality of Self-Dual Committees
We now characterize decision structures that are
optimal with respect to increasing judgmental discrimination. The aim is to identify the most effective symmetrical graph screening function, i.e.,
the decision-making structure that most effectively
reduces both Type I and II errors. Such a symmetrical
screening function characterizes a self-dual graph. At
the limit, a self-dual graph will approach a step function; i.e., projects below some criterion are accepted
with probability 0 and projects above this criterion
will be accepted with probability 1. That is, we identify the optimal self-dual graph.
Theorem 3. For any positive integer n, let 𝒮n be the set of all self-dual graphs that can be constructed from agents indistinguishable with respect to project screening, such that the maximal evaluation count is no more than n. The slope of the screening polynomials at ρ = 1/2 of the graphs in 𝒮n cannot exceed that of

FGn∗(ρ) = Σ_{i=(n+1)/2}^{n} Bi ρ^i, with Bi = (−1)^(i−(n+1)/2) · Γ(n+1) / (Γ((n+1)/2)² · i) · C((n−1)/2, i − (n+1)/2),   (5)

and at least one graph Gn∗ ∈ 𝒮n has this reduced graph screening.

Here, Γ is the Gamma function and C(·, ·) denotes the binomial coefficient. Interestingly, the screening polynomial given by Equation (5) matches the screening of a self-dual committee (Sah and Stiglitz 1988). Such a committee always consists of an odd number n of members and consensus k = (n + 1)/2, i.e., the simple majority rule. The proof in §A.2 of the appendix shows how to build a graph with the optimally discriminating screening without using consensus rules. These are the graphs with the steepest graph screenings, with slope

FGn∗′(1/2) = Γ(n + 1) / (2^(n−1) Γ((n + 1)/2)²)   (6)

in the middle of the reduced screening interval.
This proof has two stages. First, the polynomial view
finds the optimal reduced screening function with
respect to discriminating ability. Then the topological view shows by construction that there exists a
unique graph with the found optimal screening polynomial. For intuition, note that the self-dual graph
G∗ shown in Figure 1 has symmetric checks on optimistic and pessimistic evaluations. That is, optimistic
(pessimistic) evaluations are checked once before they
are accepted (rejected) by the organization. Note further that a reversal in a decision leads to an additional check. For example, if an optimistic evaluation
is rejected, there will be an additional check. But this
additional check is omitted when the first two agents
are consistently positive. This construction serves to
promote a symmetric reduction of Type I and Type II
errors that is characteristic of self-dual structures.
The proof establishes that there exist unique self-dual structures that correspond to optimal screening
polynomials.
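As a numerical cross-check on Theorem 3 (our sketch, not part of the paper), the screening of a simple-majority committee of odd size n can be computed directly from the binomial distribution; its slope at ρ = 1/2 matches Equation (6), and for n = 3 it coincides with the slope of FG∗(ρ) = 3ρ² − 2ρ³.

```python
import math

def committee_screening(n, rho):
    """Acceptance probability of an n-member (n odd) simple-majority committee:
    at least (n + 1)/2 members must accept."""
    k = (n + 1) // 2
    return sum(math.comb(n, i) * rho**i * (1 - rho)**(n - i) for i in range(k, n + 1))

def slope_at_half(n, h=1e-6):
    # central finite difference around rho = 1/2
    return (committee_screening(n, 0.5 + h) - committee_screening(n, 0.5 - h)) / (2 * h)

def equation_6(n):
    # Gamma(n + 1) / (2**(n - 1) * Gamma((n + 1)/2)**2)
    return math.gamma(n + 1) / (2**(n - 1) * math.gamma((n + 1) / 2)**2)

if __name__ == '__main__':
    for n in (3, 5, 7):
        print(n, round(slope_at_half(n), 4), round(equation_6(n), 4))
    # n = 3 gives 1.5 in both columns, the slope of 3*rho**2 - 2*rho**3 at rho = 1/2
```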
3.3. Stair Graphs
The final family of graphs used in the main proof is
stair graphs. A graph is a stair graph if and only if
it has one entry point and no loops and each agent
has at most one channel of communication to another
agent. The generic stair graph is therefore just a chain
of agents arranged as a hierarchy whose last agent
rejects to a polyarchy whose last agent accepts to a
hierarchy, etc. As in the family of committees of various degrees of consensus, hierarchies and polyarchies
are also the extremes of the family of stair graphs.
The stair graphs have been chosen for (at least)
three reasons. First, they have monotone graph
screening polynomials (because there is no branching
in the structure), which is a crucial property if any of
the work of Moore and Shannon (1956a, b) is to be
reused in the current setup. Second, the size equals
the maximal evaluation count. Third, they are very
effective at removing bias, as the following theorem
shows.
Theorem 4. For any 0 < ρ0 < 1 and any 0 < δ < d ≡ min(ρ0, 1 − ρ0), a stair graph G exists with no more than

n = log(δ/2) / log d   (7)

agents satisfying the relation:

FG(ρ0 − δ) < 1/2 < FG(ρ0 + δ).   (8)
The proof goes by sequential construction, ensuring convergence at the desired point ρ0 as the number
of agents is increased. The proof is found in §A.3 of
the appendix. The intuition is that hierarchies are the
most effective way to move a screening function to
the “right,” and polyarchies are most effective in moving a screening function to the “left.” Because pure
hierarchies (polyarchies) can overshoot the desired
point ρ0, including a polyarchical (hierarchical) element can always achieve the necessary correction. In
effect, this leads to construction of stair-like graphs
with a few polyarchical (hierarchical) “interruptions”
of longer hierarchical (polyarchical) sequences.
More precisely, the theorem shows that any screening function can be modified via a stair graph to cross
from mainly rejecting to mainly accepting projects at
any ρ0 with an arbitrarily low tolerance δ. Picking
the shift point to occur within the set of projects of
zero value, V = 0, will remove any bias of the original
screening under the assumptions of perfect and positive correlation between screening and project value.
4. Approaching Perfection
Our approach reuses and upgrades the techniques
used in Moore and Shannon’s (1956a, b) proofs. We
cannot build directly on the original proofs because
they were limited to structures comprised of electrical components. As previously mentioned, this limitation shows up as the Moore-Shannon restriction,
which rules out that lower-level members of an organization can be assigned the final decision-making
authority (on behalf of the entire organization) to
accept or reject a project. We therefore redo the proofs
with different topological structures that are consistent with the way Sah and Stiglitz (1985, 1986,
1988) and others have characterized human decision-making organizations.
Because most readers are probably not familiar with
Moore and Shannon’s (1956a, b) original proofs, it
is useful to briefly summarize the main result and
assumptions. First, starting from a single component,
they showed how the graph can be incrementally
improved when additional components are added.
Second, they showed (see Theorem 6) that it is possible to build a graph out of unreliable electric components (relays) with a screening performance that
deviates arbitrarily little from perfection at the infinite
limit. The original results build on the following three
assumptions: (1) the agents are able to discriminate
between projects with positive and negative value;
(2) the agents are more likely to accept projects with
positive than with negative value, i.e., the correlation
between project value and agent screening is perfect
and positive; and (3) the graph screening function is
monotone (see Moore and Shannon 1956a, b, Equation (4)).
Because we break with the Moore-Shannon restriction to characterize (human) decision-making
organizations, we redo the original proof with new
topological structures and thereby introduce a new
formalism. It is important to note that our new graph
formalism lifts assumption (3) so that our results hold
even if the graph screening is nonmonotone. This is
necessary because the assignment of final decision-making authority to lower-level members will often
lead to nonmonotone graph screening. We proceed
as follows. First, we establish Moore and Shannon’s (1956a, b) result for decision-making organizations, i.e., human organizations that are built of individuals who can, in principle, be assigned the final
decision-making authority. This result is established
under assumptions (1) and (2). Second, we show that
it is possible to build reliable decision-making organizations of unreliable human agents even if assumption (2) is dropped. That is to say, if only the agents
are able to discriminate between projects with positive and negative value, a reliable decision-making
organization can be designed. The surprising implication is that it is possible to design reliable organizations even when individuals are severely misguided
such that they confuse negative project value with a
positive value and vice versa.
4.1. Perfection: The Single-Step Function
We now establish Moore and Shannon’s (1956a, b)
result for (human) decision-making organizations
under assumptions (1) and (2). The projects can be
divided into two disjoint sets according to the sign
of their value.

Figure 3   A Graph Screening Function That Separates Disjoint Sets of Project Value on the Basis of Screening Outcomes in a Point ρ0 Satisfying Condition (9)
(Axes: F(ρ) against ρ; markers at ρ0 − δ, ρ0, ρ0 + δ on the horizontal axis and at ε, 1/2, 1 − ε on the vertical axis.)

A perfect positive correlation between
screening outcomes and project value ensures that the
corresponding sets are disjoint and ordered in a similar way. The proof operates in the reduced screening
function space. As a consequence, the task of constructing a graph screening that deviates arbitrarily
little from perfection can be described as constructing
a graph whose reduced screening polynomial F shifts from a probability arbitrarily close to 0 to a probability arbitrarily close to 1 within an arbitrarily narrow interval around some point ρ0. The point of the desired shift, ρ0, in the graph screening function is any point that separates the two disjoint sets relating to screening outcomes.
At this point, it is useful to define perfection at the level of decision-making organizations, i.e., the perfect graph screening function. The perfect graph screening function is identical to the perfect agent screening function, Θ ∘ V. The deviation from perfection can be expressed by two parameters, ε and δ. As shown in Figure 3, the parameter δ defines the interval around ρ0. The parameter ε defines the deviation from the extreme screening outcomes of 0 and 1. The error rates of the screening polynomial should not exceed ε outside the interval [ρ0 − δ, ρ0 + δ]:

F(ρ0 − δ) < ε and F(ρ0 + δ) > 1 − ε.   (9)

The boundary points, ρ0 ∈ {0, 1}, are trivially dealt with by polyarchies and hierarchies of increasing size, as shown in the extremity theorem (see Theorem 2). For any other ρ0, we now present a theorem that includes Moore and Shannon’s (1956a, b) perfection theorem as a special case:
Theorem 5. Given any position 0 < ρ0 < 1 for the shift in graph screening, any threshold 0 < ε < 1/2, and any radius 0 < δ < min(ρ0, 1 − ρ0), an architecture can be constructed from no more than

nck ≤ 25 · [(1/2) · log(δ/2) / log(min(ρ0, 1 − ρ0))]^(log 5/log(11/8)) · [log(3ε) / log(3/4)]^(log 5/log 2)   (10)

agents whose reduced graph screening polynomial fulfills condition (9).
If ε > 1/3, the number of agents to achieve the required level of reliability in Theorem 5 reduces to

nck ≤ 5 · [(1/2) · log(δ/2) / log(min(ρ0, 1 − ρ0))]^(log 5/log(11/8)).   (11)
The major obstacle in the generalization of the original proof (Moore and Shannon 1956a, b) originates
in the fact that human decision-making structures
can delegate decision rights to lower hierarchical levels. Including this characterization of human
organizations in the formalism will often lead to
nonmonotone reduced graph screening functions.
Because the original proof only goes through under
the assumption of monotonicity (see the fundamental
Equation (4) of Moore and Shannon 1956a, b), it does
not hold for all social and economic decision-making
organizations. Moreover, the specific graphs used in
the original proof cannot be applied here because
they are not general enough to include social agents
that have the powers to reject or accept a project on
behalf of the organization. To overcome these obstacles, we extend Moore and Shannon’s (1956a, b) proof
by including entirely new structures whose members
can be assigned the final decision-making authority to
accept or reject a project. Our result includes Moore
and Shannon’s (1956a, b) original proof as a special
case.6
⁶ For comparison, the original bound of Moore and Shannon translated to the present setup is

nms ≤ 81 · [(1/2) · log(δ/2) / log(min(ρ0, 1 − ρ0))]^(log 9/log(3/2)) · [log(√8 ε) / log(1/√2)]^(log 9/log √3).

As can be seen, performance is always better for human decision-making structures than for electrical circuits (as treated by Moore and Shannon). This is because our proofs build on an assumption that has relaxed the Moore-Shannon restriction. Because this gives us fewer constraints upon optimization or upon incremental change, performance will be better (or at least no worse) than with the Moore-Shannon formalism.
Sketch of Proof. The proof can be found in §B of
the appendix, and it uses a technique similar to that
of Moore and Shannon (1956a, b) for constructing
an explicit graph. To ease the exposition, the proof
mainly elaborates on the extensions that are necessary
to generalize Moore and Shannon’s (1956a, b) proof
to encompass graphs that include human agents with
powers to reject and accept projects on behalf of the
organization.7 This paper uses the technique of constructing an explicit graph provided by Moore and
Shannon (1956a, b). As in Moore and Shannon (1956a,
b), the proof will, for reasons of mathematical convenience, be provided in a construction process with
three steps, referred to as the opening, middle game, and
end game.
The opening game consists of finding an architecture with n members such that the point ρ0 is the position of the shift in the reduced graph screening function. That is to say, the graph screening function is moved over such that it intersects the value 1/2, F(ρ0 − δ) < 1/2 and F(ρ0 + δ) > 1/2. The middle game consists of steepening the graph by recursive expansion (thus increasing n) such that F(ρ0 − δ) < 1/4 and F(ρ0 + δ) > 3/4. Recursive expansion is the procedure of replacing each agent in a self-dual and highly discriminating graph with a copy of
the architecture found in the opening game. The end
game then consists of further steepening the graph
by recursive expansion to obtain the required level of
reliability with a total of n agents such that condition (9) is fulfilled. The solution to the general problem is a total of s expansions of a simple graph having
as agents the architecture found in the opening game.
This procedure is known as composition.
The intuition behind the proof relates organizational performance (in terms of error reduction) to
the properties of decision-making graphs. A special
class of graphs has the property that the graphs will
steepen the slope of the screening function when they
are recursively expanded; i.e., each node is replaced
with a copy of the graph itself. These graphs are
referred to as self-dual graphs. With each expansion,
more agents are added and the decision-making structure becomes more discriminating.
The boundary points are dealt with by applying
the extremity proof (Theorem 1); i.e., the hierarchy is
used to steepen the screening function at the “right”
boundary and the polyarchy is used in a similar way
at the “left” boundary. Finally, note that the extremity
proof also shows that the most effective way of moving a screening function to the desired part of the
project distribution is to use a hierarchy (move to the
“right”) or a polyarchy (move to the “left”). If hierarchies or polyarchies “overshoot” the movement, a modified stair graph is used (as explained earlier). If necessary, the graph can be adjusted so the desired level of reliability is achieved. This is done by nesting the graph in a hierarchy or polyarchy (or a modified stair graph).
⁷ Moore and Shannon’s (1956a, b) proof is readily available in the original as well as in reprint (Sloane and Wyner 1992).
The proof then ensures that the desired level of
reliability (in terms of minimizing both Type I and
Type II errors) is achieved by adding the fewest possible agents. This means that the proof can be used for
design purposes, i.e., to achieve the maximal incremental improvement or to build the most reliable
structure, given the available number of individuals.
Below, we use an application from a bank to illustrate
how this can be done in practice.
4.2. Perfection: The Multistep Function
Theorem 5 shows that decision-making organizations
can be designed to deviate arbitrarily little from the
limit of perfect reliability under all the three abovementioned assumptions. This result is now established in the more general case. We require the
minimal assumption that agents are able to discriminate between disjoint sets of positive and negative project value, but drop the two remaining
assumptions.
We do not require that the correlation between
project value and agent screening is perfect, positive, or even different from zero. That is to say, we
do not require a monotone agent screening function,
which is an advantage because we are able to design
decision-making organizations that can repair serious human error. Second, we do not require a monotone graph screening function. When we replace relays
by human agents, who have powers to individually
accept or reject projects on behalf of the decision-making organization, the graph screening polynomial is not necessarily monotone even if the agent
screening is monotone. To the best of our knowledge,
this problem has not been recognized (or solved) in
previous research. In the following, we establish a
new version of Moore and Shannon’s (1956a, b) theorem of perfection that is valid even in the case of
nonmonotone agent screening or nonmonotone graph
screening or both.
Assume that agents are merely able to discriminate
between projects with positive and negative value.
That is, let projects fall within disjoint sets:
A− ∩ A+ = ∅.   (12)

Here it will be assumed that the agent screening function and the value mapping are both piecewise continuous, which implies that A− ∪ A+ is at most split into a finite number of intervals.

Theorem 6. Given any threshold 0 < ε < 1, a series of m (odd) shift points in reduced space

0 < φ1 < φ2 < · · · < φm < 1,   (13)

and a radius 0 < δ < mini (φi+1 − φi)/2, a graph can be constructed whose screening will jump from below ε to above 1 − ε (and back alternatingly) within δ of the φi’s.
Sketch of Proof. A graph with the postulated screening function is built from the single-step functions of Theorem 5 using a sufficiently small ε′ and reusing δ. As Figure 4 shows, the graphs shifting at the required appearances are lined up into a hierarchy, starting with φ1 closest to entry. The ones shifting from 0 to 1 must reject to the termination node, and the rest must reject to polyarchies (this follows from Theorem 1) large enough to ensure almost certain acceptance as required by the threshold. In case φ1 ≤ δ or φm ≥ 1 − δ, the first or last single-step graph, respectively, must be replaced by a suitable polyarchy or hierarchy according to Theorems 1 and 2.
Assuming that polyarchies (and hierarchies, if needed) complete the required shift within the same ε′ and δ as the generic single-step graphs, it is easy to show that the total graph will have a graph screening that meets the required threshold if ε′ ≤ ε/m is used. Finally, the assumption regarding the polyarchies is satisfied (again according to Theorem 2) by picking n ≥ log ε′ / log(1 − φ1 + δ).
In practice, a graph displaying the desired multistep screening function can be obtained by first finding single-step graphs using a stricter ε′ < ε but reusing δ. Then these graphs are arranged into a
hierarchy, with every other subgraph rejecting to
a suitably large polyarchy, as shown in Figure 4
(three jumps). An example of a multistep graph
screening function obtained from this procedure is
shown in Figure 5 (five jumps).
Figure 4   Graph for Multistep Function
(Nodes shown: I, S1, S2, S3, Pn, F, T.)
Notes. Social and economic decision-making organizations may have nonmonotone screening capabilities as a function of project appearance, which allow multistep functions shifting at φ1, φ2, φ3 (generalizable to any odd number of shifts). These properties can be utilized to create more general architectures which screen perfectly whenever the appearances of good and bad projects fall within disjoint sets. Full lines are acceptance edges and dashed lines are rejection edges.
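A short sketch (ours) of one way to realize the three-jump pattern of Figure 4 in reduced space. The steep logistic curves below are stand-ins for the screenings of the single-step subgraphs delivered by Theorem 5 (their exact polynomials are not reproduced here), and Pn is approximated by a 60-member polyarchy; S1 and S3 reject to termination while S2 rejects to the polyarchy.

```python
import math

def single_step(phi, sharpness=200.0):
    """Stand-in for a single-step subgraph from Theorem 5: screening close to 0
    below phi and close to 1 above phi (a steep logistic, for illustration only)."""
    return lambda rho: 1.0 / (1.0 + math.exp(-sharpness * (rho - phi)))

def polyarchy(n):
    return lambda rho: 1.0 - (1.0 - rho) ** n

def multistep(rho, phis=(3/18, 9/18, 11/18), poly_size=60):
    """Three-jump screening of the Figure 4 arrangement: accepted overall iff
    S1 accepts and then either S2 and S3 accept, or S2 rejects and Pn accepts."""
    s1, s2, s3 = (single_step(p) for p in phis)
    p_n = polyarchy(poly_size)
    return s1(rho) * (s2(rho) * s3(rho) + (1.0 - s2(rho)) * p_n(rho))

if __name__ == '__main__':
    for rho in (0.05, 0.30, 0.58, 0.80):
        print(f"rho = {rho:.2f}  F = {multistep(rho):.3f}")
    # pattern: low, high, low, high -- jumps up near 3/18, down near 9/18, up near 11/18
```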
Figure 5   Example of a Multistep Function with ε = 1/20, δ = 1/54, and Shifts {3/18, 9/18, 11/18, 13/18, 15/18}, as Constructed by the Method Devised in the Proofs of Theorems 5 and 6
(Axes: F(ρ) against ρ.)

4.3. An Example: Organizing Credit Evaluation in Bank F
This example is based on a case study of bank F. It has a number of local branches, where credit advisors evaluate applications from business clientele. The evaluations result in immediate approval, rejection or, as is often the case, referral to a credit officer in the bank’s central credit unit. The credit officer can approve, reject, refer the project to the next layer, or in some cases consult with a colleague at the same level. A number of advisers work at that level, and one of these may finally approve or reject the project. Or the project may be pushed on to its final station, the head of the central credit unit. We are here considering applications of modest size (approximately $1 million) that occur rather frequently (we considered a sample of 209 recent credit applications).
The common measure of the efficacy of a bank’s credit evaluation is the number of defaults, a term that refers to the frequency of losses (error rates). The bank has a good estimate of Type II errors (defaults) but little information on Type I errors (rejecting good applications). The error rate for this bank is approximately 0.5%. The official policy of this bank is to “caution all evaluators to be mindful of the balance between risk and reward.” In practice, this translates into a conservative policy of evaluating a number of indicators that are thought to correlate with risk of default.
The bank’s objective is to minimize the incidence of Type I and Type II errors subject to the constraints of the number of available evaluators (system size) and their ability. The probability of a Type I error (rejecting a good project) is⁸

PI = ∫ Θ(V(x)) (1 − fG(x)) Φ(x) dx,   (14)

where Φ(x) is the distribution of projects, and the probability of a Type II error (accepting a bad project) is

PII = ∫ (1 − Θ(V(x))) fG(x) Φ(x) dx.   (15)

⁸ Percentages (conditional probabilities) are readily obtained from these quantities, and a performance measure can easily be constructed by a suitable weighting of the errors.
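Equations (14) and (15) become operational once a screening function and a project distribution are specified. The sketch below (ours) assumes a standard normal density for Φ(x), which the paper does not report for bank F, takes V(x) = x as in the example that follows, and uses a sigmoid screening of the form in Equation (16).

```python
import math

def project_density(x):
    """Assumed project distribution Phi(x): standard normal, purely illustrative."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def agent_screening(x, x0=0.0, dx=1.0):
    """Sigmoid screening of the form in Equation (16)."""
    return 0.5 * (1.0 + math.tanh((x - x0) / dx))

def error_rates(screen, lo=-8.0, hi=8.0, steps=4000):
    """Type I (14) and Type II (15) error probabilities with V(x) = x, computed
    by trapezoidal integration."""
    h = (hi - lo) / steps
    p_one = p_two = 0.0
    for i in range(steps + 1):
        x = lo + i * h
        w = h * (0.5 if i in (0, steps) else 1.0)
        good = 1.0 if x >= 0 else 0.0                                 # Theta(V(x)) with V(x) = x
        p_one += w * good * (1.0 - screen(x)) * project_density(x)    # reject a good project
        p_two += w * (1.0 - good) * screen(x) * project_density(x)    # accept a bad project
    return p_one, p_two

if __name__ == '__main__':
    print(error_rates(agent_screening))                      # a single (unbiased) agent
    print(error_rates(lambda x: agent_screening(x) ** 5))    # a five-member hierarchy of such agents
```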
To derive specific prescriptions it is useful to consider the agent’s discriminating ability against the
system’s tolerance of error; i.e., what is the tolerance for uncovered losses on a credit application? The
system’s tolerance of error is the maximal absolute
project value for which errors have consequences that
can be ignored. A system that tolerates no error whatsoever requires a zone of uncertainty that is zero. Such
a system requires employment of an omniscient decision maker, f = Θ ∘ V, but these are not on the job market.
The zone of uncertainty defines the range of project
values where Type I and Type II errors will be committed (like a confidence interval in statistical theory).
In bank F, the system necessarily has some zone of
uncertainty. The aim is to design the credit evaluation
system so this zone excludes credit applications with
unacceptably high losses.
From a design perspective, it is further useful to
make a distinction between judgmental bias and ability. Judgmental bias is a deviation from symmetric
screening around the point of zero project value. An
employee may systematically be overoptimistic and
accept credit applications that turn out to be losing
propositions. What is more likely is that an employee
may have a “healthy” conservative bias that tends
to exclude good applications. A biased agent screening function is skewed. If the skew is significant, the
system’s tolerance of error may no longer include
the agent’s zone of uncertainty. The agent’s judgmental ability is a different matter. Even an unbiased
agent may have too little discriminating ability, which
means that the agent’s zone of uncertainty exceeds
the system’s tolerance for error. There are two complementary approaches to improving the system. The
first is simply to hire evaluators with superior ability
and replace evaluators with intolerable performance.
The second concerns the design of the evaluation system, given the present level of judgmental ability.
To illustrate the application of our theory, we consider how bank F could dramatically improve credit
evaluation by choosing a design that both removes
judgmental bias and increases the discriminating ability. We actually conducted an experiment in bank F
to extract a screening function from 40 randomly
selected credit evaluators. A mixture of 12 indicators
that the bank commonly uses to evaluate credit applications was selected, and a number of fake applications were constructed. The fake applications had
known quality and frequency, so it was possible to
extract the average screening function. It was sigmoid and it was fitted to the function y = 0.50 + tanh[(x − 3.06)/5.43] with less than 0.5% unexplained variance.
In the following example, we model credit applications as the scalar value V(x) = x. Consistent with the evidence from bank F, we use a sigmoid agent screening function

f(x) = [1 + tanh((x − x_0)/Δx)]/2,   (16)

where x_0 is an arbitrary bias and 1/f′(x_0) = 2Δx is
the zone of uncertainty. The tolerance x_tol defines the interval [−x_tol/2, x_tol/2] around zero, containing credit applications of little consequence (most of the risk is covered by collateral and other securities). We set the scale of the zone of uncertainty to unity, Δx = 1, and measure the system's tolerance of error on this scale. That is to say, the agent's zone of uncertainty coincides with the system's tolerance when x_tol = 2. If the system's tolerance is significantly lower than the agent's zone of uncertainty (x_tol < 1/10), the agent is not sufficiently reliable. As the system approaches perfect reliability, f(0) → 1/2, x_tol → 0, and f′(0) → ∞.
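A small numerical check (a sketch under the normalization Δx = 1, not a bank F calculation) illustrates the point: when the tolerance band is much narrower than the agent's zone of uncertainty, the screening is still close to 1/2 at the band's edges.

```python
import numpy as np

def f(x, x0=0.0, dx=1.0):
    # sigmoid agent screening of Eq. (16)
    return 0.5 * (1.0 + np.tanh((x - x0) / dx))

for xtol in (2.0, 1.0, 0.1):
    print(f"xtol = {xtol}:  f(-xtol/2) = {f(-xtol/2):.3f},  f(+xtol/2) = {f(xtol/2):.3f}")
# xtol = 2 (tolerance equal to the zone of uncertainty): the screening already leans clearly
# toward rejection/acceptance at the band edges; xtol = 0.1: it is still near 1/2 there,
# i.e., the agent is not sufficiently reliable for such a demanding system.
```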
One advantage of the three-step construction of
Theorem 5 is that the first step, the opening game,
removes any bias in the graph screening function.9 By
application of the first step in the design of a reliable
system, the graph screening function is moved over
such that it intersects the value 1/2 with arbitrary precision. After application of this procedure, the screening of the system is unbiased. Figure 6 shows how
a biased agent screening function can be repaired.
With a few agents (four in the example), any level of
judgmental bias can be removed. The example further
shows how an unbiased system, G, can be steepened
by applying the middle and end game.
In the example, the unbiased graph G is steepened by one composition with the self-dual graph G∗. This requires a system including a total of 20 agents. This example is a forceful demonstration that the procedures provided in the present study can be used to achieve incremental improvements even for small decision-making organizations. In bank F, such design improvements may significantly decrease the unacceptable defaults.

^9 Still assuming a reasonable overlap between the zone of uncertainty and level of tolerance, the graph screening will not be much distorted, only shifted. If this assumption does not hold, the two remaining steps of the construction procedure must also be applied to (re-)steepen the screening function.

Figure 6    Repairing a Biased Screening Function
[The figure plots F(θ), θ = f(x), against project value x for the biased agent screening, the unbiased graph G (4 agents), and the unbiased and steeper graph G′ (20 agents).]
Notes. Given a low tolerance x_tol = 1/25, a fairly large bias x_0 = −0.3 can be removed with just four agents organized in a stair graph (a two-member polyarchy in which the last agent accepts to another two-member polyarchy). Using the procedure outlined in the proof, the graph can be further steepened if the structure is designed from 20 agents using one composition.
More generally, Table 1 shows the number of agents
needed to produce an unbiased graph screening function. These numbers were obtained from the application of the opening game of Theorem 5. Table 1 shows
improvements of reliability in terms of the reduced
graph screening polynomial F_G. We assume that the bank employs fairly reliable agents in accordance with our experiment. A reliable agent has a high value of ε because its agent screening function maps alternatives onto a probability that is close to 0 (alternatives with negative value) or 1 (alternatives with positive value) outside the interval of little importance. Fewer agents are required to repair a system with a high level of ε.
To increase the reliability of the system, any initial bias must first be removed by the procedure outlined in the opening game of Theorem 5.
Table 1    Number of Agents Needed in the Stair Graph (from the Opening Game) to Remove Any Initial Bias θ_0 ≠ 1/2, for Any Agent Screening

θ_0 \ ε    0.0001    0.0010    0.0100    0.1000    0.4000
0.45          4         4         4        (1)       (1)
0.35         12         7         5        (2)       (1)
0.25         12         5         5        (2)       (1)
0.15         10         7        (4)       (3)       (1)
0.05         28        20       (12)       (5)       (2)

Notes. The numbers in parentheses are polyarchies. Values of θ_0 > 1/2 are obtained by symmetry through dual graphs.
Table 2    System Sizes Obtained by Explicit Construction According to Specified Thresholds δ and Radii ε for a Single Shift Graph with θ_0 = 1/2

ε \ δ     0.0100    0.0050    0.0010    0.0005    0.0001
0.400        9         9        27        27        27
0.450        3         9         9         9        27
0.490        1         3         3         3         9
0.495        1         1         5         3         3
0.499        1         1         1         3         3
Further improvements can then be achieved by increasing the
system’s discriminating ability. The middle and end
game of Theorem 5 provides the procedure to achieve
a desired level of discriminating ability by improving an unbiased graph screening function. We now
illustrate how the results of this paper can be used to
accomplish this. Table 2 shows the number of agents
that is necessary to steepen a symmetric (unbiased)
screening function (θ_0 = 1/2) to various desired levels
of reliability.
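The symmetric case of Table 2 can be checked with a short sketch. It assumes that the construction simply composes the three-member optimal self-dual graph of Theorem 3 (reduced screening 3θ² − 2θ³) with itself, so that system sizes are powers of three; under that assumption it reproduces most, though not all, of the reported sizes, and the paper's explicit construction may differ in detail.

```python
def F3(t):
    # three-member optimal self-dual graph (Theorem 3, n = 3): F(t) = 3 t^2 - 2 t^3
    return 3 * t ** 2 - 2 * t ** 3

def system_size(eps, delta, max_compositions=6):
    """Smallest 3**k such that the k-fold composition of F3 satisfies
    F(1/2 - eps) <= delta (and, by self-duality, F(1/2 + eps) >= 1 - delta)."""
    t = 0.5 - eps
    for k in range(max_compositions + 1):
        if t <= delta:
            return 3 ** k
        t = F3(t)
    return None   # threshold not reached within the allowed number of compositions

for eps in (0.400, 0.450, 0.490, 0.495, 0.499):
    print(eps, [system_size(eps, d) for d in (0.0100, 0.0050, 0.0010, 0.0005, 0.0001)])
```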
As Tables 1 and 2 show, it is possible to achieve dramatic improvement in reliability through the proper
design of an evaluation structure. Even if a large
number of agents is needed to reach a high level
of reliability when starting from incompetent agents,
it is always possible to obtain significant incremental improvement with a handful of agents, either by
diminishing the bias or by narrowing the agent’s
zone of uncertainty. Overall, our example from bank
F illustrates how our model can be used to design
decision-making organizations. In actual practice, the
application in bank F led to a number of suggested
improvements. Further obvious applications are those
where a system of evaluators jointly considers a high
number of comparable projects. Examples include
insurance companies, military intelligence, and medical diagnostics.
5. Conclusion
This paper advances a general treatment of decisionmaking organizations that allows one to consider
the full range of organizational architectures between
extreme centralized and decentralized forms—often
referred to as hierarchies and polyarchies. Thereby
one can design decision-making structures that trade
off Type I errors (of rejecting a superior alternative)
and Type II errors (of accepting an inferior alternative) as the relative degree of hierarchy and polyarchy
shifts. We provide proofs that show how decisionmaking organizations can be constructed so they maximize reliability for a given number of agents. We also
show how incremental improvement can be achieved
when additional components are added. This allows
organizational designers to examine the change in the
overall reliability of the organizational structure as
the number of actors within the organization changes.
Our model can directly be used for a qualitative
assessment of actual decision-making organizations
such as boards, corporate headquarters, purchasing
departments, and other management teams. An application from a bank illustrated how our framework can
be used in practice. Our contribution is to show how
organizations in general can improve the quality of
collective decisions. Our proofs have direct applicability but may also be used to develop algorithms that
can answer specific concerns relating to performance
trade-offs. Recent work by Csaszar (2009) provides a
very useful step in this direction.
Our results are derived from a new extended version of an important theorem in information theory,
the Moore-Shannon (1956a, b) theorem. This theorem showed how systems can incrementally approach
perfect reliability even though all components are
imperfect. Although this theorem offers useful inspiration, it cannot be directly applied to the design
of decision-making structures because it is restricted
to systems of relays and similar electric components.
The Moore-Shannon restriction has the immediate
implication that there will be inherent dissimilarities
between actual human organizations and organizations of electrical components. The most important dissimilarity originates in the fact that human decision-making structures can delegate decision rights to
lower hierarchical levels. This observation breaks with
the assumptions of the original proof. Our approach
therefore builds on the very different topological
structures that are consistent with the way Sah and
Stiglitz (1985, 1986, 1988) and others have characterized human organizations.
As a model of organizational decision making,
a number of limitations should be considered. First,
we abstract from decision-making costs. In practice,
the costs of making the decision usually depend
on the size of the organization (the number of agents),
the levels of pay, the choice of compensation method
(e.g., fixed salary or pay per evaluation), and the
possibility of (some) economies of scale with respect
to evaluation. Therefore, total personnel costs would
enter as a constraint on the problem of designing reliable decision-making structures. It is fairly easy to
extend our approach by including the relevant cost
model.
Our study of bank F further indicated that reorganization costs are often important from the point
of view of a practical application, i.e., marginal gains
from re-organization in terms of changes in error
rates or revenue. The estimation of marginal revenue
requires further specification of the project distribution (as we did for bank F). It obviously makes a
huge difference if decision-making structures consider
innovation projects where quality has an exponential
distribution or credit applications where the distribution is Gaussian.
A second limitation concerns our choice of focus.
Prior work on decentralization of information has
provided robust results regarding the way organizational structures can be designed to optimize
efficiency under capacity constraints (Marschak and
Radner 1972, Radner 1993, Bolton and Dewatripont
1994, Van Zandt 1999). The theory of decentralization of information has analyzed design of structures
while largely abstracting from the problems relating
to reliability that we have considered here. By broadening the scope of organizational structures that can
be analyzed from a reliability perspective, it should
be possible to advance a synthesis of the two perspectives. A third limitation concerns incentives. We
have assumed that individual ability (as characterized
by the screening function) is exogenously given. But
incentives are commonly thought to stimulate effort
also with respect to decision making. This implies
that (the expression of) ability is often endogenous
to incentives. Our model further shows that performance signals can be made less noisy by using
appropriate evaluation structures. This implies that
the level of noise in principal-agent models would
become a choice variable (as opposed to the common
assumption that noise is exogenous to the problem).
If organizations have not optimized their evaluation
structures, the implication is that they would undervalue the effect of incentivizing individuals (and overvalue the effect of group-level incentives). This line
of inquiry would merge the information processing
view on organizations with the incentive stream (for
a review, see Colombo and Delmastro 2008). In our
view this is a very promising item on the agenda
for future research. Fourth, we follow prior research
by Sah and Stiglitz (1985, 1986, 1988) in abstracting
from heterogeneity in ability. It would be interesting to gain further theoretical insights on the allocation of employees to positions in the decision-making
structure when there is variation in ability. Intuitively,
one would place the agent with the highest ability (steepest unbiased screening function) where the
information flow is highest, i.e., at low levels in the
organization. However, intuition may be misleading,
because the result would depend on the definition of
the project distribution. So, we need research that provides a detailed examination of the relation among
the economic context (project distribution), individual ability, and decision-making structure. Fifth, we
are abstracting from a rather fundamental aspect of
decision-making organizations, namely, joint learning.
Joint learning is the common situation where individuals influence each other’s learning opportunities.
To the extent that individuals learn from experience,
decision-making structures will have a huge impact
on ability. This is because different structures filter
information in different ways. For example, a hierarchy filters out most information on the way to
the top of the organizational hierarchy. This means
that higher-level managers are deprived of learning
opportunities. We would therefore expect that their
ability to pass judgment would suffer. This line of
inquiry leaves an important unanswered question: To
what extent is ability a function of (formal and informal) organization structures as well as talent?
Although our abstraction from decision-making
costs and the economic context (project distributions and personnel costs) served to provide general
insights, the methods outlined in the present paper
provide answers to two fundamental questions: How
many individuals of a certain ability are required
to build a decision-making organization of a certain
level of reliability? How can maximal incremental
improvements be achieved when individuals of a certain ability are added to the decision-making organization? We thereby advance research on the relation
between organizational design and the efficiency of
managing. Our approach advances a reliability perspective that is complementary to prior work on the
decentralization of information and invites elaborations on the relation between individual ability and
organizational structure.
Acknowledgments
The authors thank Markus C. Becker, Winston T. H. Koh,
Daniel A. Levinthal, James G. March, Roy Radner, Raaj
Sah, Larry Samuelson, and Nils Stieglitz for discussions
and helpful comments on previous drafts. The authors also
thank Martin Krone Dahl for access to the case study of
bank F. This research is supported by a grant from the
Danish Social Science Research Council.
Appendix. Proofs of Theorems
A. Proofs of Extremity, Optimality, and Shifting
The theorems of §3 are proved here.
A.1. Proof of Extremity
The theorem of extremity is remarkably general as discussed below. We first prove the theorem with homogeneous agents and then discuss extensions.
Theorem 1. Let Γ_n for any positive integer n be the set of all homogeneous graphs that can be constructed from agents of type A, such that the maximal evaluation count is no more than n. Let P_n ∈ Γ_n be the polyarchy of n members with reduced graph screening function F_{P_n}, and let H_n ∈ Γ_n be the hierarchy of n members with reduced graph screening function F_{H_n}. Any graph G ∈ Γ_n with reduced graph screening F_G satisfies

F_{H_n} ≤ F_G ≤ F_{P_n}.   (17)
Figure A.1    Effective Form (with Loops Unfolded) of the General Member of Γ_{n+1} Immediately After Entry
[The figure shows the entry node I, the first agent, the final acceptance node F, the termination node T, and the subgraphs G_{a1}, ..., G_{am} and G_{r1}, ..., G_{rm} reached along acceptance edges with weights a_1, ..., a_m and rejection edges with weights r_1, ..., r_m, together with the direct acceptance weight a and termination weight r.]
Notes. The solid lines are acceptance edges, the dashed lines are rejection edges, and G_{aj} (G_{rj}) represent the subgraph to be seen in case of acceptance (rejection) along edge j.
Proof of Theorem 1. The proof runs by induction on the maximal evaluation count n. The basic step, n = 1, is trivial, as all graphs in Γ_1 have the same screening function as P_1 and H_1, both consisting only of one agent. The induction hypothesis consists of the assumption that the theorem holds for Γ_k for all k from 1 and up to some positive n. So by assumption, all graphs in Γ_k have their screening functions bounded by those of P_k and H_k. The recursiveness of the theorem will now be shown by construction.
Consider any graph G ∈ Γ_{n+1} of maximal evaluation count
count n + 1, and let m > 0 be the number of agents in
the architecture. All directed edges from any node to any
other node can be collected into two effective edges without affecting the screening capabilities, a rejection edge and
an acceptance edge having weights equal to the collective
chance of moving between the two nodes in case of rejection
and acceptance, respectively. Furthermore, only graphs with
one entry point from the external node I need to be considered, because other graphs will have screening functions
that are linear combinations of these single-entry graphs.
After having entered the graph at some specific node,
the most general form of the graph is as shown in Figure A.1. Because m agents are in the structure, the number
of possible agents that can receive the project after the initial evaluation is m at most (one of them being the first
agent itself), regardless of the result of the agent screening.
On arrival at the next agent (number j with probability aj
in case of acceptance and rj for rejection), the effective subgraph to be seen by the project has a maximal evaluation
count of n because one evaluation has already been spent.
Thus, the subgraphs G_{aj} ∈ Γ_n and G_{rj} ∈ Γ_n belong to the set
of graphs for which the theorem holds because of the induction hypothesis. This is the cornerstone of the proof.
There is a possibility that the project is terminated or
ultimately accepted directly as a result of the first evaluation. These probabilities are denoted r and a, respectively.
Because the project must leave the agent after evaluation, the weights during such a dispatching must take values in the unit interval and sum up to unity:

a + Σ_{j=1}^{m} a_j = 1   and   r + Σ_{j=1}^{m} r_j = 1.   (18)
The graph screening of G can be expressed recursively in terms of the entry agent and the subgraphs reached after the first evaluation:

F_G(x) = f(x)[a + Σ_{j=1}^{m} a_j F_{G_{aj}}(x)] + (1 − f(x)) Σ_{j=1}^{m} r_j F_{G_{rj}}(x).   (19)
Similar expressions can be obtained for the polyarchy,

F_{P_{n+1}}(x) = f(x) + (1 − f(x)) F_{P_n}(x),   (20)

and for the hierarchy,

F_{H_{n+1}}(x) = f(x) F_{H_n}(x),   (21)

as well.
The recursiveness of the theorem is now established by comparing Equation (19) to Equations (20) and (21). First,

F_{P_{n+1}}(x) − F_G(x) = f(x)[1 − a − Σ_{j=1}^{m} a_j F_{G_{aj}}(x)] + (1 − f(x))[F_{P_n}(x) − Σ_{j=1}^{m} r_j F_{G_{rj}}(x)] ≥ 0,   (22)

where the inequality follows from Equation (18) and the induction hypothesis, making both terms nonnegative. Finally,

F_G(x) − F_{H_{n+1}}(x) = f(x)[a + Σ_{j=1}^{m} a_j F_{G_{aj}}(x) − F_{H_n}(x)] + (1 − f(x)) Σ_{j=1}^{m} r_j F_{G_{rj}}(x) ≥ 0   (23)

is reached by similar arguments.
Invoking the principle of mathematical induction, the
theorem must hold for all n > 1.
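The extremity bounds can also be checked numerically. The sketch below builds the hierarchy and polyarchy screenings from the recursions (20) and (21) and compares them with an arbitrary example graph assembled via the general recursion (19); the weights a = 0.3 and r = 0.4 and the chosen subgraphs are made up purely for illustration.

```python
import numpy as np

theta = np.linspace(0.0, 1.0, 101)   # the agent screening value f(x), swept over [0, 1]

def polyarchy(n):
    F = np.zeros_like(theta)
    for _ in range(n):
        F = theta + (1.0 - theta) * F        # recursion (20)
    return F

def hierarchy(n):
    F = np.ones_like(theta)
    for _ in range(n):
        F = theta * F                        # recursion (21)
    return F

# an example graph with maximal evaluation count 3, assembled via recursion (19):
# on acceptance the entry agent accepts outright (weight a) or passes to a 2-member
# hierarchy; on rejection it terminates (weight r) or passes to a 2-member polyarchy
a, r = 0.3, 0.4
F_G = theta * (a + (1.0 - a) * hierarchy(2)) + (1.0 - theta) * (1.0 - r) * polyarchy(2)

assert np.all(hierarchy(3) <= F_G + 1e-12) and np.all(F_G <= polyarchy(3) + 1e-12)
print("the bounds of (17) hold for this example graph")
```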
Dynamic dispatching of projects can be included in the
model by letting the weights of the channels of communication depend on the project and the path that it has
seen so far, e.g., a_j(x). It thereby allows freelancing schemes
for reusing agents and subgraphs to keep the total number of agents down (e.g., the consensus rule of committees), it can supply a truncation mechanism for potential
infinite loops, and it allows organizations to have different modes of response triggered by certain project parameters. As mentioned, dynamic dispatching greatly extends
the set of organizations that can be considered, because it
adds more flexible structures with a realistic touch, usually
by reducing size or cost. Despite the broader scope of such
a model, the theorem of extremity still holds because it is
unaffected by such arguments on the weights.
Introduction of project transformations constitutes a
major extension of the basic model. If agents are characterized by a (stochastic) transformation T_{acc/rej}(y | x), representing the probability that project x is transformed into
project y during the evaluation process leading to acceptance/rejection, then a wide range of new models can
be constructed. It then becomes possible to model imperfect channels of communication as well as examine the
effects of agents actively manipulating the projects. Regardless of whether the agents transform the project before or
after evaluation/dispatching, the theorem of extremity still
holds. Transformations may also lift eventual degeneracies
between good and bad projects of equal appearance.
Graphs built from agents that are heterogeneous in
screening properties can even be treated if the graph screening performance is averaged over the distribution of different agents. This is a realistic assumption when screening
performance is averaged over time in systems with high
replacement rates. This would be the case if agents are regularly replaced by drawing new ones from the common
agent distribution or if each agent is actually a department
of individuals whose capabilities follow the given distribution. Long-time performances of organizations drawing
employees from the same workforce can thus be treated,
and even in these situations the theorem of extremity holds.
Further extensions and a more rigorous treatment of the
project selection formalism can be found in Christensen and
Knudsen (2002).
Theorem 2 is proved in the main text.
A.2. Proof of Optimality
Theorem 3. For any positive integer n, let S_n be the set of all self-dual graphs that can be constructed from agents indistinguishable with respect to project screening, such that the maximal evaluation count is no more than n. The slope of the screening polynomials at θ = 1/2 of the graphs in S_n cannot exceed that of

F_{G_n^*}(θ) = Σ_{i=(n+1)/2}^{n} B_i θ^i   with   B_i = (−1)^{i−(n+1)/2} [Γ(n + 1)/(Γ((n + 1)/2)^2 i)] C((n − 1)/2, i − (n + 1)/2),   (24)

and at least one graph G_n^* ∈ S_n has this reduced graph screening.
Proof of Theorem 3. This proof has two stages. First,
the polynomial view approaches from the analytical side to
find the optimal reduced screening function with respect
to discriminating ability. Then the topological view shows by
construction that a graph does exist having the found optimal screening.
The Polynomial View. Graph screening functions are polynomials in the agent screening whose maximal order is
the maximal evaluation count. Thus, with the number of
agent evaluations limited to n, there are at most n + 1
coefficients to tune. And for every (nonsymmetric) constraint put on the polynomial, the self-duality constraint
produces yet another. So the optimal screening polynomials
should be sought within the family of odd evaluation count
(assumed in the following unless otherwise stated), and at
most (n + 1)/2 conditions can be specified.
The optimality strived for here is a minimal deviation from the perfect (self-dual) screening, the step function at θ = 1/2. The screening is required to stay close to 0 below θ = 1/2 (and close to 1 above due to self-duality) and to
change sharply around the middle of the unit interval.
Clearly, the best way of achieving this is to require that
the screening function be as flat as possible near the ends
of the interval. In this way the screening stays closest to
the extreme values for as long as possible, leaving as little
as possible of the parameter space over which to perform
the jump between these extremes. Moreover, this requirement ensures monotonicity, which again ensures that the
screening value stays in the unit interval as it must for interpretation as a probability. From a polynomial perspective,
the optimality requirement is that the screening and its first
(n − 1)/2 derivatives are zero at θ = 0:

d^i F_{G_n^*}/dθ^i |_{θ=0} = 0   with i ∈ {0, 1, ..., (n − 1)/2}.   (25)
Here the hypothetical optimal self-dual graph with maximal
evaluation count n is denoted Gn∗ .
The solution to conditions (25) has a derivative proportional to θ to the power (n − 1)/2 and, by self-duality, to (1 − θ) to the same power. Working out the normalization constant, the entire solution (3) can be obtained from the integral of

F′_{G_n^*}(θ) = [Γ(n + 1)/Γ((n + 1)/2)^2] θ^{(n−1)/2} (1 − θ)^{(n−1)/2},   (26)

with vanishing constant, as F_{G_n^*}(0) = 0.
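As an illustrative check (not part of the original proof), the coefficients of (24) can be evaluated directly; they reproduce the familiar n = 3 polynomial 3θ² − 2θ³, and the resulting screening is self-dual:

```python
from math import comb, factorial

def optimal_coefficients(n):
    """Coefficients B_i of Eq. (24) for odd n."""
    assert n % 2 == 1
    c = factorial(n) // factorial((n - 1) // 2) ** 2    # Gamma(n+1) / Gamma((n+1)/2)^2
    return {i: (-1) ** (i - (n + 1) // 2) * c * comb((n - 1) // 2, i - (n + 1) // 2) / i
            for i in range((n + 1) // 2, n + 1)}

def F(n, t):
    return sum(B * t ** i for i, B in optimal_coefficients(n).items())

print(optimal_coefficients(3))                     # {2: 3.0, 3: -2.0}, i.e., 3 t^2 - 2 t^3
print([round(F(5, t) + F(5, 1 - t), 12) for t in (0.1, 0.3, 0.7)])   # self-duality: each sum equals 1
```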
The Topological View. The proof is carried out in the simplest of models, where each agent receives projects from a
single predecessor only and each agent has only one successor on the acceptance and rejection side, respectively. Within
this model the graphs are unique. More advanced models,
for example, with complex dynamic rules, can always be
simplified to fit within the simple model without affecting
the screening properties. This is done by duplicating, or
unfolding, agents appearing on multiple paths. However,
the addition of extra rules may break the uniqueness property of the graphs.
The existence (and uniqueness within the simple model)
of graphs having the optimal screening is now proved by
explicit construction. Because they are characterized by a
maximal evaluation count n and concern sets of paths of
certain lengths, a few general properties of the unfolded
graphs are derived first.
There is exactly one graph with fixed maximal evaluation
count n that has exactly k accepts on every path to ultimate
acceptance. Both existence and uniqueness can be shown
by induction. Obviously 1 ≤ k ≤ n and the single agent covers the basic case of n = 1 = k. Assume now that all the graphs exist up to n for all allowed k, and let (n, k) denote each of these graphs. Then (n + 1, n + 1) is constructed by letting a single agent accept to (n, n), which yields the hierarchy H_{n+1}, and similarly (n + 1, 1) is constructed by letting a single agent reject to (n, 1), which yields the polyarchy P_{n+1}. For all intermediate k the graph (n + 1, k) is constructed by letting a single agent accept to (n, k − 1) and reject to (n, k). Because the graphs at level n were unique, the acceptance and rejection branches are independent, and the subset of n + 1 graphs just constructed are all different; thus, the uniqueness must hold as well at level n + 1.
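The recursion just described is easy to mechanize. The following sketch (illustrative only) tabulates the reduced screenings F_(n,k) on a grid and confirms that the middle graphs (n, (n + 1)/2) coincide with the optimal screenings of Theorem 3 for n = 3 and n = 5.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 101)

def screenings(n_max):
    """Reduced screenings F_(n,k) of the unique graphs with maximal evaluation count n
    and exactly k accepts on every path to ultimate acceptance."""
    F = {(1, 1): t.copy()}
    for n in range(1, n_max):
        F[(n + 1, 1)] = t + (1.0 - t) * F[(n, 1)]          # the polyarchy P_{n+1}
        F[(n + 1, n + 1)] = t * F[(n, n)]                   # the hierarchy H_{n+1}
        for k in range(2, n + 1):
            F[(n + 1, k)] = t * F[(n, k - 1)] + (1.0 - t) * F[(n, k)]
    return F

F = screenings(5)
assert np.allclose(F[(3, 2)], 3 * t**2 - 2 * t**3)                # optimal graph for n = 3
assert np.allclose(F[(5, 3)], 10 * t**3 - 15 * t**4 + 6 * t**5)   # optimal graph for n = 5
print("the middle graphs (n, (n + 1)/2) match the optimal screenings of Theorem 3")
```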
Returning to the self-dual graphs of optimal screening, it
is seen from the 0th derivative polynomial requirement (25)
that there should be at least (n + 1)/2 accepts on all paths to the final portfolio (again with odd n). Furthermore, the self-dual variant of the same constraint dictates that there should be at most (n + 1)/2 accepts on all paths. Therefore, there must be exactly (n + 1)/2 accepts on every path leading to ultimate acceptance. This is just the special graph labeled (n, k = (n + 1)/2), the existence and uniqueness of which has just been proved above.
Although polynomials of even maximal evaluation
count seem to miss a constraint compared to their odd
counterparts, they are actually fully constrained as well
as the self-duality requirement forces the coefficient of θ^n to 0 for even n. Alternatively, an additional self-duality constraint not conflicting with optimality can be put at θ = 1/2, where the generic dual constraints collapse into one, F_{G_n}(1/2) = 1/2. Consequently, there is nothing to gain
with respect to screening capabilities by adding a single
evaluation to an optimal decision structure with odd
maximal evaluation count.
A.3. Proof of Shifting
Theorem 4. For any 0 < θ_0 < 1 and any 0 < ε < min(θ_0, 1 − θ_0), a stair graph G exists with no more than

n = ⌈log(ε/2)/log d⌉   (27)

agents satisfying the relation

F_G(θ_0 − ε) < 1/2 < F_G(θ_0 + ε),   (28)

where d ≡ max(θ_0, 1 − θ_0).
Proof of Theorem 4. A sequence of stair graphs of
increasing size is constructed in the following fashion.
Start out with two empty dummy graphs G0↓ and G0↑
representing default strategies of rejection and acceptance,
respectively. Let G1 = A be the graph consisting of a single agent. These graphs obviously have screening functions,
F_{G_0↓} = 0, F_{G_0↑} = 1, and F_{G_1}(θ) = θ.
For each step in the sequence, if F_{G_n}(θ_0) < 1/2, then
(i) Gn↓ = Gn and Gn↑ = Gn−1↑ ,
(ii) to obtain Gn+1 add a new agent at rejection from the
latest added agent;
else
(i) Gn↓ = Gn−1↓ and Gn↑ = Gn ,
(ii) to obtain Gn+1 add a new agent at acceptance from
the latest added agent.
This sequential construction has several properties ensuring convergence, such as

F_{G_n↓} ≤ F_{G_n} ≤ F_{G_n↑},   F_{G_n↓}(θ_0) < 1/2 ≤ F_{G_n↑}(θ_0),   and   F_{G_n↑}(θ_0) − F_{G_n↓}(θ_0) ≤ d^n,   (29)

with

d ≡ max(θ_0, 1 − θ_0) < 1.   (30)

Although Equations (3) and (4) in Moore and Shannon (1956a) do not hold for all social and economic systems and certainly not for expansion around any arbitrary agent, they do actually hold for the above sequence of graphs, which is why this specific construction is chosen. Therefore, theorem 1 of the original proof, stating that

F′_{G_n}(θ)/[F_{G_n}(θ)(1 − F_{G_n}(θ))] > 1/[θ(1 − θ)]   (31)

(except at the endpoints of the interval and if n = 1, where equality holds), can be applied to prove by contradiction that F_{G_n} intersects 1/2 within I whenever

n ≥ ⌈log(ε/2)/log d⌉   (32)

agents have been added.

B. Proof of the Human Moore-Shannon Theorem and Extension
The theorems of §4 are proved here.

B.1. Proof of Human Moore-Shannon Theorem
Theorem 5. Given any position 0 < θ_0 < 1 for the shift in graph screening, any threshold 0 < δ < 1/2, and any radius 0 < ε < min(θ_0, 1 − θ_0), then an architecture can be constructed from no more than

n_ck ≤ 25 · [log(ε/2)/log(min(θ_0, 1 − θ_0))] · (1/(2ε))^{log 5/log(11/8)} · [log(3δ)/log(3/4)]^{log 5/log 2}   (33)

agents whose reduced graph screening polynomial fulfills condition (9).

Proof of Theorem 5. The present proof uses a technique similar to that of Moore and Shannon (1956a, b). The opening game accomplishes a shifting of the screening via stair graphs, which removes any initial bias. The middle and end game steepen the screening using graph composition, in which the previously found graph substitutes the agents of a highly discriminating and self-dual graph.

The Opening Game. The opening game consists of finding an architecture with n members such that the graph screening function, F_{G_n}, intersects the value 1/2 within I ≡ (θ_0 − ε, θ_0 + ε). The main difficulty lies in finding a suitable graph. Moore and Shannon (1956a, b) used ladder graphs in their proof, but these graphs cannot be used when agents have powers to reject or accept a project on behalf of the economic system. A suitable graph, including such agents, is the stair graph that is built according to the proof of Theorem 4. Thus, the construction process is continued until

n = ⌈log(ε/2)/log d⌉   (34)

agents have been added.

The Middle Game. The middle game consists of steepening the graph by recursive expansion such that F_{G_n}(θ_0 − ε) < 1/4 and F_{G_n}(θ_0 + ε) > 3/4 are obtained. To accomplish this, begin from G_n↓ and G_n↑, obtained in the opening game. From G_n↓ and G_n↑ select the graph G_0 lying closest to 1/2 at θ_0 as a building block for further construction. Then select a self-dual graph to be used in recursive expansion of G_0. A self-dual graph in the reduced space, where the graph screening is a function of the agent screening, is defined as^10 F_G(θ) + F_G(1 − θ) = 1. Recursively substituting the graph for the agents in a self-dual graph with a steep slope around θ = 1/2 will further steepen the total graph screening, first to ensure that it falls outside (1/4, 3/4) on I, then to ensure that it falls outside (δ, 1 − δ). A new sequence of graphs G_s is obtained in this way. For each step, s, of this procedure of recursive expansion, the size of the graph increases. The self-dual graph G∗ used here is illustrated in Figure A.2; it is a single agent with an acceptance edge to a two-member polyarchy and a rejection edge to a two-member hierarchy.^11 The choice of G∗ was based on the premise that it is the best (in the sense of steepening the graph screening) self-dual graph that can be obtained with a small number of agents (less than nine agents). This optimality is guaranteed by Theorem 3 as G∗ ≡ G_3^*. The reduced graph screening function of the self-dual graph G∗ is

F_{G∗}(θ) = θ^2 (3 − 2θ).   (35)

^10 Although the subject of self-duality is widely used, curiously little attention has been given to the study of self-dual graphs (Servatius and Christopher 1992).
^11 Although the self-dual graph used in the present paper does not have as steep a slope around the center (thereby requiring more composition steps) as the 3 × 3 hammock networks of Moore and Shannon, it only has five agents (the agent count grows more slowly).

Figure A.2    Self-Dual Graph G∗ Used in the Middle and End Game to Steepen the Graph Screening Function Until the Threshold Is Met
[The figure shows a single agent whose acceptance edge leads to a two-member polyarchy and whose rejection edge leads to a two-member hierarchy, together with the entry node I, the final acceptance node F, and the termination node T.]
Note. Full lines are acceptance edges and dashed lines are rejection edges.

Initially F_{G_0}(θ_0 − ε) < 1/2 − ε/2 and similarly F_{G_0}(θ_0 + ε) > 1/2 + ε/2. Owing to the symmetry of the problem, it suffices to consider the lower end of the interval, θ = θ_0 − ε. The technique of Moore and Shannon can be applied directly, but an even better bound can be obtained if the deviation ρ_s from 1/2 is observed as a function of composition count s,

Δ_{s+1} ≡ (1/2 − ρ_s) − F_{G∗}(1/2 − ρ_s) = ρ_s(1/2 − 2ρ_s^2),   (36)

where ρ_0 = ε/2 and ρ_{s+1} = ρ_s + Δ_{s+1}. Hence the deviation from one-half grows like

ρ_{s+1} = ρ_s + ρ_s(1/2 − 2ρ_s^2) > (11/8) ρ_s   (37)

as long as ρ_s is below one-quarter, which is reached no later than

s ≥ ⌈−log(2ε)/log(11/8)⌉   (38)

compositions.

The End Game. Because F_{G_s}(θ_0 − ε) < 1/4 and F_{G∗}(θ) < 3θ^2 for θ < 1/4, the end game of the original proof can be applied directly, thereby showing that the desired threshold is reached in no more than

s ≥ ⌈log(log(3δ)/log(3/4))/log 2⌉   (39)

additional compositions with G∗.

The theorem is finally proven by counting the agents required by all the above steps, and a prefactor is added, because some agent or composition counts may not come out even.

B.2. Proof of the Multistep Theorem
Theorem 6. Given any threshold 0 < δ < 1, a series of m (odd) shift points in reduced space

0 < θ_1 < θ_2 < · · · < θ_m < 1   (40)

and a radius 0 < ε < min_i (θ_{i+1} − θ_i)/2, a graph can be constructed whose screening will jump from below δ to above 1 − δ (and back alternatingly) within ε of the θ_i's.

Proof of Theorem 6. A graph with the postulated screening function is built from the single-step functions of Theorem 5 using a sufficiently small δ′ and reusing ε. As Figure 4 shows, the graphs shifting at the required appearances are lined up into a hierarchy, starting with θ_1 closest to entry. The ones shifting from 0 to 1 must reject to the termination node, and the rest must reject to polyarchies (follows from Theorem 1) large enough to ensure almost certain acceptance as required by the threshold. In case θ_1 ≤ ε or θ_m ≥ 1 − ε, the first or last single-step graph, respectively, must be replaced by a suitable polyarchy or hierarchy according to Theorems 1 and 2.

Assuming that polyarchies (and hierarchies, if needed) complete the required shift within the same δ′ and ε as the generic single-step graphs, it is easy to show that the total graph will have a graph screening meeting the required threshold if δ′ ≤ δ/m is used. Finally, the assumption on the polyarchies is satisfied (again according to Theorem 2) by picking n ≥ log(δ′)/log(1 − θ_1 + ε).

References
Balakrishnan, N., C. R. Rao, eds. 2001. Handbook of Statistics,
Advances in Reliability, Vol. 20. Elsevier, New York.
Ben-Yashar, R., S. Nitzan. 1997. The optimal decision rule for fixed
size committees in dichotomous choice situations: The general
result. Internat. Econom. Rev. 38(1) 175–187.
Bolton, P., M. Dewatripont. 1994. The firm as a communication network. Quart. J. Econom. 109(4) 809–839.
Carlsson, S., U. Grenander. 1966. Some properties of statistical reliability functions. Ann. Math. Statist. 37(4) 826–836.
Christensen, M., T. Knudsen. 2002. The architecture of economic
organization: Toward a general framework. Mimeo, University
of Southern Denmark, Odense.
Colombo, M. G., M. Delmastro. 2008. The Economics of Organizational
Design. Palgrave MacMillan, New York.
Csaszar, F. A. 2009. An efficient frontier in organization design.
Working paper, INSEAD, Fontainebleau, France.
Ioannides, Y. M. 1987. On the architecture of complex organizations.
Econom. Lett. 25(3) 201–206.
Knudsen, T., D. A. Levinthal. 2007. Two faces of search: Alternative
generation and alternative evaluation. Organ. Sci. 18(1) 39–54.
Koh, W. T. H. 1992. Human fallibility and sequential decision making: Hierarchy versus polyarchy. J. Econom. Behav. Organ. 18(3)
317–345.
Koh, W. T. H. 1994. Making decisions in committees: A human
fallibility approach. J. Econom. Behav. Organ. 23(2) 195–214.
Li, H., S. Rosen, W. Suen. 2001. Conflicts and common interests in committees. Amer. Econom. Rev. 91(5) 1478–1497.
Lomnicki, Z. A. 1973. Some aspects of the statistical approach to
reliability. J. Roy. Statist. Soc. Ser. A (General) 136(3) 395–420.
Lynn, N., N. Singpurwalla, A. Smith. 1998. Bayesian assessment of
network reliability. SIAM Rev. 40(2) 202–227.
Marschak, J., R. Radner. 1972. Economic Theory of Teams. Yale University Press, New Haven, CT.
Moore, E. F., C. E. Shannon. 1956a. Reliable circuits using less
reliable relays, part I. J. Franklin Inst. 262(September) 191–208.
Moore, E. F., C. E. Shannon. 1956b. Reliable circuits using less reliable relays, part II. J. Franklin Inst. 262(October) 281–297.
Nitzan, S., J. Paroush. 1985. Collective Decision Making: An Economic
Outlook. Cambridge University Press, Cambridge, UK.
Radner, R. 1993. The organization of decentralized information processing. Econometrica 61(5) 1109–1146.
Sah, R. 1991. Fallibility in human organizations and political systems. J. Econom. Perspect. 5(2) 67–88.
Sah, R., J. Stiglitz. 1985. Human fallibility and economic organization. Amer. Econom. Rev. 75(2) 292–297.
Sah, R., J. Stiglitz. 1986. The architecture of economic systems: Hierarchies and polyarchies. Amer. Econom. Rev. 76(4) 716–727.
Sah, R., J. Stiglitz. 1988. Committees, hierarchies and polyarchies.
Econom. J. 98(391) 451–470.
Servatius, B., P. R. Christopher. 1992. Construction of self-dual
graphs. Amer. Math. Monthly 99(2) 153–158.
Sloane, N. J. A., A. D. Wyner, eds. 1992. Claude Elwood Shannon:
Collected Papers. IEEE Press, Piscataway, NJ.
Van Zandt, T. 1999. Real-time decentralized information processing
as a model of organizations with boundedly rational agents.
Rev. Econom. Stud. 66(228) 633–658.
Visser, B. 2000. Organizational communication structure and performance. J. Econom. Behav. Organ. 42(2) 231–252.