Deception in Networks: A Laboratory Study
Rong Rong and Daniel Houser
April 2014
Discussion Paper
Interdisciplinary Center for Economic Science
4400 University Drive, MSN 1B2, Fairfax, VA 22030
Tel: +1-703-993-4719 Fax: +1-703-993-4851
ICES Website: http://ices.gmu.edu
ICES RePEc Archive Online at: http://edirc.repec.org/data/icgmuus.html
Rong Rong* and Daniel Houser**
Abstract: Communication between departments within a firm may include deception. Theory
suggests that telling lies in these environments may be strategically optimal if there exists a small
difference in monetary incentives (Crawford and Sobel, 1982; Galeotti et al, 2012). We design a
laboratory experiment to investigate whether agents with different monetary incentives in a
network environment behave according to theoretical predictions. We find that players’ choices
are consistent with the theory. That is, most communication within an incentive group is truthful
and deception often occurs between subjects from different groups. These results have important
implications for intra-organizational conflict management, demonstrating that in order to
minimize deceptive communication between departments the firm may need to reduce incentive
differences between these groups.
JEL classification: D85, D02, C92
Keywords: social networks, deception, strategic information transmission, experiments
*Department of Economics, Weber State University, Ogden, UT 84408; Rong: [email protected]
**Interdisciplinary Center for Economic Science (ICES) and Department of Economics, George Mason University, Fairfax, VA 22030; Houser: [email protected]
We thank the NSF Dissertation Improvement Award for financial support of this project. For helpful comments we thank our colleagues at ICES, George Mason University and the Goddard School of Business and Economics at Weber State University, seminar participants at the ESA North American meeting (2012), the Research Brown Bag Meeting at Utah Valley University and the Networks and Externalities Meeting at Louisiana State University. The authors are of course responsible for any errors in this paper.
I. Introduction
Groups with different financial incentives often deceive each other for strategic reasons. Within
an organization, for example, people from different departments often manipulate the
information they send to each other so that executive decisions will be in their favor. A familiar
example occurs when academic departments make hiring decisions. Faculty members in a
specific field may withhold important information about certain candidates from faculty of other
fields, hoping to raise the priority of hiring a colleague in their own field. Not surprisingly, this
phenomenon has also been observed in many non-academic organizations, including important
business sectors such as high-tech research and development, mass media and health care (Cloke
and Goldsmith, 2000; Cowan, 2003; Tobak, 2008; Gupta et al, 1985; Eckmen and Lindlof, 2003;
Pirnejad et al, 2008). The negative impacts of deception due to conflicts of interest have been
documented widely in studies of industrial and organizational psychology, as well as
management (Kolb et al, 1992; Rahim, 2000; De Dreu and Gelfand, 2007; Conrad and Poole, 2011;
Miller, 2011).
Economists have studied deception using sender-receiver games. Seminal work by Crawford and
Sobel (1982) describes the one-sender, one-receiver case (also denoted a strategic information
transmission game, or cheap talk game)2. In their model, the informed sender sends a “cheap talk”
message to an uninformed receiver. The receiver, as the game’s only decision maker, then
chooses the option that determines the payoffs of both players. Their model provides the
conditions under which uninformative messages (“cheap talk”) are the equilibrium outcome of
the game.
Many other studies have used variations of this model; however, prior to Galeotti et al (2013),
players either made decisions as a sender or as a receiver, but never both3. Galeotti et al (2013)
investigate N-player communication in a network setting where each player can send cheap-talk
messages to others and also receive messages from others. Despite the complexity, their model
generates sharp predictions when the players are divided into two groups. In this “two group
model,” players’ payoffs are the same within a group but differ between groups. The model
predicts that truth-telling among those with aligned incentives will be greater than where
incentives are misaligned. This is very much the case for academic hiring. If the “players”
are divided into micro and macro faculty, micro faculty enjoy a higher payoff (through research
synergy) when the new hire is another microeconomist, and vice versa. According to the model,
one would expect to see higher levels of truthful communication within the micro or macro faculty
and less between the two.
The goal of this study is to test these predictions of the “two group model” and, in particular, to
investigate to what extent people will lie to achieve higher monetary gains in a network sender-receiver experiment.
2 Experimental studies supporting this prediction include Dickhaut et al (1995), Blume et al (1998), Blume et al (2001), Cai and Wang (2006) and Wang et al (2010).
3 An exception is Hagenbach and Koessler (2010). Their model has a setup very similar to that of Galeotti et al (2013) and yields similar predictions. We delay the discussion of Hagenbach and Koessler (2010) to the literature review section.
Our experiment tests the two-group cheap talk model in the lab. We believe a laboratory analysis
is ideal for this study. The reason is that in natural environments it can be difficult to identify the
causal impact of monetary incentives on truth-telling. In particular, when using non-experimental
observations, the empirical correlation between the two may not convey a causal story
convincingly: there may be other factors impacting both the incentives one faces as well as one’s
communication strategy. As the purpose of this study is to discover how monetary incentives
alone impact deception, we randomly assign monetary incentives to each subject.
Our main findings are (1) consistent with theory, truth-telling nearly always occurs among those
with identical monetary incentives; (2) systematic over-communication occurs between groups
with different incentives; and (3) players overly trust messages they receive.
To our knowledge, we are the first to provide empirical evidence on behavior in sender-receiver
games with multiple senders and multiple receivers4. Despite the many insights gleaned from
one-sender-one-receiver cases, extending strategic information transmission to a group
context is important. It provides a more accurate description of the types of communication that
occur in multi-group populations with divergent preferences.
The remainder of this paper is organized as follows: Section 2 briefly reviews some of the related
theoretical and experimental literature. Section 3 lays out the theoretical background for our
study. Section 4 presents the experimental design and procedures. Section 5 describes the
hypotheses and reports experimental results. Section 6 concludes.
II. Literature on Deception
There are a number of economic theories and experimental tests of sender-receiver
environments. We begin by reviewing these theories. Then we discuss the experimental
evidence, particularly the recent literature on deception.
II.1. Theory of Cheap Talk Games
Information is often delivered in a strategic way. When information holders do not
have the same incentives as an uninformed decision maker, they tend to hold back some, but not
all, of the information in order to gain an advantage in the transaction. This important economic
intuition was first described in the seminal model by Crawford and Sobel (1982). In their paper,
a sender has full knowledge of the state of the world and can send messages to influence a
receiver’s belief so that he or she may make a choice that benefits the sender. The receiver, who
is fully aware of the possibility of manipulation in senders’ messages, chooses an action that
maximizes his or her own earnings.
4 A few studies look at environments with one sender and two receivers (Battaglini and Makarov, 2011) or with two senders and one receiver (Minozzi and Woon, 2011; Lai, Lim, and Wang, 2011). Those studies differ from ours, as players in those experiments make decisions as either a sender or a receiver, but never both. We focus on a game that better describes the environment of intra-organizational communication, which is characterized by having each player act as both sender and receiver.
The model implies that, in equilibrium, larger payoff differences between players lead
senders to hold back more truthful information. In the limit, senders are predicted to send
random messages (engage in cheap talk).
The seminal work by Crawford and Sobel (1982) has been extended in many directions.
For example, Milgrom and Roberts (1986), Gilligan and Krehbiel (1989), Austen-Smith (1993),
Krishna and Morgan (2001a, b) investigate the case where there is more than one sender for each
receiver. Battaglini (2002) and Ambrus and Takahashi (2008) further extend the analysis to
environments in which senders give advice on multidimensional issues. Morgan and Stocken
(2008) study the case of polling, in which each sender has different information and a different
ideology. Additionally, Farrell and Gibbons (1989) discuss the case where there are two
receivers and two states of the world. Despite the important insights these models convey, they
contrast with the situations we are interested in on one important dimension: in all of these
models, each agent plays either as a sender or as a receiver, never both.
We are aware of only two models of strategic information transmission in networks where each
person can act as both sender and receiver (Hagenbach and Koessler, 2010; Galeotti et al, 2013).
Many real-world environments would seem to require this framework. For example, workers
from different departments at the same company often both talk and listen to each other, and
members of different political parties often exchange opinions in both directions.
Hagenbach and Koessler (2010) investigated a case where each individual receives some
information and the aggregation of all private signals reveals the truth. In their environment, a
player earns more when (1) choosing a number that is closer to the true state of the world plus an
individual bias5; and (2) choosing a number that is closer to others’ choices. The first part
incentivizes individuals to make the best guess of the truth, while the second requires
coordination of choices among players. As in other cheap talk games, players send messages
free of cost before choosing numbers. Additionally, all messages are non-verifiable.
Galeotti et al (2013) also model group communication; however, earnings in their
model are defined differently. In particular, their model assumes that a player earns the
highest payoff if everyone in the game, including him/herself, chooses a number that matches
the truth plus his/her own bias. Given that different players have different biases, each may try to
affect others’ beliefs using cheap talk messages. The predictions of this model and that of
Hagenbach and Koessler (2010) are quite similar. Since we build primarily on Galeotti et al
(2013), we detail their model in Section III.
5 Just as in Galeotti et al (2013), described in detail below, the “bias” in Hagenbach and Koessler (2010) is a payoff-relevant parameter that can differ between the two groups, affect the number each group prefers the other group to choose, and thus impact the decision of whether to send a truthful message.
II.2. Deception Experiments
The early experimental literature on cheap talk games studies the fit of empirical data to the
predictions of Crawford and Sobel (1982). In those games, senders can choose vague messages
by sending a range of possible states (e.g., sending “1-3” when the signal is 2). Dickhaut, McCabe
and Mukherji (1995) confirm the comparative statics of the model by showing that senders’
messages become vaguer and receivers’ actions deviate more from the true state as the
preferences of sender and receiver diverge. Cai and Wang (2006) replicate this
finding and further show that the average payoffs of senders and receivers are very close to the
predicted level for the most informative equilibrium. Their data also suggest that senders over-communicate and receivers over-trust the messages. Wang, Spezio and Camerer (2010) study the
source of over-communication using eye-tracking data.
Some recent experimental research uses a simplified sender-receiver game to study
deception behavior in the lab. Gneezy (2005) analyzes an experiment with only two
states of the world and finds that people are sensitive to both their own gain and others’ losses
when deciding whether to lie. Lundquist et al (2009) modify the game into a labor contract
context, where senders have information about their own ability level and face an incentive to lie
so that the receiver will agree to hire them. With this design, they can observe not only whether a
player has lied, but also the size of the lie. They find that lie aversion increases with the size of
the lie and the strength of the promise. Their data also show that free-form messages lead to
fewer lies and more efficient outcomes. Sheremeta and Shields (2013) design a sender-receiver
game in which subjects play the roles of sender and receiver sequentially. With this design, the
authors can identify whether a subject who lied as a sender will believe others’ messages as a
receiver. They find that liars believe others, and that lying behavior can be rationalized by
accounting for elicited beliefs and other-regarding preferences. In this literature6, messages are
typically considered deceptive if a sender’s message contains representations that differ from the
true state of the world. We adopt this method to identify deceptive messages7.
III. Theoretical Background
Our experiment design is inspired by the two-group communication model in Galeotti et
al (2013). We review the details of the model in this section.
The set of players is denoted by N = {1, 2, …, n}, partitioned into two groups, N1 and N2,
with sizes n1 and n2, respectively, where n1 + n2 = n. Without loss of generality, assume n1 > n2 ≥ 1.
Player i’s individual bias is bi. In the two-group communication model, each member of group 1
has a bias normalized to 0; members of group 2 have a bias bi = b > 0. The state of the world θ is
uniformly distributed on [0, 1]. Every player i receives a private signal si ∈ {0, 1}, where si = 1
with probability θ.
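As an illustration (ours, not part of the original model's exposition), the signal structure can be sketched in a few lines of Python; the function name is our own:

```python
import random

def draw_state_and_signals(n, rng=random):
    """Draw the state theta ~ U[0, 1], then n conditionally i.i.d. signals,
    each equal to 1 with probability theta."""
    theta = rng.random()
    signals = [1 if rng.random() < theta else 0 for _ in range(n)]
    return theta, signals

theta, s = draw_state_and_signals(5)
```

Conditional on theta, signals are independent; unconditionally they are correlated through the common state.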
6 Sutter (2009) and Xiao (2012) consider deception to be “sophisticated” if a deceptive sender chooses the true message with the expectation that the receiver will not follow his/her message.
7 Identifying a player’s exact strategy (truth-telling or cheap talk) requires repeated observations. The result may be ambiguous if individuals switch between strategies during the game. We do not try to identify players’ strategies but simply study the frequency of deceptive messages.
Communication among players is exogenously restricted by a communication network
g ∈ {0,1}n×n, where player i can send a message to j if gij = 1, with gii = 0 for all i ∈ N. The
communication neighborhood of i is the set of players to whom i can send his/her signal; it is
denoted by Ni(g) = {j ∈ N : gij = 1}. In this study we focus on the case where g is a complete
network, meaning every player can send a message to every other player.
The communication mode describes to what extent the technology of communication allows
for targeting messages. In a private message setting, player i chooses what message to send to
each other player j. A communication strategy profile for each signal si ∈ {0,1} is defined as
m = {m1, m2, …, mn}, in which mi(si) = {mij}j∈N, j≠i.
After communication occurs, each player chooses an action. Agent i’s action strategy,
based on his/her own signal and the messages received from others, is yi : {0,1}n-1 × {0,1} → R;
y = {y1, y2, …, yn} denotes an action strategy profile. Given the state of the world θ and a profile
of actions y = {y1, y2, …, yn}, the payoff of i is:

ui(y, θ) = − Σj∈N (yj − θ − bi)²                                                                    (1)

That is, agent i’s payoff depends on how close his/her own action yi and the actions taken
by other players are to his/her ideal action θ + bi.
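To fix ideas, equation (1) can be evaluated directly; this sketch is ours and the values are illustrative only:

```python
def payoff(actions, theta, b_i):
    """Equation (1): minus the sum of squared distances of every player's
    action from player i's ideal point theta + b_i."""
    return -sum((y_j - theta - b_i) ** 2 for y_j in actions)

# A player with bias 0 earns the maximum, 0, when everyone plays theta
# exactly; any deviation by any player lowers that player's payoff.
assert payoff([0.4, 0.4, 0.4], theta=0.4, b_i=0.0) == 0.0
assert payoff([0.4, 0.4, 0.9], theta=0.4, b_i=0.0) < 0.0
```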
A communication network g together with a strategy profile (m, y) induces a subgraph of
g in which each link involves truthful communication. Galeotti et al (2013) refer to this network
as the equilibrium truth-telling network, denoted c(m, y|g): a directed graph in which
cij(m, y|g) = 1 if and only if j belongs to i’s communication neighborhood and mij(s) = s for every
s ∈ {0,1}. Given c(m, y|g) and that the agents are divided into two groups, the in-degree ki of an
arbitrary player in group i is defined as the number of agents who send a truthful message to
him/her. Among these truthful messages, the number sent by members of the same group is
denoted kii, while the number sent by members of the opposite group is kij.
Their analysis focuses on pure-strategy Bayesian Nash equilibria. They provide a full
characterization of the utility-maximizing equilibrium networks8, with a focus on the natural
subclass of those networks in which intra-group communication is complete. In our
experimental setting, the bias we choose yields the same prediction whether we use the
full characterization or the subclass. The following equations describe the in-degrees of an
arbitrary player in group i in the utility-maximizing equilibrium truth-telling network:

kii = ni − 1;                                                                    (2)

kij = max{min{1/(2b) − (ni − 2), nj}, 0},  i, j = 1, 2, i ≠ j;                    (3)
8 It is a tradition in strategic information transmission models to characterize the utility-maximizing equilibrium, as babbling is always an equilibrium solution but not meaningful in most contexts.
That is, if b < 1/(2(n − 2)), both intra-group and inter-group communication is complete; and if
b > 1/(2(ni − 2)), there is complete intra-group communication and no inter-group communication.
When b takes an intermediate value, inter-group communication also takes an intermediate value9.
Given this type of communication, in equilibrium all players trust all intra-group messages.
They treat inter-group messages as true signals whenever b < 1/(2(n − 2)), and as carrying no
information whenever b > 1/(2(ni − 2)). That completes the equilibrium prediction of the model.
IV. Design and Procedure
IV.1 Experiment Design
The design of our experiment is based on the theory of Galeotti et al (2013) detailed
above. Each experimental session includes 15 subjects, randomly assigned into three groups of
5 to play the game. Each group plays the game repeatedly for a random number of rounds10
within a stage game. Players know that the other four players are fixed during the repeated
game, and each holds a unique identifier: J, K, L, M or N. Players J, K and L belong to Group 1;
players M and N belong to Group 2. Group 1 and Group 2 players differ in their payoff functions
by only one parameter: the bias. The bias level stays fixed for all rounds in a stage. Moreover,
all of the above information is common knowledge.
Once a stage game ends, the 15 subjects are rematched into three new groups, assigned new
IDs, given a different pair of bias levels, and start a new stage. In total, each subject experiences
three stage games by the end of the experiment.
Each round of the experiment is a guessing game. Before a round starts, the computer
generates a random integer r between 0 and 5 (including 0 and 5). The number is unknown to all
players. At the beginning of each round, each player receives a private signal that is either 0 or 1.
Players do not see others’ signals. However, they are told that the sum of the five signals
received by all five players equals the random integer11. Before players guess the number, they
are given the opportunity to exchange “cheap talk” messages with each other. The messages
are constrained to be either 0 or 1 to match the space of the signal. Moreover, messages are
group-specific, so each player decides on what message to send to Group 1 and Group 2 players
rather than sending a unique message to each player12. After all players submit their messages,
9 Specifics related to intermediate biases can be found in Galeotti et al (2011).
10 There are always at least 4 rounds in a stage. After round 4, the game has a random stopping probability of 0.04 at any given round. To keep control over the length of the experiment, we randomly generated predetermined round lengths of 19, 28 and 32 for experimental stages I, II and III, respectively. The practice stage lasts 3 rounds.
11 This part of the design follows Hagenbach and Koessler (2010). We deviate from Galeotti et al (2013) for two reasons: (1) the former involves less uncertainty and thus is an easier task for our subjects; and (2) the main predictions that we test in this paper remain the same between the two models.
12 The message is group-specific in order to simplify the decision problem for the subjects.
they observe the messages sent to them and are asked to guess the value of r randomly chosen
at the beginning of the round. They also choose a number x, based on their guess of r, that
determines everyone’s payoff for that round. The payoff functions for Group 1 and Group 2
players are as follows13:
PayoffJ,K,L = 20 − Σi=1..5 (xi − r − b1)²                                            (4)

PayoffM,N = 20 − Σi=1..5 (xi − r − b2)²                                             (5)
Players J, K and L share the same payoff function, as shown in equation 4. The payoff is
maximized when all five players choose the number x that equals the true value of the random
number r plus a group-specific bias b1. Players M and N share the same payoff function, as
shown in equation 5. The difference between their payoff functions and those for Group 1
players is the group-specific bias b2. As indicated in the theory, this payoff structure incentivizes
every player to: (1) choose a number x that is as close as possible to their best guess of r plus
their own group’s bias b; and (2) make other players, both in the same group and in the other
group, choose the same x. The presence of cheap talk messaging makes it possible for players in
one group to manipulate the choice of x made by players in the other group. In our experiment
setting, b1 and b2 can only take four different values, that is (0, 0), (0, 1), (1, 0) or (1, 1). Note
that (1, 1) appears always and only in the practice stage, and thus is not included in our data
analysis. The other three combinations appear in random order for the three experimental stages.
The structure of the game and all payoff-related information, including the value of b1 and b2, are
common knowledge. Players also know that the value of b1 and b2 remain fixed within a stage
game, but change between stages.
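Equations (4) and (5) can be illustrated with a short computation; this sketch is ours and the example values are not from the data:

```python
def round_payoff(xs, r, b):
    """Equations (4)-(5): round payoff for a player with group bias b,
    given all five chosen numbers xs and the true integer r."""
    return 20 - sum((x - r - b) ** 2 for x in xs)

# A round pays the full 20 when all five players choose r plus the bias:
assert round_payoff([3, 3, 3, 3, 3], r=3, b=0) == 20
assert round_payoff([4, 4, 4, 4, 4], r=3, b=1) == 20
# A group-2 player (b2 = 1) loses 1 per player who targets r instead of r + 1:
assert round_payoff([3, 3, 3, 3, 3], r=3, b=1) == 15
```

The last two lines show the conflict of interest: when b1 = 0 and b2 = 1, no single profile of choices maximizes both groups' payoffs at once.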
The following three screens implement this design. First, subjects send messages using
the “messaging screen” (see Appendix A, Fig. 1). Then, subjects make guesses (that do not affect
their payoffs) about the state r.14 Next, subjects choose the payoff-relevant value of x using the
“guessing screen” (see Appendix A, Fig. 2). While they are making these two choices, the same
screen also shows them the messages they received from others graphically. Finally, the “result
screen” (see Appendix A, Fig 3) reveals the true value of the random integer and displays all the
actions taken by the other four players and their current payoff.
Payoffs accumulate within, but not between, each of the three stage games. Players are
informed about their accumulated payoff at the end of each stage. They are also reminded that
they will be re-matched with a new set of players, and that their stage payoff will not be carried
over to the new stage. Each subject’s earnings for the experiment are determined by one
randomly-determined stage game according to a die roll at the end of the experiment.
13 The payoff differs from the theory section, as we give 20 experimental dollars as an endowment per period. This ensures subjects do not earn negative amounts during the experiment. This change does not alter the theoretical predictions.
14 The first guess does not affect payoffs but is used as a way to verify that participants take account of the bias when making their payoff-relevant choice. In particular, the payoff-relevant choice should equal their guess plus the bias. Nearly all of our subjects displayed the expected relationship between their guess and their payoff-relevant choice.
Note that our experiment does not strictly follow Galeotti et al (2013). One difference is
that Galeotti et al (2013) assume that the true state is uniformly distributed on [0, 1], and that
players’ signals are drawn i.i.d. from {0, 1} with the probability that a signal is “1” equal to the
value of the state. In our experiment the true state is drawn from a uniform distribution on the
discrete set {0, 1, 2, 3, 4, 5}, and each player receives a signal that, ex ante, is equal to “1” with
probability equal to one-fifth the value of the state. Consequently, in an ex-ante sense, all players
understand they are equally likely to receive a signal of “1”, and that the likelihood varies with
the true state. It is easy to verify that the theorems in Galeotti et al (2013) require only that this
be true. The correlation among players’ signals in our design does not affect the equilibrium
predictions. The intuition is that play is simultaneous and the state is unknown, so there is no
way for players to exploit information about the correlation in their actions, either in theory or
in practice.
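This property can be checked by simulation; the dealing mechanism below is our assumption, consistent with the statement that the sum of the five signals equals r:

```python
import random

def draw_round(rng):
    """Draw r uniformly from {0,...,5} and deal exactly r ones among
    the five players' signals."""
    r = rng.randrange(6)
    signals = [1] * r + [0] * (5 - r)
    rng.shuffle(signals)
    return r, signals

rng = random.Random(0)
first_signals = [draw_round(rng)[1][0] for _ in range(200_000)]
# Ex ante, each player's signal is "1" with probability E[r]/5 = 0.5:
assert abs(sum(first_signals) / len(first_signals) - 0.5) < 0.01
```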
A second difference is that we restricted message sending so that each player was
required to send the same message to all members of a given group. The impact of this
restriction is to eliminate some types of off-equilibrium play. While in some games restricting
off-equilibrium play can change equilibrium outcomes (e.g., punishment games), that is not the
case here. It is easy to verify that restricting off-equilibrium decisions does not, in this case,
change the game’s unique equilibrium.
IV.2 Procedures
The experiment sessions were conducted between May 2012 and June 2012 in the ICES
laboratory at George Mason University and in April, 2014 in the experimental lab at Weber State
University15. Subjects were recruited via email from registered students at George Mason
University and Weber State University. Each subject participated in only one session and none
had previously participated in a similar experiment. The results from the two institutions do not
differ in any meaningful way, so we pool all the data in our analysis.
In total, 90 subjects participated in the computerized experiment programmed with z-Tree
(Fischbacher, 2007). Each experimental session lasted between 120 and 150 minutes. Subjects’
total earnings were determined by the Experimental Dollars (E$) earned at the end of the
experiment, which were then converted at a rate of E$20 per US dollar. Average earnings were
$25.27, ranging from a maximum of $27.5 to a minimum of $18 across all sessions (excluding
the $5 show up fee).
In all treatments, before a session began, subjects were seated in separate cubicles to
ensure anonymity. They were informed of the rules of conduct and provided with detailed
instructions. The instructions were read aloud. In order to ensure there was no confusion, after
subjects finished reading the instructions, they were asked to complete a quiz. An experimenter
checked their answers and corrected any mistakes one by one. Then the experimenter worked
through the quiz questions on a white board in front of all subjects. The experiment began after
15 In response to a revision request, we ran three additional sessions at the experimental lab at Weber State University, where one of the authors was affiliated at the time.
all subjects confirmed they had no further questions. All sessions at both institutions were
conducted by the same experimenter to ensure identical procedures.
In total, 90 subjects participated in the experiment across six sessions. Within each session, we
obtained 97 message-sending decisions from each subject (excluding the practice stage). Since
we rematch groups between stages, our analysis conservatively treats the group-level average
within each stage as an independent observation. Hence, our data analysis is based on 54
observations.
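The observation count follows from the session structure; a quick arithmetic check (ours):

```python
sessions, subjects_per_session = 6, 15
groups_per_session, stages = 3, 3

assert sessions * subjects_per_session == 90        # total subjects
# One independent observation per group per stage:
assert sessions * groups_per_session * stages == 54
```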
V. Hypothesis and Results
V.1 Hypothesis
Based on the theory of Galeotti et al (2013) discussed in Section III, we make the
following hypotheses:
Hypothesis 1: Players always tell the truth to those in the same monetary group.
Hypothesis 2: Players tell the truth to others in a different monetary group if the bias is
(0, 0), and always send random messages (babble) if the bias is (0, 1) or (1, 0).
Hypothesis 3: Players fully trust the messages sent by their own group members, and also fully
trust the messages sent by members of the other group if the bias is (0, 0). They disregard
between-group messages if the bias is (0, 1) or (1, 0).
V.2 Results
We lay out the results in the order of the hypotheses above. First, we examine
message-sending choices (to test Hypotheses 1 and 2). Then, we investigate guessing behavior
(to test Hypothesis 3).
Result 1: Most within-group messages are truthful.
Our data support Hypothesis 1. As shown in Figure 1, we find that 95.63% of the
within-group messages are truthful. Although the overall level of truth-telling is high, it is
significantly lower than the predicted level of 100% (Wilcoxon signed-rank test, p<0.001).
Consistent with the theory, the bias of the opposing group does not affect within-group messages
in any statistically significant way (Mann-Whitney ranksum test, pairwise comparisons, all
p-values greater than 0.25). Moreover, group size also does not affect the truthfulness of
within-group messages (95.7% for Group 1 and 94.8% for Group 2, Mann-Whitney ranksum test,
p=0.2130).
Result 2: Players tell less truth to members of the other group than to their own group
members.
Our data support Hypothesis 2. 83.4% of messages sent between the two groups are truthful.
This level of truth-telling is much lower than for within-group messages (Wilcoxon
signed-rank test, p<0.001). The effect is larger when the bias is (0, 1) or (1, 0), but it
persists even when the bias is (0, 0).
When the bias is (0,0), the two groups share the same payoff function, so
truth-telling is predicted to be 100%. However, only 90.68% of these between-group messages are
truthful, a significantly lower percentage than predicted (Wilcoxon signed-rank test, p<0.001). It
is also lower than the truthfulness of within-group messages (compare to 95.63%, Wilcoxon
signed-rank test, p<0.03), suggesting that simply dividing subjects into two groups reduces
their truthfulness regardless of monetary incentives.16
According to the theory, under bias (0,1) or (1,0) there exists only a babbling equilibrium
with 50% truthful messages. We observe 79.77% truthful messages between groups,17 which is
significantly higher than the predicted level (Wilcoxon signed-rank test, p<0.001). Under either
bias, truth-telling is significantly lower than when the bias is (0,0) (pairwise
comparisons, Mann-Whitney ranksum test, p=0.0001 and 0.0075, respectively).
Result 3: Players overly trust messages they receive.
To assess whether a player believes the messages s/he receives, we compute the
difference between the player's guess and the sum of "1" messages received. When the difference is
zero, we define the "trust" measure to equal one, and we set it to zero otherwise. Overall, 80.59%
of submitted guesses exactly equaled the sum of "1" messages received. When the bias is (0,0),
16 Eckel and Grossman (2005), however, suggest that minimal group identity does not affect subjects' behavior in
their experimental setting. Our data suggest that its effectiveness may be sensitive to the environment.
17 We combine the two cases as the unequal-bias case, as there is no significant difference between them
(p=0.652).
85.58% of guesses are consistent with the messages received, which is significantly lower than
the predicted 100% level of trust (t-test, p<0.001).
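The trust indicator can be stated compactly. A minimal sketch in Python (the function name and data layout are illustrative, not from the paper):

```python
def trust(guess, messages):
    """Return 1 if the guess exactly equals the number of "1" messages
    received, and 0 otherwise."""
    return 1 if guess == sum(messages) else 0

trust(2, [1, 1, 0])  # -> 1: the guess matches the two "1" messages received
trust(1, [1, 1, 0])  # -> 0: the guess deviates from the messages
```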
In equilibrium, when the bias is either (0,1) or (1,0), random choice will lead Group 1
players to appear completely trusting of between-group messages 37.5% of the time. This can be
seen as follows. In equilibrium, each Group 1 player faces four possible message
combinations sent at random by the two Group 2 players: (0,0), (0,1), (1,0) and (1,1), each
equally likely. Group 1 players form beliefs about the true signals held by the Group 2 players
independently of these messages: again (0,0), (0,1), (1,0) and (1,1), each equally likely.
Out of the 16 equally likely message-belief pairs, the sums coincide by chance in six cases
(6/16 = 37.5%). Similarly, Group 2 players may appear to be trusting 31.25% of the time even when
choosing at random. Overall, then, random choice will lead 35% of choices to appear fully trusting.
Our data show that 76.63% of guesses are fully trusting, significantly higher than 35% (t-test,
p<0.001). Moreover, trust levels under bias (0,0) differ significantly from those under bias
(0,1) or (1,0) (t-test, p<0.001).
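These equilibrium benchmarks can be verified by direct enumeration. A minimal sketch, assuming (as the computation implies) that each Group 1 player receives messages from the two Group 2 players, each Group 2 player receives messages from the three Group 1 players, and the overall 35% figure weights the two rates by group sizes of three and two:

```python
from itertools import product

def coincide_prob(n_senders):
    """Probability that the sum of n random binary messages equals the sum
    of n independent random binary beliefs (all outcomes equally likely)."""
    outcomes = list(product([0, 1], repeat=n_senders))
    hits = sum(sum(m) == sum(b) for m in outcomes for b in outcomes)
    return hits / len(outcomes) ** 2

p1 = coincide_prob(2)  # Group 1 player: two between-group messages
p2 = coincide_prob(3)  # Group 2 player: three between-group messages
overall = (3 * p1 + 2 * p2) / 5  # weighted by assumed group sizes 3 and 2
print(p1, p2, overall)  # -> 0.375 0.3125 0.35
```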
VI. Conclusion
In a great number of socio-economic environments, information is transmitted between
group members strategically. This may occur in part due to monetary incentives, which can
affect strategic considerations, and ultimately the truthfulness of people’s messages. Based on a
model suggested by Galeotti et al (2013), we conducted a laboratory study of deceptive behavior
in an environment of strategic information transmission. In our design, two groups of subjects
with different monetary incentives could attempt to influence each other's decisions by sending
potentially dishonest messages. We found that the message-sending behavior of our subjects
largely conformed to theory: within-group messages were mostly truthful, and between-group
messages were relatively less truthful. Behavior departed from predictions, however, in that we
found more truth-telling between groups with misaligned incentives than theory predicts.
The decision to behave honestly when deception is economically optimal is an often-reported
finding in the behavioral economics literature. One explanation is that people have social
image concerns that mitigate their dishonesty (Hao and Houser, 2013). Whether and how social
image might play a role in explaining data in these environments is a topic worthy of additional
exploration.
The results of this study suggest that misaligned incentives may explain the intra-organizational
conflict we often see in business practice. To reduce conflict and deceptive
communication, it may be beneficial to develop methods that reduce incentive differences between
departments. As monetary incentives may be costly or sometimes impossible to manipulate,
studying social incentives and behavioral anomalies, as well as the role of emotion and social
preferences in these environments, might be a profitable direction for future research.
References
Ambrus, A. and S. Takahashi (2008): Multi-sender Cheap Talk with Restricted State Spaces,
Theoretical Economics, 3:1-27
Austen-Smith, D. (1993): Interested Experts and Policy Advice: Multiple Referrals under Open
Rule, Games and Economic Behavior, 5(1): 3-43
Battaglini, M. (2002): Multiple Referrals and Multidimensional Cheap Talk, Econometrica, 70(4):
1379–1401
Battaglini, M., and U. Makarov (2011): Cheap Talk with Multiple Audiences: an Experimental
Analysis, working paper
Blume, A., D. DeJong, Y. G. Kim and G. Sprinkle (1998): Experimental Evidence on the Evolution
of Meaning of Messages in Sender-Receiver Games, American Economic Review, 88: 1323-1340
Blume, A., D. DeJong, Y. G. Kim and G. Sprinkle (2001): Evolution of Communication with Partial
Common Interest, Games and Economic Behavior, 37: 79-120
Cai, H. and J.T. Wang (2006): Overcommunication in Strategic Information Transmission Games,
Games and Economic Behavior, 56(1): 7-36
Cloke, K. and J. Goldsmith (2000): Resolving Personal and Organizational Conflict: Stories of
Transformation and Forgiveness, Jossey-Bass
Conrad, C.R. and M.S. Poole (2011): Strategic Organizational Communication: In a Global
Economy, Wiley & Sons
Cowan, D. (2003): Taking Charge of Organizational Conflict: A Guide to Managing Anger and
Confrontation, Personhood Press
Crawford, V.P. and J. Sobel (1982): Strategic Information Transmission, Econometrica, 50(6): 1431-1451
Dana, J., R. A. Weber and J. X. Kuang (2007): Exploiting moral wiggle room: experiments
demonstrating an illusory preference for fairness, Economic Theory, 33: 67–80
De Dreu, C.K.W. and M.J. Gelfand (2007): The Psychology of Conflict and Conflict Management in
Organizations, Psychology Press
Dickhaut, J.W., K.A. McCabe and A. Mukherji (1995): An Experimental Study of Strategic
Information Transmission, Economic Theory, 6: 389-403
Eckman, A., T. Lindlof (2003): Negotiating the Gray Lines: an ethnographic case study of
organizational conflict between advertorials and news, Journalism Studies, 4:65–77
Farrell, J. and R. Gibbons (1989): Cheap Talk with Two Audiences, American Economic Review,
79(5): 1214-23
Galeotti, A., C. Ghiglino and F. Squintani (2013): Strategic Information Transmission in Networks,
Journal of Economic Theory, 148(5): 1751-1769
Gilligan, T. and K. Krehbiel (1989): Asymmetric Information and Legislative Rules with a
Heterogeneous Committee, American Journal of Political Science, 459-90
Gneezy, U. (2005): Deception: The Role of Consequences, American Economic Review, 95(1):
384-394
Gupta, A. K., S. P. Raj, and D. L. Wilemon (1985): R&D and Marketing Dialogue in High-Tech
Firms. Industrial Marketing Management 14, 289–300
Hagenbach, J. and F. Koessler (2010): Strategic Communication Networks, Review of Economic
Studies, 77(3):1072-1099
Hao, L. and D. Houser (2013) “Perceptions, Intentions, and Cheating”, working paper
Kolb, D. M., L.L. Putnam, J. M. Bartunek (1992): Hidden Conflict in Organizations: 1st
Edition, SAGE Publications
Krishna, V. and J. Morgan (2001a): A Model of Expertise, Quarterly Journal of Economics,
116(2):747-775
Krishna, V. and J. Morgan (2001b): Asymmetric Information and Legislative Rules: Some
Amendments, American Political Science Review, 95(2):435-452
Lai, E. K., W. Lim, and J. T.-Y. Wang (2011): Experimental Implementations and Robustness
of Fully Revealing Equilibria in Multidimensional Cheap Talk, working paper
Lundquist, T., T. Ellingsen, E. Gribbe and M. Johannesson (2009): The Aversion to Lying, Journal
of. Economic Behavior and Organization, 70(1–2):81–92
Milgrom, P.R. and J. Roberts (1986): Relying on the Information of Interested Parties, Rand Journal
of Economics, 17: 18-32
Miller, K. (2011): Organizational Communication: Approaches and Processes, Cengage
Learning
Minozzi, W., and J. Woon (2011): Competition, Preference Uncertainty, and Jamming: A Strategic
Communication Experiment, working paper
Morgan, J. and P. C. Stocken (2008): Information Aggregation in Polls, American Economic Review,
98(3): 864-96.
Pirnejad, H., Z. Niazkhani, M. Berg and R. Bal (2008): Intra-organizational Communication in
Healthcare--Considerations for Standardization and ICT Application, Methods of Information in
Medicine, 47(4): 336-45
Rahim, Afzalur (2000): Managing Conflict in Organizations: 3rd Edition, ABC-Clio, LLC
Sheremeta, R. and T. Shields (2013): Do Liars Believe? Beliefs and Other-Regarding Preferences
in Sender-Receiver Games, Journal of Economic Behavior and Organization, 94: 268-277
Sutter, M. (2009): Deception through Telling the Truth? Experimental Evidence from Individuals
and Teams, Economic Journal, 119: 47-60
Tobak, S. (2008): Marketing v. Sales: How To Solve Organizational Conflict, CBS MoneyWatch
Wang, J.T., M. Spezio and C.F. Camerer (2010): Pinocchio's Pupil: Using Eyetracking and Pupil
Dilation to Understand Truth Telling and Deception in Sender-Receiver Games, American Economic
Review, 100 (3): 984-1007
Weinrauch, J. D., R. Anderson (1982): Conflicts Between Engineering and Marketing Units,
Industrial Marketing Management, 11(4): 291-301
Xiao, E. (forthcoming): Profit Seeking Punishment Corrupts Norm Obedience, Games and
Economic Behavior
Appendix A. Z-tree Interface
Figure 1. Messaging Screen
Figure 2. Guessing Screen
Figure 3. Result Screen