
The effects of consequential thinking on trust game behavior

Contrary to rational Expected Monetary Value (EMV) predictions that no money will be transferred in Trust Games, in experiments players make positive transfers. Theorists have proposed modifying the Sender’s utility function while retaining utility-maximization assumptions to account for this behavior. Such accounts assume that Senders can grasp the possible outcomes of their choices, their probabilities, and utilities. In reality, however, Senders’ choices are unexpectedly complex, and the assumption that they approximate expected utility maximization is highly implausible. Instead, we suggest that Senders are guided by general propensities to trust others. Two experiments examine the effect of inducing consequential thought on Sender behavior. One induced consequential thought directly; the other did so indirectly. The amount sent was significantly reduced following either manipulation. This suggests that models of Sender behavior in Complex Trust Games should not assume that participants routinely engage in consequential thinking (CT) of the depth that would be required for utility maximization.

Journal of Behavioral Decision Making, J. Behav. Dec. Making (2008). Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/bdm.614

TAMAR KUGLER*, TERRY CONNOLLY and EDGAR E. KAUSEL
Department of Management and Organizations, University of Arizona, Arizona, USA

Copyright © 2008 John Wiley & Sons, Ltd.

Key words: trust; consequential thinking; regret; game

In recent years, experimental economists have popularized the Trust Game (Berg, Dickhaut, & McCabe, 1995) to investigate trusting and trustworthy behavior, and a considerable literature based on this game has now accumulated (see, e.g., Camerer, 2003, for a review; a sample of this research is shown in Table 1).
The game appears to offer an unusually clear and precise measure of one player’s willingness to trust another and a similarly clear measure of the extent to which this degree of trust is justified. Our concern in the present paper is to examine these measurement claims critically, especially the first. Most studies to date have implicitly interpreted trustor behavior in these games as reflecting thoughtful, detailed assessments of the other player’s possible responses to each of the trustor’s specific actions, and a judgment of how likely and how desirable each response is. An alternative interpretation is that the trustor’s behavior simply reflects a broad general tendency to assume that others will or will not behave benevolently in this context—a “trust heuristic,” in the terminology of Meyerson, Weick, and Kramer (1996). This paper examines the question of which interpretation is more plausible.

* Correspondence to: Tamar Kugler, Department of Management and Organizations, University of Arizona, AZ, USA. E-mail: [email protected]

Table 1. Summary of empirical Trust Game papers

Factor at the center of investigation: References
- Demographics (gender, race, age, nationality): Bellemare and Kroger, 2007; Buchan and Croson, 2004; Buchan, Croson, and Dawes, 2002; Burns, 2006; Danielson and Holm, 2007; Fershtman and Gneezy, 2001; Greig and Bohnet, 2007; Ho and Weigelt, 2005; Holm and Nystedt, 2005; Innocenti and Pazienza, 2007; Kiyonari, Yamagishi, Cook, and Cheshire, 2006; Sutter and Kocher, 2007; Schechter, 2007; Willinger, Keser, Lohmann, and Usunier, 2003
- Risk attitude: Coricelli, Morales, and Mahlsted, 2006; Eckel and Wilson, 2004
- Correlation with other trust measures: Glaeser, Laibson, Scheinkman, and Soutter, 2000
- Machiavellianism: Gunnthorsdottir, McCabe, and Smith, 2002
- Fairness of outcome distributions: Cox, 2004
- Variations of the game (group vs. individual behavior, stake size, single vs. repeated play, receipt of advice, and opportunity to punish): Kugler et al., 2007; Cochard et al., 2004; DeBruin, 2002; Engle-Warnick and Slonim, 2004; Keser, 2003; Rigdon et al., 2007; Johansson-Stenman, Mahmud, and Martinsson, 2006; Schotter and Sopher, 2006; Barclay, 2006; Houser, Xiao, McCabe, and Smith, 2007
- Communication between players: Scharlemann, Eckel, Kacelnik, and Wilson, 2001; Slonick, 2007; Houser, Xiao, McCabe, and Smith, 2007

In the simplest (hereafter the “Basic”) version of the Trust Game (Dasgupta, 1988; Kreps, 1990), each player is endowed with $1. The Sender, A, must choose whether to send or keep her dollar. If she decides to keep it, the game ends and each player gets $1. If A sends, her $1 is tripled by the experimenter and delivered to the Responder, B, who now has $4. B must then decide whether to keep the $4 (leaving A with $0) or to split, keeping $2 and returning $2 to A. Each player thus faces at most a single binary decision: A to send or keep, B to keep or split. A’s choice is assumed to depend on her estimate of B’s likely behavior. If A assumes B is self-interested and rational, then her decision to send her $1 will leave B with $4 and no incentive to split, leaving A with $0. If, however, A judges that the probability of B deciding to split is sufficiently large (i.e., she “trusts” B), she will send her $1. She will profit by doing so, increasing her payoff from $1 to $2, if B does, in fact, split (i.e., acts “trustworthily”); otherwise, she ends up with $0. A’s decision to send thus satisfies one accepted definition of trust: the deliberate willingness of an agent to make herself vulnerable to the actions of another agent (Rousseau, Sitkin, Burt, & Camerer, 1998).
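The Basic game’s outcome structure can be captured in a few lines (a hypothetical sketch; the function name is ours, not from the paper):

```python
def basic_trust_game(send: bool, split: bool):
    """Final payoffs (Sender, Responder) in the Basic Trust Game.

    Each player starts with $1. If the Sender keeps, the game ends.
    If she sends, her $1 is tripled, so the Responder holds $4 and
    must either keep it all or split it evenly.
    """
    if not send:
        return (1, 1)          # Sender keeps her dollar; B keeps hers
    if split:
        return (2, 2)          # Responder returns $2 of the $4
    return (0, 4)              # Responder keeps everything

# The three possible outcomes of the game:
print(basic_trust_game(send=False, split=False))  # (1, 1)
print(basic_trust_game(send=True,  split=True))   # (2, 2)
print(basic_trust_game(send=True,  split=False))  # (0, 4)
```

The three tuples are exactly the three outcomes the Sender must evaluate.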
The very simple structure of the game and the single binary choice facing each player make plausible the interpretation of A’s assessment of the probability that B will split as a trust measure (though somewhat different interpretations, such as “optimism about the success of a risky investment,” are not excluded; in fact, some researchers have renamed trust games as “investment games”). The Basic game provides only binary measures: A either does or does not “trust” B; B either is or is not “trustworthy.” Generalized (hereafter, “Complex” or “Continuous”) versions of the Trust Game (Berg et al., 1995) have attempted to provide finer-grained measures of A’s trustingness and of B’s trustworthiness. In these Complex games, the Sender, A, receives a divisible initial endowment, X > 0, and can transfer any amount x ≤ X to the Responder, B. The transferred amount (the “send”) is tripled by the experimenter, so that B receives 3x. She can then return to A any amount y ≤ 3x. The final payoff for A is thus (X − x + y), and for B is (3x − y). (In some versions of the game, B is also independently endowed, raising her final payoff by this amount.) The amount sent, x, is taken as a (continuous) measure of A’s trustingness, the relative return (y/3x) as a (continuous) measure of B’s trustworthiness. The standard game-theoretic analysis of both Basic and Complex Trust Games assumes selfishness and common knowledge of rationality, and predicts that no transfers will be made: B has no incentive to return any part of whatever money she receives, so A has no incentive to send any.
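The Complex game’s payoff rules can be written down directly (a hypothetical sketch; the function name and the example amounts are ours):

```python
def complex_trust_game(X, x, y):
    """Final payoffs (Sender, Responder) in the Complex Trust Game.

    The Sender transfers x (0 <= x <= X) from her endowment X; the
    transfer is tripled, and the Responder returns y (0 <= y <= 3x).
    """
    assert 0 <= x <= X and 0 <= y <= 3 * x
    sender = X - x + y      # what A keeps plus what comes back
    responder = 3 * x - y   # the tripled send minus the return
    return (sender, responder)

# Example: X = $20, Sender sends $10, Responder returns $12
print(complex_trust_game(20, 10, 12))  # (22, 18)
```

In this example the relative return is y/3x = 12/30 = 0.4, so the Sender comes out $2 ahead of keeping everything.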
However, the results of trust game experiments are in sharp contrast with this prediction: average transfers in Complex games are typically about half of X, and returns are around 30% of the transfer (see Camerer, 2003, for a recent review). In Basic games, 30–40% of As typically send, and around 40% of Bs who receive a send return money, depending on the exact parameters of the game (Snijders & Keren, 1999). Economists have proposed several accounts of these data (and of deviations from selfish equilibrium predictions in general). Some accounts are outcome based (e.g., Bolton & Ockenfels, 2000; Fehr & Schmidt, 1999), others intention based (e.g., Charness & Rabin, 2002; Falk & Fischbacher, 2006; Kirchsteiger & Dufwenberg, 2004; Levine, 1998; Rabin, 1993). These models relax the assumption of complete selfishness and allow other factors (e.g., other-regarding preferences, a desire for equitable distribution of payoffs, reciprocation tendencies) to enter A’s utility function. What is implicitly retained, however, is the assumption that the Sender can predict the possible outcomes (for both players) of her possible actions, and can assign probabilities and utilities to these outcomes. In the Basic version of the game, these assumptions are quite plausible. A Sender with a clear grasp of the game rules knows that “keep” will yield a payoff of ($1 for her, $1 for B), while “send” has two possible outcomes: ($2 for her, $2 for B) or ($0 for her, $4 for B). She has only a single behavioral prediction to make: what is the probability that B will split the $4 rather than keeping it all? Since there are only three outcomes, it seems plausible that A could form some assessment of their relative desirability. This assessment might be based on her payoff alone (selfish rationality), or might include other considerations such as B’s payoff (other-regarding preference), the equality of the two payoffs (inequity aversion), and so on.
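For the limiting case of a risk-neutral, purely selfish Sender, this assessment reduces to a single threshold on the estimated probability that B splits (a sketch under exactly those assumptions; the function name is ours):

```python
def should_send(p_split: float) -> bool:
    """Risk-neutral, selfish Sender in the Basic game.

    Sending yields $2 with probability p_split and $0 otherwise,
    while keeping yields $1 for sure: send iff 2 * p_split > 1,
    i.e., iff the Sender thinks a split is more likely than not.
    """
    ev_send = 2 * p_split + 0 * (1 - p_split)
    ev_keep = 1.0
    return ev_send > ev_keep

print(should_send(0.6))  # True: expected value of sending is $1.20
print(should_send(0.4))  # False: expected value of sending is $0.80
```

Other-regarding or inequity-averse utility functions change the threshold but not the one-probability, three-outcome structure of the calculation.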
In order to maximize expected utility, A needs only to assess three utilities and one probability. This does not seem to make excessive cognitive demands. In Complex Trust Games, however, A has a much more difficult task. She faces a range of possible sends (x = $0, $1, $2, . . .) from an endowment of $10 or $20, and B faces a three times larger choice set as to how much to return. A Sender who assumes (correctly) a non-zero probability of positive returns from B faces a dauntingly difficult problem of choosing a utility-maximizing value of x. Appendix A details the analysis, but the problem is readily illustrated. Suppose that A is endowed with X = $20 and is considering the implications of just one of her options: sending $10. She must first consider the 31 options such a send would make available to B (return $0, return $1, . . ., return $30) and make some assessment of (a) how likely and (b) how desirable each is. She must then integrate these 62 items of information into an overall evaluation or “subjective expected utility” of the “Send $10” option. She must now repeat the exercise for her remaining 19 options, which range in complexity from the relatively simple (“Send $1”: only 4 possible responses, closely clustered) to the extremely difficult (“Send $20”: 61 possible responses, widely dispersed), before finally choosing the option offering the best expected utility.[1] The Sender’s problem is, in fact, highly asymmetrical. If she assumes (incorrectly) that nothing will be returned, her conclusion is simple: send nothing. If, however, she correctly assumes that something may be returned for any non-zero send, detailed consequential thinking (CT) in the Complex game is essentially impossible. We suspect, then, that the vast majority of Sender decisions in Complex Trust Games do not involve much coherent linkage between predicted consequences of alternative actions and thoughtful evaluation of them.
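The combinatorial burden described above is easy to make concrete (a sketch assuming whole-dollar amounts):

```python
X = 20  # Sender's endowment in dollars

# For a whole-dollar send x, the Responder can return any of
# $0, $1, ..., $3x: that is 3x + 1 possible replies, each needing
# a probability and a utility judgment (two items of information).
for x in (1, 10, 20):
    replies = 3 * x + 1
    print(f"send ${x}: {replies} replies, {2 * replies} judgments")

# Distinct (send, return) outcome pairs over all sends x = 0..X:
total = sum(3 * x + 1 for x in range(X + 1))
print(total)  # 651
```

A full expected-utility analysis thus asks the Sender to weigh several hundred outcome pairs, against three in the Basic game.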
We suspect that A’s decision is typically made in a state of considerable confusion as to how best to think about the problem. It may reflect “trust” in the sense of a generalized belief in the benevolence of the world (cf. Messick & Kramer, 2001; Meyerson et al., 1996; Yamagishi & Yamagishi, 1994), but it seems unlikely that it reflects detailed consequential thought in the sense of predicting how B will react to any given send, and how A will feel about these reactions. In the two experiments reported below, we attempt to assess the extent to which Senders in a Complex Trust Game think consequentially in deciding how much to send. We suspect that in general the Sender will not achieve a satisfactory framing of the problem that would allow her to lay out and apply backward induction analysis as a guide to how much to send. Our experimental approach is to shape the Sender’s decision process, either directly (Experiment 1) or indirectly (Experiment 2), into a more explicitly consequential form, and then to compare the amount sent to a control condition in which no such shaping was attempted. If Senders in the control condition are already thinking consequentially, these interventions will have no effect. Conversely, if the interventions do, in fact, change sending behavior, then we can infer that Senders in the control condition were not engaging in consequential thought, or at least not to the same extent. This throws some light on the thought processes of Senders whose framing of the problem has not been shaped—that is, the vast majority of participants in Complex Trust Games to date.

[1] A related point concerning the effects of rapid growth of uncertainty is made by Basu (1977) in the context of Prisoner’s Dilemma games. We are grateful to an anonymous reviewer for pointing us to this paper.
For reasons spelled out in Appendix A, our expectation is that increased consequential thought will lead to reduced sending, but the direction of change is not crucial here. If inducing consequential thought changes sending behavior, it is reasonable to infer that participants not so induced were not engaging in such thought, or not to the same extent. Following this logic, in Experiment 1 Senders were encouraged to engage in a form of CT before deciding how much money to send. We asked them to make point predictions of how much money would be returned to them if they sent a large, a moderate, or a small amount—that is, to think about their problem in a simplified but explicitly consequentialist framework. Experiment 2 tried to induce this mode of thought in a somewhat less transparent way by asking Senders to think about scenarios in which they might end up regretting their decisions. In one condition, they were asked to consider the regret they might feel after sending a large amount and receiving little or nothing in return (regret for over-trusting the Responder). In a second condition, Senders were asked to think about the regret they might feel if they sent a small amount and received a larger amount in return (regret for under-trusting the Responder). A third group considered both possible regrets. Details of the experiments and results are presented in the next sections.

EXPERIMENT 1

Senders were assigned to either CT or Control conditions. In the CT condition, before they decided how much to send, they were asked to write down the amount they thought would be returned to them if they sent (a) their whole endowment ($20), (b) half their endowment ($10), and (c) the minimal transfer we allowed ($1). Senders in the Control condition were not asked to make these predictions.[2] This CT manipulation made two different aspects of the game salient to the Senders.
First, in order to answer the manipulation questions, Senders had to put themselves in the role of the Responder, imagine they had received the specified amount, and predict how much money, if any, they would send back. Such perspective taking is known to be a challenging task for many participants (Neale & Northcraft, 1991). Second, by replying to all three manipulation questions simultaneously, Senders were faced with the anticipated consequences of three representative acts they could choose, encouraging direct comparison. The manipulation thus prompts the Sender to adopt a backwards induction framework, first predicting the Responder’s strategy and then comparing the results for each of her own possible moves. Simply put, the Senders are led to construct for themselves simple statements of the form “If I do X then I expect to get Y” for a range of possible Xs, and to record the results in an easily compared format. This framing is, of course, a considerable simplification of the full problem facing the Sender (see Appendix A). It does, however, represent a clear step in the direction of thinking about the range of possible actions available, and about at least point estimates of the other player’s likely responses. As we suggest in Appendix A, analysis of the Sender’s problem under expected positive transfers suggests that sending large amounts will be an unattractive option for most risk-averse participants.

[2] The requirement of a minimum send was introduced since, if A were allowed to send $0, B’s response would be constrained to be $0 and A would have no opportunity to reflect on the range of possible outcomes B might generate. The minimum send requirement does, however, introduce a variation from earlier Complex trust games, and might conceivably limit generalizability from one to the other.
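The backwards induction framing the manipulation encourages can be sketched as follows (the point predictions below are invented for illustration, not taken from the data):

```python
# A Sender reasoning "if I send x, I expect y back" can simply
# compare final payoffs X - x + y across her candidate sends.
X = 20
predicted_return = {1: 0, 10: 8, 20: 17}   # hypothetical point predictions

payoffs = {x: X - x + y for x, y in predicted_return.items()}
best = max(payoffs, key=payoffs.get)
print(payoffs)  # {1: 19, 10: 18, 20: 17}
print(best)     # 1: under these predictions, send the minimum
```

Under any set of predictions in which returns fall short of the amount sent, this comparison points toward the minimum send.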
Even a simplified analysis, such as that set up by the experimental manipulation, is likely to make this preference clearer and reduce the amount sent. We thus propose:

Hypothesis 1. Senders who are primed for explicit consequential thought will send less money than will Senders in the control condition.

Method

Participants. The participants were undergraduate business students, recruited by campus advertisements promising monetary reward and class credit for participation in a decision-making task. All experiments were run in the Decision Behavior Laboratory at the University of Arizona.

Design. Participants were randomly assigned to CONTROL (no manipulation) or CT conditions. Sixty-four individuals participated, and were assigned into 32 Sender–Responder pairs, 16 pairs in each condition.

Procedure. Participants were scheduled in cohorts of 8. Upon arrival at the laboratory, each participant drew a card with a room number and role (Participant A—Sender, or B—Responder), and was shown to a private room. They were instructed to wait with the door closed until all the other participants had arrived, at which point an envelope of experimental instructions would be passed under their door. This arrangement ensured that participants remained anonymous to one another and had no opportunity to interact before the experiment. After all participants had arrived, envelopes containing the experimental materials were distributed. Senders received instructions (including the manipulation questions in condition CT), a decision form, and an envelope containing 20 $1 bills. Responders received instructions and a decision form, and were asked to wait for the decision of the Sender. (Instructions are included in Appendix B.) In the CT condition, Senders answered the manipulation questions before deciding how much money to send. In the control condition, Senders answered the same questions after deciding how much money to send, but before receiving the Responder’s reply.
After completing the manipulation questions (if any), Senders were told to take the money out of the envelope. They completed the decision form and returned it to the envelope with the money they intended to send to the Responder. They then sealed the envelope and passed it under the door. The instructions emphasized that any money not sent to the Responder was theirs to keep. The envelopes were taken to the control room, where decisions were logged, and the amount sent was tripled. The experimenters then delivered the resealed envelopes to randomly chosen Responders, again passing the envelopes under closed doors. Responders reviewed the amount of money sent and decided how much to send back. They completed a decision form, sealed it in an envelope with the money to be returned, and passed it under the door. These envelopes were then logged and returned to the Sender. After both Senders and Responders had made their monetary decisions, they completed a post-experimental questionnaire, and were dismissed individually. They were given no opportunity to meet or identify the other participants.

Results

Sender behavior. The mean amounts sent were $8.88 and $14.75 for the CT and CONTROL conditions, respectively. This difference is significant by an independent samples t-test (t(30) = 2.62, p < .05). The percentage of Senders who sent the minimum amount ($1) rose from 0% in condition CONTROL to 12.5% in condition CT; the percentage of Senders sending all $20 dropped from 56.3% to 18.8%. These results strongly support Hypothesis 1.

Responder behavior. Responders in condition CT returned significantly less than those in condition CONTROL ($8.25 vs. $17.69; t(30) = 2.34, p < .05). Relative returns (y/3x) were, however, not different between the two groups (24.1% vs. 34.2%, t(30) = 1.47, n.s.).
When Senders transferred more than half of their endowment (x > $10), Responders returned an average of $24.92, a mean relative return of 42.7%. For transfers of $10 or less, the mean return was $4.79, a mean relative return of 19.8% (see Figure 1 for the full distribution of returns). Responders, in short, were proportionally more generous when they received generous sends. Senders who chose to send less than half of their endowment would have been better off sending none at all. This finding is consistent with Pillutla, Malhotra, and Murnighan (2003), who suggest that sending less than everything is viewed as a lack of trust and is therefore not reciprocated.

[Figure 1. Distribution of amount sent (x) versus amount sent back (y), Experiment 1]

Payoffs and efficiency. Average payoffs for the Sender and Responder were $19.38 and $18.38 in the CT condition, and $22.94 and $26.56 in the CONTROL condition. The higher transfers in condition CONTROL resulted in more efficient interactions (higher joint payoff) as well as higher mean payoffs for each participant. The difference in payoff between conditions is significant for the Responders (t(30) = 2.01, p < .05) but not for the Senders (t(30) = 1.40, n.s.).

Expectations. Senders’ expectations of the Responder’s returns from a $1, a $10, and a $20 transfer were elicited in both conditions: before the transfer decision in the CT condition (where the questions constituted the experimental manipulation) and after the decision in the CONTROL condition.

Table 2. Mean predicted return versus amount sent in each scenario, Experiment 1

                        $1 sent         $10 sent        $20 sent
CT (n = 16)             $0.44           $7.50           $17.19
CONTROL (n = 16)        $0.25           $9.94           $25.00
Mean actual return      $0.00 (n = 2)   $8.33 (n = 6)   $25.83 (n = 12)
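The break-even point implicit in these figures is a relative return of one third: the Sender’s payoff X − x + y beats simply keeping x only when y > x, i.e., when y/3x > 1/3. A quick illustration (our own sketch, using only the two relative returns quoted above):

```python
def sender_gain(x, rel_return):
    """Sender's net gain from sending x, versus keeping it, when the
    Responder's relative return is rel_return = y / (3x)."""
    y = 3 * x * rel_return
    return y - x

print(sender_gain(10, 0.427))   # positive: generous sends paid off
print(sender_gain(10, 0.198))   # negative: small sends lost money
print(sender_gain(10, 1 / 3))   # approximately 0 at the break-even return
```

At the observed 19.8% relative return, a $10 send loses about $4 relative to keeping the money, which is why small senders would have done better sending nothing.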
For most Senders (88% in condition CONTROL and 81% in condition CT), the amounts actually sent were consistent with the stated expectations: those expecting small returns sent little, those expecting large returns sent more. (Of course, since Senders in the CONTROL condition stated their expectations after deciding how much to send, they may simply have reported consistent expectations post hoc.) Table 2 presents the Senders’ mean predicted return for each of the three scenarios, as well as the mean actual returns from the subset of Bs who actually received $1, $10, or $20. The only scenario that shows a significant difference between experimental conditions is the one in which the Sender sends all $20. Senders in the CONTROL condition reported anticipating significantly higher returns from a $20 send than Senders in the CT condition (t(30) = 2.53, p < .05). This result adds further support to the logic underlying Hypothesis 1. Explicit consideration of the Responder’s likely behavior on receiving a $20 transfer suggests to most CT participants that the return on this transfer will be modest and the investment a poor one. In contrast, CONTROL participants appear to have arrived at their more generous offers by means other than backward induction, and then to have justified them post hoc by indicating an expectation of a large return. Interestingly, Senders in condition CT were too pessimistic; the expectations stated by the CONTROLs were, on average, quite accurate.

Discussion

The results from Experiment 1 show that the Consequential Thinking (CT) manipulation had a large and significant negative effect on the amount of money sent. Senders in the CT condition sent on average 40% less ($8.88 vs. $14.75) than did those in the CONTROL condition, and 12.5% of them sent the minimum allowed ($1) (vs. 0% of the controls).
The CT manipulation thus appears to have had the intended effect of increasing the use of a consequentialist analysis similar to that embodied in the standard economic model, and to have brought the majority of the participants closer to the “send nothing” implication of that model. The details of this linkage, however, remain unclear. The specific series of questions (“Suppose you sent $1 . . . $20 . . . $10 . . .”, with specific questions about the Responder’s likely response and the final payoffs to each participant) was intended both to suggest a framework and to provide a worksheet for the participants, leading them to organize their thinking in a consequentialist framework. It is possible that the wording of the manipulation (“. . . you may find it helpful to consider the likely consequences . . .”; see Appendix B) may also have conferred legitimacy on this approach and thus encouraged its use. Further, answering the questions probably increased the length of time the participants thought about the problem before settling on an amount to send, and this increased deliberation time may itself have favored the emergence of the consequentialist analysis. These possible linkages—suggestion, endorsement, organization, and delay—are, of course, not mutually exclusive, and all four may have contributed to the overall reduction in the amount sent. In Experiment 2, we attempted a less heavy-handed manipulation of the idea of consequential thought, based on consideration of possible regrets. Some Senders were asked to think about the regret they might experience if they had been too trusting in their sending decisions. Others considered a scenario in which they had not been trusting enough. While the motivations made salient by the two scenarios are opposing, both share a focus on the consequences of one’s choices.
If the Sender’s decision problem is fundamentally a motivational one, we would expect the two manipulations to shift patterns of sending in opposite directions, increasing sending in the under-trust condition and reducing sending in the over-trust condition. If, however, the fundamental problem is a framing issue—how to think about the problem—then both manipulations would, as in Experiment 1, induce more CT and shift sending in the same (negative) direction. Details of the argument and of the experiment are given in the next section.

EXPERIMENT 2

Numerous studies have shown that decision makers for whom outcome regret is made salient may make different decisions than do those for whom regret is not made salient (Reb, 2008; Richard, de Vries, & van der Pligt, 1998; Simonson, 1992). Two alternative mechanisms link regret to choice (cf. Connolly & Zeelenberg, 2002). The first, favored by economic regret theorists (Bell, 1982; Loomes & Sugden, 1982, 1987), considers regret as an additional element in the utility of a specific outcome: an outcome has utility in itself, but this may be reduced by regret resulting from comparing the outcome with another that might have been received. A second possibility, proposed by scholars from Janis and Mann (1977) to Baumeister, DeWall, and Zhang (2007), and recently demonstrated empirically by several researchers (e.g., Connolly & Reb, 2005; Reb, 2008; Zeelenberg, 1999), is that anticipation of the regret associated with a poor choice may motivate decision makers to consider their decision more carefully—to achieve “vigilant decision making,” in Janis and Mann’s terminology—or otherwise to improve the quality of their decision processes. We exploit these findings in the context of the Trust Game to better understand the Senders’ decision processes.
We considered two types of potentially regrettable errors: under-trusting, in which the Sender transfers a small amount and receives a generous response from the Responder, suggesting that sending a larger amount would have been financially advantageous (and would, perhaps, have presented the Sender in a nicer light); and over-trusting, in which the Sender sends a large amount and the Responder sends back little or none of the tripled product (thus causing the Sender both financial loss and annoyance at being taken for a sucker). In Experiment 2, we made one or the other, both, or neither of these possible types of regret salient to four different groups of Senders. Regret for over-trusting was manipulated as follows. Before deciding how much to send, the Sender was asked to consider a scenario in which she decided to trust the Responder and sent a large amount, but the behavior was not reciprocated and the Responder kept most (or all) of the money. She was asked to rate how much regret she would expect to feel in this situation. Regret for under-trusting was manipulated using the reverse scenario—a decision to send a small amount, followed by the Responder returning more than the original send. As before, the Sender was asked how much regret she would feel if this happened. (The return is, of course, limited to three times the amount sent. A $3 return for a $1 send might be interpreted by the Sender either as a rebuke or as a signal of a lost opportunity for much larger financial gain. Either of these might be seen as highly regrettable.) Note that although two different regrets are highlighted, both inductions ask the respondent to consider a specific action and its specific consequence. It is likely that a respondent asked to think about the consequences of a very small send may also do the same for a larger one, and vice versa.
The manipulation may thus elicit either regret-avoidant behavior (i.e., avoiding the specific regret-inducing behavior that appears in the manipulation) or consequential behavior. The design allows us to test two alternative hypotheses, one derived from the general process-modifying effect of regret, the other from the specific, utility-modifying effect. Under the former mechanism:

Hypothesis 2a. Increased regret salience will be associated with reduced sending, regardless of the type of regret made salient.

Under the second, utility-modifying mechanism, we would expect under-trust regret to lead to increased sending and over-trust regret to lead to reduced sending, compared to control, with the combination of both sorts of regret offsetting one another:

Hypothesis 2b. Increased salience of under-trust regret will be associated with increased sending, and increased salience of over-trust regret will be associated with reduced sending, compared to controls.

Method

Participants. The participants were 128 undergraduate business students, recruited as for Experiment 1. No participants served in both experiments.

Design. The experiment employed a 2 (regret for over-trusting) × 2 (regret for under-trusting) design, resulting in four experimental conditions: OVER, UNDER, CONTROL, and BOTH. Participants were randomly assigned to the four conditions, with 16 pairs in each condition.

Procedure. The procedure was similar to Experiment 1, except for the experimental manipulation. Senders answered the manipulation questions before deciding how much money to send. In condition UNDER, Senders were asked to imagine that they had sent a small amount ($3 or less), and received from the Responder more than they had sent.
They then rated how much they would regret the outcome, how much they would regret the decision they had made, whether they would blame themselves for their decision, whether they would behave differently if faced with this decision again, and whether they would advise others to behave differently in similar circumstances. All replies were on one-to-seven scales (see Appendix C). In condition OVER, Senders were asked to imagine that they sent most or all of their money ($18 or more), and received nothing in return. They then answered the same questions as in the previous condition. In condition BOTH, Senders received both scenarios; in the control condition they received neither.

Results

Manipulation checks

The four questionnaire measures of decision regret (the first question taps into outcome regret, and is therefore not included in the anticipated process regret score) showed large inter-item correlations (mean correlation .43) and were combined into a single scale of anticipated regret (Cronbach's α = .75). Both manipulations appear to have been successful. The mean regret score in the under-trust condition was 3.94; in the over-trust condition it was 4.17, both means at or above the scale mid-point. Interestingly, in the combined regret condition, where participants rated both possible regrets, under-trust regret was rated more intense than over-trust regret (M = 4.73 vs. 3.52; t(15) = 2.6, p < .05).

Sender behavior

A 2 (under-trust manipulation) × 2 (over-trust manipulation) ANOVA reveals a significant effect of the under-trusting manipulation (F(1,60) = 6.80, p < .05), no significant effect of the over-trusting manipulation (F(1,60) = 1.7, n.s.), and a marginally significant interaction effect (F(1,60) = 2.96, p = .09).
The main effect for under-trusting is mainly attributable to the large difference between the control condition (mean send = $13.13) and the amounts sent in the three experimental conditions (mean sends of $8.44, $7.94, and $9.50 for the BOTH, UNDER, and OVER conditions, respectively). Planned comparisons show a significant difference between the control condition and the other three conditions (t(60) = 3.25, p < .05). The results thus strongly favor Hypothesis 2a over Hypothesis 2b. Consideration of either type of post-decision regret, or both, appears to have reduced the amount sent.

Figure 2 displays the distribution across experimental conditions of x, the amount sent. To facilitate presentation, x is divided into four categories: minimum (x = $1), up to half ($1 < x ≤ $10), more than half ($10 < x < $20), and the entire endowment (x = $20). The distributions are significantly different from one another (χ²(9) = 22.70, p < .05). In addition to corroborating the findings of the previous section, this analysis also shows that the entire $20 is seldom sent when regret is salient: only one Sender out of 48 (2%) sent all $20 after receiving (any) regret priming, while five of 16 Senders (31%) sent the whole amount in the control condition (χ²(1) = 12.02, p < .001).

Responder behavior

Overall, the more money the Sender sent, the more the Responder sent back. The correlation between x, the amount sent, and y, the amount returned, is high in all conditions (0.89, 0.93, 0.73, and 0.83 in BOTH, OVER, UNDER, and CONTROL, respectively). Overall, Responders returned a mean of $8.55, a mean relative return of 25%. When Senders transferred more than half of their endowments (x > $10), Responders returned an average of $21.44, a mean relative return of 36%. When half or less of the endowment was sent, the average return was $17.70, a mean relative return of 21%. As in Experiment 1, generous sending stimulated more generous returns.
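The whole-endowment comparison above (1 of 48 regret-primed Senders vs. 5 of 16 controls sending the full $20) can be checked directly from its 2 × 2 table. The sketch below is illustrative, not part of the original materials; it assumes the reported value is a standard Pearson chi-square without continuity correction:

```python
# Pearson chi-square for "sent the whole $20" (no continuity correction).
# Rows: regret-primed vs. control Senders; columns: sent $20 vs. did not.
observed = [[1, 47],   # regret conditions: 1 of 48 sent the full $20
            [5, 11]]   # control condition: 5 of 16 sent the full $20

row_totals = [sum(r) for r in observed]
col_totals = [sum(c) for c in zip(*observed)]
n = sum(row_totals)

# Sum of (observed - expected)^2 / expected over the four cells,
# where expected[i][j] = row_total[i] * col_total[j] / n.
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i in range(2)
    for j in range(2)
)
print(round(chi2, 2))  # 12.02, matching the value reported in the text
```

With 1 degree of freedom, 12.02 comfortably exceeds the critical value for p < .001 (10.83), consistent with the reported result.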
Senders who chose to send less than half of their endowment would have been better off sending nothing.

Discussion

Overall, the results strongly support Hypothesis 2a. Both under- and over-trust regret manipulations caused a reduction in the amount sent. The effect is not additive: the mean amount sent when both sorts of regret were primed was indistinguishable from the amounts sent under over-trust regret alone and under-trust regret alone. This argues further against the mechanism underlying Hypothesis 2b, that different types of regret modify the utility of specific decision elements. Instead, we read these results as supporting a mechanism of the sort proposed by Janis and Mann (1977) and Baumeister et al. (2007): anticipation of regret in general, and of consequence-related regrets in particular, motivates a "vigilant" decision process that, in this case, takes the form of consequentialist analysis and results in reduced transfers from the Sender.

Figure 2. Distribution of amount sent (x) by Regret Condition, Experiment 2

GENERAL DISCUSSION

A very broad range of factors has been expected, and often shown, to influence behavior in the Trust Game (see Table 1). The two experiments reported here do more than merely add a couple of new variables to this long list. We read our data, together with the formal analysis of the Sender's task, as clarifying the fundamental interpretation of Complex Trust Game results. They suggest that Senders may be behaving in a task of which they have little coherent and comprehensive understanding, either of the structure of the game or of the appropriate principles by which to analyze it. The self-interested, backward-induction logic that comes so readily to the game theorist may simply not be available to most Senders.
If Senders assume (correctly) that there is a non-zero probability of a positive return from a Responder who receives x > $0, they face a complex decision problem. Given this complexity, we see little prospect that the Sender's behavior can be captured, even to a first approximation, by a revised rational model. While fairness, trust, reciprocity, and other unselfish preferences may affect the Sender's decision, we are hesitant to explain the results by adding additional components to the Sender's hypothetical utility function, an approach that has been favored by economists facing a discrepancy between theoretical predictions and experimental data (e.g., Cox, 2004). Certainly Senders may be guided by a variety of partial motivations and analyses, possibly including an altruistic wish to enrich the Responder, a wish for an equitable outcome, a desire to present oneself as a decent person to the experimenter, some home-spun proverb such as "Nothing ventured, nothing gained," or a general disposition to trust others. What they are plainly not primarily guided by is a coherent framework connecting their actions to the possible responses they may elicit, the probabilities of these responses, and a set of utilities for the final outcomes each implies. First, the Sender's problem is simply too complex to make spontaneous emergence of such a framework psychologically plausible. Second, our experiments demonstrate that simple consequentialist manipulations substantially change Sender behavior. This strongly implies that those not exposed to such manipulations do not spontaneously adopt such a framework. Senders may well be moved by a variety of motivations, but these motivations can be reflected in their behavior only quite indirectly if the participants fail to trace out the consequences of their actions in systematic ways.
Experiment 1 provides direct evidence (if more evidence were needed) that the standard game-theoretic analysis is not routinely applied by the majority of Senders. When elements of that analysis were presented in the form of a step-by-step worksheet of possible offers and likely responses, more of the Senders saw both the force and the implications of the analysis, and the mean amount sent was sharply reduced. Presumably, this improved grasp of the core of the game and the standard analysis of it underlies the decline in amount sent over repeated play with strangers (DeBruin, 2002; Keser, 2003; Rigdon, McCabe, & Smith, 2007). It is also consistent with the findings that those who played the trust game after previously playing other games sent less than did those without that prior gaming experience (Schotter & Sopher, 2006); that players learned faster by observing others play a game than by playing it themselves (Dixit & Skeath, 2004); that players who participated in both roles exhibit less trust and less reciprocity (Burks, Carpenter, & Verhoogen, 2003); and that players who thought about the game in groups sent lower transfers (Kugler, Kocher, Sutter, & Bornstein, 2007). In Experiment 2, an increased proportion of the Senders who were induced to reflect on the emotional consequences of either over-trusting or under-trusting the Responder seem to have been led to the same consequentialist analysis, and to its implication that sending a large part of one's endowment is financially unwise. Note especially that the same effect was achieved by priming either over-trust or under-trust regret. It was not the case that thinking about the results of over-trusting led to less trust, while thinking about the results of under-trusting led to more.
It is simply that thoughtful attention to either consequence led to consequentialist thinking, and thus to reduced sending, as suggested by the "regret as a tool for improving the quality of decisions" approach. Unfortunately, given the structure of the trust game, this "improved" thinking about consequences leads to poorer outcomes. This problem-framing interpretation of our results appears to have significant implications for several earlier analyses of trust-game results. For example, it is certainly possible that Senders base their sending decisions in part on an altruistic desire to enrich the Responder (e.g., Cox, 2004). But if such other-regarding utility does, in fact, shape the Sender's behavior, why should it decline when the Sender is led (as CT participants were) to see clearly that the more she sends, the more the Responder will be able to enrich herself (and, not incidentally, at the Sender's expense)? The altruistic account of positive sending seems incompatible with the evidence from Experiment 1 that increasing consequentialist thought reduces mean sending. This is consistent with evidence from repeated games with stranger and partner matching. A "repeated partners" design allows Senders to engage in two types of learning processes. First, they learn the structure of the game (e.g., rules and payoffs), and errors that appear in early rounds are usually corrected over time. Second, Senders learn the nature of their specific opponents, and establish reciprocal relationships with them. A "repeated strangers" design allows learning of the first type only, since the Responders change each round. Therefore, we anticipate that repeated interaction with different Responders will converge to results similar to those we observed in Experiments 1 and 2, and there is evidence that this is indeed the case. DeBruin (2002), Keser (2003), and Rigdon et al. (2007) report a decline in amounts sent over time in "repeated strangers" games.
In contrast, in experiments with repeated partner designs many different dynamics develop (e.g., Cochard, Van, & Willinger, 2004; Engle-Warnick & Slonim, 2004). In summary, any realistic account of the Sender’s behavior in the Complex Trust Game must first acknowledge that she faces a quite complicated decision problem. This complexity, in our view, makes it implausible that Senders engage in a careful assessment of estimated probabilities of the other player’s responses to each possible send and of their utilities for each response (whether or not these utilities include considerations of equity, altruism, and reciprocity, as well as of personal monetary payoffs). They simply cannot: the problem is too complex. More plausibly, we would argue, the Sender is largely overwhelmed by the complexity of the problem and bases her decision on some very partial analysis and incomplete consideration of her own and the Responder’s motivations. Her behavior may well reflect a trust heuristic or tendency toward optimism about the benevolence of the world, and thus the hope that, if she puts a large amount of money in play by making a generous send, she may end up benefiting from it (or, at least, feeling good about the outcome). She may even intuit that a large send offers modest prospects of modest gain and substantial prospects of large or total loss. However, it seems far-fetched to think that her behavior reflects any precise grasp of all the possible consequences of each of her behavior choices, of their likelihood, and of their relative desirability to her. 
The manipulations used in both experiments reported here appear to have had the effect of changing the Senders' problem understanding by inducing a modest amount of what we have called "CT." This altered way of thinking about the problem appears to have led Senders to see that their essential choice was between a safe and satisfactory payoff and a larger, but quite risky, alternative with a significant possibility of substantial loss. As it happens, a majority preferred the former. But the crucial finding was that even this modest increase in CT produced substantial changes in Sender behavior. By implication, Senders in the control groups (and thus in all other Complex Trust Game experiments we are aware of) must have been engaging in even less CT than this modest level, that is, in hardly any consequential thought at all. (Ironically, of course, such non-CT leads to improved payoffs for both players in this game.) We conclude that the Complex Trust Game may be unexpectedly limited as a vehicle for studying complex utility functions and probability assessments: in short, for studying the elements that presumably constitute the phenomenon of trust at any level of detail beyond a simple decision heuristic or a broad and generalized faith in the benevolence of the external world.

ACKNOWLEDGEMENTS

We gratefully acknowledge financial support under contract F49620-03-1-0377 from the AFOSR/MURI to the University of Arizona. We thank Bora Kim for her help in the collection of data used in this paper.

APPENDIX A

THE SENDER'S DECISION PROBLEM IN A $20 COMPLEX TRUST GAME

Denote by x the transfer chosen by the Sender. The Sender's strategy space is discrete and contains 20 strategies, x ∈ {1, 2, …, 20}. Denote by y(x) the return chosen by the Responder for a transfer of x, y(x) ∈ {0, 1, …, 3x}.
Therefore, a strategy y for the Responder is a vector with 20 elements, y = (y(x = 1), y(x = 2), …, y(x = 20)). It is straightforward to show that there are ∏_{x=0}^{20} (3x + 1) ≈ 2.61 × 10^28 possible strategies in the Responder's strategy space. However, since this is a sequential game, it has only ∑_{x=0}^{20} (3x + 1) = 651 possible outcomes. Assume that the Sender is a rational, utility-maximizing player. Assume further that she believes (correctly) that the Responder is not a payoff maximizer, so that there is a non-zero probability of a positive return from any Responder who receives a positive transfer. To maximize her utility, the Sender needs to estimate the probability, for each strategy x, that each value of y(x) (and thus each outcome) will be realized. Since there are 3x + 1 such values, she needs to estimate 3x probabilities: p(y(x) = 0), p(y(x) = 1), …, p(y(x) = 3x − 1). To arrive at an expected utility for this alternative, x, she must now assess the utility of each of the 3x + 1 possible outcomes, and combine these utilities with the appropriate probabilities to form an expected value (a "range of risk problem" requiring non-trivial computation for even an approximate solution: see Behn & Vaupel, 1982, pp. 133–270). The process must be repeated for each strategy x. In total, before she can select the value of x for which her expected utility is greatest, the rational Sender would be required to estimate 631 probabilities, assess 651 outcome utilities, and compute 20 expectations ranging in complexity from the four-element outcome set generated by a send of x = $1 up to the 61-element outcome set generated by a send of x = $20. This is obviously not a plausible model of any actual Sender's thought process, and it is unclear how a real Sender could achieve any reasonable approximation to it.
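The counting argument above, and the sensitivity of the "rational" answer to the Sender's beliefs, can be made concrete in a short sketch. The code below follows the appendix's summation limits (x running from 0 to 20); the uniform belief over returns in the second part is purely our illustrative assumption, not anything the paper attributes to participants:

```python
import math

# Counting the Sender's problem in the $20 game: a send of x dollars is
# tripled, and the Responder may return any whole-dollar y in {0, ..., 3x},
# i.e. 3x + 1 possible returns for each possible send.
xs = range(0, 21)  # summation limits as written in the appendix

strategies = math.prod(3 * x + 1 for x in xs)  # one return choice per send
outcomes = sum(3 * x + 1 for x in xs)          # terminal outcomes of the game

print(f"{strategies:.2e}")  # roughly 2.61e+28 Responder strategies
print(outcomes)             # 651 possible outcomes

# Hypothetical illustration: if the Sender held a uniform belief over the
# Responder's possible returns, the expected monetary value of sending x is
# 20 - x + E[y] = 20 - x + 3x/2 = 20 + x/2, so "send everything" would win.
def emv_uniform(x):
    returns = range(0, 3 * x + 1)
    return 20 - x + sum(returns) / len(returns)

print(emv_uniform(1))   # 20.5
print(emv_uniform(20))  # 30.0
```

A different (and equally simple) belief, for example one concentrating mass on y = 0, flips the recommendation to sending the minimum, which is the point of the appendix: the "rational" send is hostage to hundreds of probability estimates the Sender has no evident way of making.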
The CT manipulation used in Experiment 1, based on thinking about the possible consequences of a small, moderate, and large send, considers only a few possible sends, and reduces the distribution of B's response to each to a single point estimate. Further complexity would be needed to consider such elements of A's utility function as a desire for equitable outcomes, an altruistic pleasure in B's payoffs, reference points such as earning a return at least equal to the initial send, a premium for having exposed oneself to risk, B's differential generosity in response to large and small sends, and so on. If A's behavior is to be interpreted as reflecting some or all of these utility elements, we need some cognitively plausible way in which she might think about them, at least in an approximate way. A formal analysis of her task does not suggest any obvious candidate.

We speculate that the actual cognitive processes of our participants are much less sophisticated than this. We imagine the Control participants as thinking something like: "If I send $20, then B will have $60 to play with and, as a benevolent person, might share some of it with me, so I could easily come out ahead." In contrast, we imagine the CT participants as thinking a little more hard-headedly: "B with $60 might well think it would be fair to simply pay me back my $20 and keep the rest, so I would be no better off for taking the risk. He surely will not send back more than $30, but there's some chance of a modest upside for me. But equally, there is a real chance that he will send back less, maybe nothing at all. So sending $20 seems to offer some chance at a modest gain, and some chance at a substantial or complete loss.
Why risk it, when I can guarantee getting $19 by sending the minimum?" This is, of course, pure speculation, and substantial further research will be needed to tease out participants' actual thought processes. We offer it here merely as an illustration of the sort of "modestly consequential" thinking that might be stimulated by the CT manipulation of Experiment 1, without coming close to the comprehensive analysis sketched in the first part of this Appendix.

APPENDIX B

EXPERIMENT 1—INSTRUCTIONS AND MATERIALS

Instructions—Participant A—Condition CT

Welcome to the experiment. In the next few minutes, you will be making a significant money decision. The money you earn for the experiment will be determined by the decision you make as well as the decisions other people make. The cash you have at the end of the experiment will be yours to keep. Eight people are participating in this session (seven plus you). You have been randomly assigned to be Participant A. You will be paired anonymously with another Participant (B), but you will not learn, now or later, who the other member of your pair is.

To start the experiment, you have been given $20. This money is in the envelope on the desk. Participant B will receive nothing at the start of the experiment. You must now decide how much money to send to B. You can send any whole-dollar amount between $1 and $20. (Note: You must send at least $1.) Any money you do not send to B is yours to keep. The money you send will be tripled by the experimenter before B receives it. That is, if you send $1, B will receive $3. If you send $9, B will receive $27. If you send $18, B will receive $54. This tripled money is B's to keep or to split with you any way s/he wants. B has to decide how much of the money received s/he is going to send back to you. S/he can send none or all or any amount in between. (Note: This amount will not be tripled!) B will keep any money s/he does not send back to you. Please consider your decision carefully.
If you have any questions, slide your "HELP" card under the door and one of the experimenters will assist you.

WHEN YOU ARE SURE YOU UNDERSTAND HOW THE EXPERIMENT WORKS, PLEASE TURN THE PAGE

Before you decide how much money to send to B, you may find it helpful to think about the likely consequences of what you do. Obviously, your outcomes from participating in this experiment depend both on what you do and on what Player B does. Please read each of the following hypothetical situations. In each statement, please fill the spaces with the corresponding amounts of money depending on what you think B would send back.

a. Suppose you send $1 (so Player B receives $3). How much money do you think Player B would send back to you?
I think Player B would send $_____ back to me, so I'd end up with a total of $_____, and B would end up with a total of $_____.

b. Suppose you send $20 (so Player B receives $60). How much money do you think Player B would send back to you?
I think Player B would send $_____ back to me, so I'd end up with a total of $_____, and B would end up with a total of $_____.

c. Suppose you send $10 (so Player B receives $30). How much money do you think Player B would send back to you?
I think Player B would send $_____ back to me, so I'd end up with a total of $_____, and B would end up with a total of $_____.

Please open the envelope if you haven't already done so. You will find $20 and a decision form in it. Put back any money you want to send to Participant B in the envelope. (Remember, you must send at least $1.) The rest of the money is yours to keep. We will triple the money you send before handing it to Participant B. Remember to fill in the amount you sent on the decision form, and put the form in the envelope as well.
Slide the envelope under your door when you are ready and one of the experimenters will collect it. We will contact you again when the Participant B with whom you have been paired has made his or her decision about how much money to return to you. Meanwhile please remain in your room with the door closed.

APPENDIX C

EXPERIMENT 2—INSTRUCTIONS AND MATERIALS

PARTICIPANT A—CONDITION BOTH

Welcome to the experiment. In the next few minutes, you will be making a significant money decision. The money you earn for the experiment will be determined by the decision you make as well as the decisions other people make. The cash you have at the end of the experiment will be yours to keep. Eight people are participating in this session (seven plus you). You have been randomly assigned to be Participant A. You will be paired anonymously with another Participant (B), but you will not learn, now or later, who the other member of your pair is.

To start the experiment, you have been given $20. This money is in the envelope on the desk. Participant B will receive nothing at the start of the experiment. You must now decide how much money to send to B. You can send any whole-dollar amount between $1 and $20. (Note: You must send at least $1.) Any money you do not send to B is yours to keep. The money you send will be tripled by the experimenter before B receives it. That is, if you send $1, B will receive $3. If you send $9, B will receive $27. If you send $18, B will receive $54. This tripled money is B's to keep or to split with you any way s/he wants. B has to decide how much of the money received s/he is going to send back to you. S/he can send none or all or any amount in between. (Note: This amount will not be tripled!) B will keep any money s/he does not send back to you. Please consider your decision carefully. If you have any questions, slide your "HELP" card under the door and one of the experimenters will assist you.
WHEN YOU ARE SURE YOU UNDERSTAND HOW THE EXPERIMENT WORKS, PLEASE TURN THE PAGE

Before making your decision, try to imagine how you would feel if:

1. You sent B most or all of your money ($18 or more) and B sent you back nothing.

I would regret this outcome: 1 (Not at all) to 7 (A great deal)
I would regret my decision: 1 (Not at all) to 7 (A great deal)
I would blame myself for what I did: 1 (Not at all) to 7 (A great deal)
I would behave differently if I were to participate in a similar task again: 1 (Definitely would not) to 7 (Definitely would)
I would advise others not to act like this in similar circumstances: 1 (Definitely would not) to 7 (Definitely would)

2. You sent B a small amount ($3 or less) and B sent you back more than you had sent.

I would regret this outcome: 1 (Not at all) to 7 (A great deal)
I would regret my decision: 1 (Not at all) to 7 (A great deal)
I would blame myself for what I did: 1 (Not at all) to 7 (A great deal)
I would behave differently if I were to participate in a similar task again: 1 (Definitely would not) to 7 (Definitely would)
I would advise others not to act like this in similar circumstances: 1 (Definitely would not) to 7 (Definitely would)

PLEASE ANSWER ALL THESE QUESTIONS BEFORE YOU GO ON

Please open the envelope if you haven't already done so. You will find $20 and a decision form in it. Put back any money you want to send to Participant B in the envelope. (Remember, you must send at least $1.) The rest of the money is yours to keep. We will triple the money you send before handing it to Participant B. Remember to fill in the amount you sent on the decision form, and put the form in the envelope as well. Slide the envelope under your door when you are ready and one of the experimenters will collect it.
We will contact you again when the Participant B with whom you have been paired has made his or her decision about how much money to return to you. Meanwhile, please remain in your room with the door closed. REFERENCES Barclay, P. (2006). Reputational benefits of altruistic punishment. Evolution and Human Behavior, 27, 325–344. Basu, K. (1977). Information and strategy in the iterated Prisoner’s Dilemma. Theory and Decision, 8, 293–298. Copyright # 2008 John Wiley & Sons, Ltd. Journal of Behavioral Decision Making (2008) DOI: 10.1002/bdm T. Kugler et al. Consequential Thinking in the Trust Game Baumeister, R. F., DeWall, C. N., & Zhang, L. (2007). Do emotions improve or hinder the decision making process? In K.D. Vohs, R.F. Baumeister, & G. Loewenstein (Eds.), Do emotions help or hurt decision making? A hedgefoxian perspective (pp. 11–31). New York: Russell Sage. Behn, R. D., & Vaupel, J. W. (1982). Quick analysis for busy decision makers. New York: Basic Books. Bell, D. E. (1982). Regret in decision making under uncertainty. Operations Research, 30, 961–981. Bellemare, C., & Kroger, S. (2007). On representative social capital. European Economic Review, 51, 183–202. Berg, J., Dickhaut, J., & McCabe, K. (1995). Trust, reciprocity, and social history. Games and Economic Behavior, 10, 121–142. Bolton, G. E., & Ockenfels, A. (2000). ERC: A theory of equity, reciprocity and competition. American Economic Review, 90, 166–193. Buchan, N. R., & Croson, R. T. A. (2004). The boundaries of trust: own and others’ actions in the US and China. Journal of Economic Behavior and Organization, 55, 485–504. Buchan, N. R., Croson, R. T. A., & Dawes, R. M. (2002). Swift neighbors and persistent strangers: A cross-cultural investigation of trust and reciprocity in social exchange. American Journal of Sociology, 108, 168–206. Burks, S. V., Carpenter, J. P., & Verhoogen, E. (2003). Playing both roles in the trust game. Journal of Economic Behavior and Organization, 51, 195–216. Burns, J. 
(2006). Racial stereotypes, stigma and trust in post-apartheid South Africa. Economic Modeling, 23, 805– 821. Camerer, C. F. (2003). Behavioral game theory: Experiments in strategic interaction. Princeton, NJ: Princeton University Press. Charness, G., & Rabin, M. (2002). Understanding social preferences with simple tests. Quarterly Journal of Economics, 117, 817–869. Cochard, F., Van, P. N., & Willinger, M. (2004). Trusting behavior in repeated investment game. Journal of Economic Behavior and Organization, 55, 31–44. Connolly, T., & Reb, J. (2005). Regret and the control of temporary preferences. Behavioral and Brain Sciences, 28, 653–654. Connolly, T., & Zeelenberg, M. (2002). Regret in decision making. Current Directions in Psychological Science, 11, 212–220. Coricelli, G., Morales, L. G., & Mahlsted, A. (2006). The investment game with asymmetric information. Metroeconomica, 57, 13–30. Cox, J. C. (2004). How to identify trust and reciprocity. Games and Economic Behavior, 46, 260–281. Danielson, A. J., & Holm, H. J. (2007). Do you trust your brethren? Eliciting trust attitudes and trust behavior in a Tanzanian congregation. Journal of Economic Behavior and Organizations, 62, 255–271. Dasgupta, P. (1988). Trust as a commodity. In D. Gambetta (Ed.), Trust, making and breaking cooperative relations (pp. 49–72). Oxford, England: Basil Blackwell. DeBruin, L. M. (2002). Facial resemblance enhances trust. Proceedings of the Royal Society of London Series B-Biographical Sciences, 269, 1307–1312. Dixit, A., & Skeath, S. (2004). Games of Strategy (2nd ed.). New York: W.W. Norton. Eckel, C. C., & Wilson, R. K. (2004). Internet cautions: Experimental games with internet partners. Experimental Economics, 9, 53–66. Engle-Warnick, J., & Slonim, R. L. (2004). The evolution of strategies in a repeated trust game. Journal of Economic Behavior and Organization, 55, 553–573. Falk, A., & Fischbacher, U. (2006). A theory of reciprocity. Games and Economic Behavior, 54, 293–315. 
Fehr, E., & Schmidt, K. M. (1999). A theory of fairness, competition and cooperation. Quarterly Journal of Economics, 144, 817–868. Fershtman, C., & Gneezy, U. (2001). Discrimination in a segmented society: An experimental approach. Quarterly Journal of Economics, 116, 351–377. Glaeser, E. L., Laibson, D. L., Scheinkman, J. A., & Soutter, C. L. (2000). Measuring trust. The Quarterly Journal of Economics, 115, 811–846. Greig, F. E., & Bohnet, I. (2007). Is there reciprocity in a reciprocal-exchange economy? Evidence from a slum in Nairobi, Kenya. KSG Working Paper No. RWP05-044, Retrieved 16 August 2007, from http://papers.ssrn.com/sol3/ papers.cfm?abstract_id¼807364 Gunnthorsdottir, A., McCabe, K., & Smith, V. (2002). Using the Machiavellianism instrument to predict trustworthiness in a bargaining game. Journal of Economic Psychology, 23, 49–66. Ho, T. H., & Weigelt, K. (2005). Trust building among strangers. Management Science, 51, 519–530. Holm, H., & Nystedt, P. (2005). Intra-generational trust—A semi-experimental study of trust among different generations. Journal of Economic Behavior and Organization, 58, 403–419. Copyright # 2008 John Wiley & Sons, Ltd. Journal of Behavioral Decision Making (2008) DOI: 10.1002/bdm Journal of Behavioral Decision Making Houser, D., Xiao, E., McCabe, K., & Smith, V. (2007). When punishment fails: Research on sanctions, intentions, and non-cooperation. Working Paper, George Mason University. Innocenti, A., & Pazienza, M. G. (2007). Altruism and gender in the trust game. Labsi Working Paper No. 5/2006, Retrieved 16 August 2007, from http://papers.ssrn.com/sol3/papers.cfm?abstract_id¼884378 Janis, I. L., & Mann, L. (1977). Decision making: A psychological analysis of conflict, choice, and commitment. New York: Free Press. Johansson-Stenman, O., Mahmud, M., & Martinsson, P. (2005). Does stake size matter in trust games? Economics Letters, 88, 365–369. Keser, C. (2003). 
Experimental games for the design of reputation management systems. IBM Systems Journal, 42, 498–506.
Kirchsteiger, G., & Dufwenberg, M. (2004). A theory of sequential reciprocity. Games and Economic Behavior, 47, 268–298.
Kiyonari, T., Yamagishi, T., Cook, K. S., & Cheshire, C. (2006). Does trust beget trustworthiness? Trust and trustworthiness in two games and two cultures. Social Psychology Quarterly, 69, 270–283.
Kreps, D. M. (1990). Corporate culture and economic theory. In J. E. Alt & K. A. Shepsle (Eds.), Perspectives on positive political economy (pp. 90–143). Cambridge, England: Cambridge University Press.
Kugler, T., Kocher, M., Sutter, M., & Bornstein, G. (2007). Trust between individuals and groups: Groups are less trusting than individuals but just as trustworthy. Journal of Economic Psychology, 28, 646–657.
Levine, D. (1998). Modeling altruism and spitefulness in experiments. Review of Economic Dynamics, 1, 593–622.
Loomes, G., & Sugden, R. (1982). Regret theory: An alternative theory of rational choice under uncertainty. The Economic Journal, 92, 805–824.
Loomes, G., & Sugden, R. (1987). Testing for regret and disappointment in choice under uncertainty. The Economic Journal, 97, 118–129.
Messick, D. M., & Kramer, R. M. (2001). Trust as a form of shallow morality. In R. M. Kramer & K. S. Cook (Eds.), Trust in society (pp. 89–117). New York: Russell Sage.
Meyerson, D., Weick, K. E., & Kramer, R. M. (1996). Swift trust and temporary groups. In R. M. Kramer & T. R. Tyler (Eds.), Trust in organizations: Frontiers of theory and research (pp. 166–195). Thousand Oaks, CA: Sage.
Neale, M. A., & Northcraft, G. B. (1991). Behavioral negotiation theory: A framework for conceptualizing dyadic bargaining. In L. Cummings & B. Staw (Eds.), Research in organizational behavior (Vol. 13, pp. 147–190). Greenwich, CT: JAI Press.
Pillutla, M. M., Malhotra, D., & Murnighan, J. K. (2003). Attributions of trust and the calculus of reciprocity.
Journal of Experimental Social Psychology, 39, 448–455.
Rabin, M. (1993). Incorporating fairness into game theory and economics. American Economic Review, 83, 1281–1302.
Reb, J. (2008). Regret aversion and decision process quality: Effects of regret salience on decision process carefulness. Organizational Behavior and Human Decision Processes, 105, 169–182.
Richard, R., de Vries, N. K., & van der Pligt, J. (1998). Anticipated regret and precautionary sexual behavior. Journal of Applied Social Psychology, 28, 1411–1428.
Rigdon, M. L., McCabe, K., & Smith, V. (2007). Sustaining cooperation in trust games. Economic Journal, 117, 991–1007.
Rousseau, D. M., Sitkin, S. B., Burt, R. S., & Camerer, C. F. (1998). Not so different after all: A cross-discipline view of trust. Academy of Management Review, 23, 393–404.
Scharlemann, J. P. W., Eckel, C. C., Kacelnik, A., & Wilson, R. K. (2001). The value of a smile: Game theory with a human face. Journal of Economic Psychology, 22, 617–640.
Schechter, L. (2007). Traditional trust measurement and the risk of confound: An experiment in rural Paraguay. Journal of Economic Behavior and Organization, 62, 272–292.
Schotter, A., & Sopher, B. (2006). Trust and trustworthiness in games: An experimental study of intergenerational advice. Experimental Economics, 9, 123–145.
Simonson, I. (1992). The influence of anticipating regret and responsibility on purchase decisions. Journal of Consumer Research, 19, 1–14.
Snijders, C., & Keren, G. (1999). Determinants of trust. In D. V. Budescu & I. Erev (Eds.), Games and human behavior: Essays in honor of Amnon Rapoport (pp. 355–385). Mahwah, NJ: Erlbaum.
Solnick, S. J. (2007). Cash and alternate methods of payment in an experimental game. Journal of Economic Behavior and Organization, 62, 316–321.
Sutter, M., & Kocher, M. G. (2007). Age and the development of trust and reciprocity. Working Paper, University of Innsbruck.
Willinger, M., Keser, C., Lohmann, C., & Usunier, J. C. (2003). A comparison of trust and reciprocity between France and Germany: Experimental investigation based on the investment game. Journal of Economic Psychology, 24, 447–466.
Yamagishi, T., & Yamagishi, M. (1994). Trust and commitment in the United States and Japan. Motivation and Emotion, 18, 129–166.
Zeelenberg, M. (1999). Anticipated regret, expected feedback and behavioral decision making. Journal of Behavioral Decision Making, 12, 93–106.

Authors' biographies:

Tamar Kugler is an Assistant Professor of Management and Organizations at the University of Arizona. Her research focuses on interactive decision making, group decision making, and the role of emotions in decision making.

Terry Connolly is Eller Professor of Management and Organizations at the University of Arizona. Most of his recent research has been concerned with the role of emotion in decision making.

Edgar E. Kausel is a doctoral candidate in the Management and Organizations Department at the University of Arizona. His research interests include the role of emotions in decision making, judgmental biases, and decision making in organizational entry.

Authors' addresses:

Tamar Kugler, Terry Connolly and Edgar E. Kausel, Department of Management and Organizations, University of Arizona, Arizona, USA.

Copyright © 2008 John Wiley & Sons, Ltd. Journal of Behavioral Decision Making (2008) DOI: 10.1002/bdm