As a first step, I propose a definition of a ‘mutually beneficial practice’.

Consider any game for *n* players (where *n*≥2), defined in terms of the strategies available to the players and the payoffs that result from the possible combinations of strategies. Payoffs are interpreted as in Sections 2 and 3. The players may move simultaneously, as in the Dilemma and Hi–Lo Games, or sequentially, as in the Trust, Market and Confidence Games. Simultaneous-move games are described in ‘normal form’, as in Tables 1 and 2; sequential-move games are described in ‘extensive form’, as in Figures 1–3. A strategy in a sequential-move game determines which choice the relevant player will make at every contingency that is possible, given the rules of the game.

For each player *i*=1, …, *n*, there is a set *S*_{i} of strategies, from which she must choose one; a typical strategy for player *i* is written as *s*_{i}. For each strategy profile (*s*_{1}, …, *s*_{n}), there is a payoff to each player *i*, written as *u*_{i}(*s*_{1}, …, *s*_{n}). For each player *i*, let *u̅*_{i} be her *maximin* payoff – that is, the highest payoff that she can guarantee herself, independently of the other players’ strategy choices. (Formally: for each strategy in *S*_{i}, we find the minimum payoff that *i* can receive, given that this strategy is chosen; then we find the strategy for which this minimum payoff is maximised. This strategy’s minimum payoff is *i*’s maximin payoff.) I shall treat each player’s maximin payoff as the benchmark for defining the benefits of cooperation. The intuitive idea is that a player can guarantee that she receives at least this payoff without engaging in any intentional interaction with the other players.
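As an illustration (a minimal sketch, not from the paper; the game and its payoff numbers are hypothetical), the maximin computation just described can be written directly for a game represented as a payoff table:

```python
from itertools import product

def maximin(strategy_sets, payoff, i):
    """Highest payoff player i can guarantee herself, whatever the others do:
    for each of i's strategies, take the worst-case payoff, then take the
    best of those worst cases."""
    return max(min(payoff(p)[i] for p in product(*strategy_sets)
                   if p[i] == s)
               for s in strategy_sets[i])

# Hypothetical 2x2 game (payoff numbers chosen for illustration only).
table = {
    ("c", "c"): (2, 2), ("c", "d"): (-1, 3),
    ("d", "c"): (3, -1), ("d", "d"): (0, 0),
}
strategy_sets = [("c", "d"), ("c", "d")]
payoff = table.__getitem__

print(maximin(strategy_sets, payoff, 0))  # 0: choosing "d" guarantees at least 0
```

Here each player's maximin payoff is zero, mirroring the calibration used in the games discussed below.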

This benchmark might be interpreted in the spirit of Hobbes’s (1651/1962) state of nature. A Hobbesian might say that whatever an individual can be sure of getting for herself by whatever means, irrespective of what others do, cannot be a product of cooperation, and so each player’s maximin payoff sets a lower bound to the value that she can achieve from the game without cooperating with others. Alternatively, one might take a more moralised approach, in which the rules of the game are interpreted as specifying what individuals can *legitimately* or *rightfully* do, rather than what they can *in fact* do.
^{6} For example, in a model of an exchange economy, one might postulate an initial distribution of endowments and a system of rules that allows each individual to keep her own endowments if she so chooses and allows any group of individuals to trade endowments by mutual consent. In such a model, each player’s maximin payoff would be the value to her of keeping her endowments.

I begin with the case of a two-player game, for which the concept of mutual advantage is relatively easy to define. I shall say that a strategy profile (*s*_{1}*, *s*_{2}*) is a *mutually beneficial practice* in a two-player game if and only if, for each player *i*, *u*_{i}(*s*_{1}*, *s*_{2}*) > *u*̅_{i}. In other words: (*s*_{1}*, *s*_{2}*) is a mutually beneficial practice if and only if each player benefits, relative to her maximin benchmark, from both players’ participation in the practice.
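For a game in the payoff-table form used above, this two-player definition reduces to a pair of strict comparisons against the maximin benchmarks (a sketch with hypothetical payoffs; the maximin values are taken as given):

```python
# Hypothetical two-player payoffs (illustration only); by inspection each
# player's maximin payoff is 0, guaranteed by choosing "d".
table = {
    ("c", "c"): (2, 2), ("c", "d"): (-1, 3),
    ("d", "c"): (3, -1), ("d", "d"): (0, 0),
}
maximins = (0, 0)

def is_mutually_beneficial(profile):
    # each player must strictly exceed her maximin benchmark
    u = table[profile]
    return all(u[i] > maximins[i] for i in (0, 1))

print(is_mutually_beneficial(("c", "c")))  # True: both get 2 > 0
print(is_mutually_beneficial(("c", "d")))  # False: the first player gets -1 < 0
```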

In each of the games that I have presented so far, I have deliberately calibrated payoffs so that each player’s maximin payoff is zero. For example, in the Trust Game, player A can guarantee a payoff of zero by choosing *hold*, but incurs the risk of a negative payoff if he chooses *send*. B can ensure a positive payoff if A chooses *send*, but she cannot prevent him from choosing *hold*, which would give her a payoff of zero.
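To make this calibration concrete, here is a sketch with hypothetical Trust Game numbers (the paper’s exact payoffs are not reproduced here; only the signs matter): *hold* gives both players 0, (*send*, *return*) gives both 1, and (*send*, *keep*) gives A −1 and B 2.

```python
from itertools import product

# Hypothetical Trust Game payoffs (signs as in the text; magnitudes mine).
# A's strategies: hold, send.  B's strategies (what she would do were A
# to send): return, keep.  If A holds, B's choice makes no difference.
table = {
    ("hold", "return"): (0, 0), ("hold", "keep"): (0, 0),
    ("send", "return"): (1, 1), ("send", "keep"): (-1, 2),
}
strategies = [("hold", "send"), ("return", "keep")]

def maximin(i):
    return max(min(table[p][i] for p in product(*strategies) if p[i] == s)
               for s in strategies[i])

print(maximin(0), maximin(1))  # 0 0: both maximin payoffs are zero
```

With these numbers, A’s maximin strategy is *hold* (guaranteeing 0, whereas *send* risks −1), and B cannot guarantee herself more than 0, exactly as described in the text.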

In the Trust Game, one and only one (pure) strategy profile, namely (*send*, *return*), is a mutually beneficial practice. Exactly the same is true of the Market Game, consistently with my argument about the parallelism between the two games. In contrast, but in line with my discussion of that game, there is no mutually beneficial practice in the Confidence Game. For completeness, I add that (*cooperate*, *cooperate*) is the unique mutually beneficial practice in the Dilemma Game, and that in the Hi–Lo game, (*high*, *high*) and (*low*, *low*) are both mutually beneficial practices.

Generalising the definition of ‘mutually beneficial practice’ to games with any number of players is not completely straightforward. Consider the three-player Snowdrift Game, shown in Table 3. The story behind the game is that A, B and C are the drivers of three cars stuck in the same snowdrift, each equipped with a shovel. If a way out is dug for any one car, the others can use it. Each driver chooses whether to *dig* or to *wait* (hoping either that someone else will dig, or that a snowplough will arrive on the scene). Digging has a cost of 6, divided equally between those who do the work; provided there is at least one digger, each player gets a benefit of 4 from the work that is done. Each player gets his maximin payoff of zero by choosing *wait*. However, if any two players *dig*, all three get positive payoffs.

Table 3 The Snowdrift Game.
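Since the story fixes the payoffs exactly (a cost of 6 shared equally by the diggers, and a benefit of 4 to everyone provided someone digs), the payoff table can be reconstructed in a few lines (the encoding is mine):

```python
def snowdrift_payoffs(profile):
    """Payoffs in the three-player Snowdrift Game: digging costs 6, split
    equally among the diggers; every player gets a benefit of 4 provided
    at least one player digs."""
    diggers = profile.count("dig")
    benefit = 4 if diggers >= 1 else 0
    cost = 6 / diggers if diggers else 0
    return tuple(benefit - (cost if s == "dig" else 0) for s in profile)

print(snowdrift_payoffs(("dig", "dig", "dig")))    # (2.0, 2.0, 2.0)
print(snowdrift_payoffs(("dig", "dig", "wait")))   # (1.0, 1.0, 4)
print(snowdrift_payoffs(("wait", "wait", "wait"))) # (0, 0, 0)
```

These are the payoff profiles (2, 2, 2) and (1, 1, 4) discussed below; a lone digger gets 4 − 6 = −2, which is why *wait* is each player’s maximin strategy.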

It seems obviously right to say that (*dig*, *dig*, *dig*), which gives the payoff profile (2, 2, 2), is a mutually beneficial practice. But what about (*dig*, *dig*, *wait*), which gives (1, 1, 4)? Relative to their maximin payoffs, all three players benefit from this practice; but is the benefit *mutual*? Surely not: C benefits from A’s and B’s participation in the practice, but that benefit is not reciprocated. One way of putting this is to say that, irrespective of C’s strategy choice, A and B can each be sure of getting a payoff of at least 1 if they both choose *their components of* the practice (*dig*, *dig*, *wait*). Thus, neither of them benefits from C’s choosing her component.

Generalising this argument, I propose the following definition. In any game for *n* players (where *n*≥2), a strategy profile **s***=(*s*_{1}*, …, *s*_{n}*) is a *mutually beneficial practice* if and only if two conditions are satisfied. *Condition 1* is that, for each player *i*=1, …, *n*, *u*_{i}(**s***) > *u*̅_{i}: relative to her maximin benchmark, each player benefits from the practice. To formulate the second condition, let *N* be the set of players {1, …, *n*}, and consider any *subgroup* *G*, where *G* is a subset of *N* that contains at least one and fewer than *n* players. Let *G*′ be the complement of *G*. For each player *j* in *G*, let *v*_{j}(*G*, **s***) be the minimum payoff that *j* can receive, given that each member of *G* chooses his component of **s***. I will say that *G* *benefits from the participation of* *G*′ *in* **s*** if and only if *u*_{j}(**s***) ≥ *v*_{j}(*G*, **s***) for all *j* in *G*, with a strict inequality for at least one *j*. *Condition 2* is that, for *every* subgroup *G* that contains at least one and fewer than *n* players, *G* benefits from the participation of *G*′ in **s***.
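Both conditions can be checked mechanically for any finite game given as strategy sets and a payoff function (a sketch; the function names are mine). Note that *v*_{j}(*G*, **s***) can never exceed *u*_{j}(**s***), since **s*** itself is among the profiles in which every member of *G* plays her component; the bite of Condition 2 therefore lies entirely in the strict inequality.

```python
from itertools import combinations, product

def is_mutually_beneficial(strategy_sets, payoff, star):
    """Check Conditions 1 and 2 for a candidate practice `star` in an
    n-player game given as per-player strategy sets and a payoff function."""
    n = len(star)
    profiles = list(product(*strategy_sets))

    def maximin(i):
        return max(min(payoff(p)[i] for p in profiles if p[i] == s)
                   for s in strategy_sets[i])

    # Condition 1: each player strictly beats her maximin benchmark.
    if not all(payoff(star)[i] > maximin(i) for i in range(n)):
        return False

    def v(j, G):
        # worst payoff to j when every member of G plays her component of star
        return min(payoff(p)[j] for p in profiles
                   if all(p[i] == star[i] for i in G))

    # Condition 2: every proper non-empty subgroup G benefits from the
    # participation of its complement; since u_j(star) >= v_j(G, star)
    # always holds, we need the strict inequality for some j in G.
    return all(any(payoff(star)[j] > v(j, G) for j in G)
               for size in range(1, n)
               for G in combinations(range(n), size))

# The three-player Snowdrift Game from the text: digging costs 6, split
# equally among the diggers; a benefit of 4 to all if at least one digs.
def snowdrift(profile):
    d = profile.count("dig")
    return tuple((4 if d else 0) - (6 / d if s == "dig" else 0)
                 for s in profile)

S = [("dig", "wait")] * 3
print(is_mutually_beneficial(S, snowdrift, ("dig", "dig", "dig")))   # True
print(is_mutually_beneficial(S, snowdrift, ("dig", "dig", "wait")))  # False
```

Applied to the Snowdrift Game, the checker reproduces the verdicts in the text: (*dig*, *dig*, *dig*) passes both conditions, while (*dig*, *dig*, *wait*) fails Condition 2 because the subgroup {A, B} does not benefit from C’s participation.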

In a two-player game, Condition 2 is redundant. (Consider any two-player game and any strategy profile (*s*_{1}*, *s*_{2}*) which satisfies Condition 1. Thus *u*_{1}(*s*_{1}*, *s*_{2}*) > *u*̅_{1}. By the definition of ‘maximin payoff’, *u*̅_{1} is at least as great as player 1’s minimum payoff, conditional on his having chosen *s*_{1}*. So *u*_{1}(*s*_{1}*, *s*_{2}*) is strictly greater than player 1’s minimum payoff, given his choice of *s*_{1}*. This implies that the subgroup {1} benefits from the participation of its complement {2} in (*s*_{1}*, *s*_{2}*). By the same reasoning, {2} benefits from the participation of {1}. So Condition 2 is satisfied.) But when *n* > 2, neither condition implies the other.

Notice that Condition 2 does not require that *every* player benefits from *every other* player’s participation in the practice **s***. For example, consider a variant of the Snowdrift Game in which A’s choice of *dig* benefits only A and B, B’s choice of *dig* benefits only B and C, and C’s choice of *dig* benefits only C and A. C does not benefit from A’s participation in the practice (*dig*, *dig*, *dig*), A does not benefit from B’s participation, and B does not benefit from C’s. Still, each subgroup benefits from the participation of its complement, and so Condition 2 is satisfied.

Notice also that, in defining the benefit that *G* receives from the participation of *G*′ in the practice **s***, Condition 2 takes *G*’s participation in that practice as given. It does not ask what payoff profiles *G* could have guaranteed itself by concerted action. Recall that I want to be able to say that an ongoing practice is mutually beneficial even if it is less than optimal. For example, suppose that **s*** and **s**** are two different priority rules that could be followed by the one million users of a national road network. In fact, everyone follows **s***, and this works well; relative to maximin benchmarks, everyone benefits greatly. However, traffic engineers can show that there would be a small but positive benefit to everyone if everyone switched to **s****. It is possible that a subgroup of 999,999 road users could guarantee that each of them would be better off if they all switched to **s****, irrespective of the behaviour of the one remaining individual. But it still seems right to say that this subgroup benefits from its complement’s participation in the ongoing practice **s***, and hence that this practice is mutually beneficial.

My definition of a mutually beneficial practice does not impose any restrictions on how the benefits of a practice are distributed between the participants, beyond the condition that every participant gains *some* benefit. One might argue that an account of cooperation needs to take account of the distribution of benefits, and that for a practice to be genuinely cooperative, benefits must be distributed in a reasonably fair way. I say ‘reasonably’ because my analysis is intended to apply to ongoing practices, without assuming that individuals can solve coordination problems by abstract team reasoning. It would be inappropriate to require that, in order for individuals to be led by team reasoning to participate in cooperative practices, those practices must be *perfectly* fair according to some well-defined criterion that everyone endorses. Still, by adding some minimum standards of fairness, it might be possible to construct a satisfactory definition of a *fair* mutually beneficial practice. For the purposes of this paper, however, I leave this issue aside.

As a preliminary to presenting a schema of team reasoning, I need to state some definitions. I shall say of any proposition *p* and any set of players *N* that *in N, there is common reason to believe* *p* if and only if (i) each player *i* in *N* has reason to believe *p*, (ii) each player *i* in *N* has reason to believe that each player *j* in *N* has reason to believe *p*, and so on.
For any property *q*, I shall say that *in N, there is reciprocal reason to believe that q holds for members of N* if and only if (i) each player *i* in *N* has reason to believe that *q* holds for each player *j*≠*i* in *N*, (ii) each player *i* in *N* has reason to believe that each player *j*≠*i* in *N* has reason to believe that *q* holds for each player *k*≠*j* in *N*, and so on.

Notice that the definition of ‘reciprocal reason to believe’ makes no reference to what any player has reason to believe *about himself*. This omission is significant when the property *q* refers to choices made by the players themselves. For example, take the Dilemma Game and consider what is implied by the proposition that, in the set of players {Row, Column}, there is reciprocal reason to believe that ‘will choose *cooperate*’ holds for members of that set. Among these implications are: that Row has reason to believe that Column will choose *cooperate*; that Row has reason to believe that Column has reason to believe that Row will choose *cooperate*; and so on. But nothing is said about whether Row has reason to believe that *Row* will choose *cooperate*. Nor (since one can have reason to believe a proposition that is in fact false) has anything been said about whether *in fact* Row will choose *cooperate*. For example, suppose that Row and Column have played the Dilemma Game against one another many times, and both players have always chosen *cooperate*. They are about to play the game again. One might argue that, by the canons of inductive reasoning, there is (in the set of players {Row, Column}) reciprocal reason to believe that each player will choose *cooperate*. But each player can still ask whether he or she has reason to make this choice.

I now present a schema of team reasoning that can be used by each player in any game that has two or more players. The set of players is *N*={1, …, *n*}; **s***=(*s*_{1}*, …, *s*_{n}*) is any strategy profile in that game. The propositions P1 to P3 are premises that ‘I’ (one of the players) accept; the proposition C is a conclusion that ‘I’ infer from those premises. I as author am not asserting that this schema ‘really’ is valid. Rather, it is a schema that any player *might* endorse. Were she to do so, she would *take it to be* valid.

*Schema of Cooperative Team Reasoning*

(P1) In *N*, there is common reason to believe that **s*** is a mutually beneficial practice.

(P2) In *N*, there is reciprocal reason to believe that each player will choose her component of **s***.

(P3) In *N*, there is reciprocal reason to believe that each player endorses and acts on the Schema of Cooperative Team Reasoning with respect to *N*.

(C) I should choose my component of **s*** (or some other strategy that is unconditionally at least as beneficial for every player).

The concept of ‘endorsing and acting on the Schema of Cooperative Team Reasoning’ is the analogue of group identification in Bacharach’s theory. To endorse the schema is to dispose oneself to treat *N* as a unit of agency and to play one’s part in its joint actions. The schema itself prescribes what that part is. For each player *i* (and leaving aside the complication of the ‘or some other strategy …’ clause in C), that part is *i*’s component of a strategy profile **s*** for which there is common reason to believe in its being mutually beneficial (P1) and for which there is reciprocal reason to believe in its being chosen (P2). However, the schema has implications for each player’s choices only if there is assurance that all players endorse it (P3).

The status of P3 in the schema is analogous with that of a clause in a contract between two parties stating that the contract is to be activated if and when both parties have signed it. The first party to sign such a contract makes a unilateral commitment to abide by the terms of the contract, but those terms do not require anything of her unless and until the second party signs. Similarly, if a player commits herself to act on the Schema of Cooperative Team Reasoning, that commitment makes no demands on her unless there is reciprocal reason to believe that every player has made the same commitment.

One might ask why P3 is needed in addition to P2. It would certainly be possible to postulate a reasoning schema (call it the Simple Schema) in which C can be inferred merely from P1 and P2. Roughly speaking, a player who endorses the Simple Schema commits herself to the *individual* action of choosing her component of a mutually beneficial practice when other players can be expected to choose theirs. This is an intelligible moral principle, but it does not involve the idea of *joint* intention or *joint* action. For example, consider the Trust Game, with **s*** defined as the mutually beneficial practice (*send*, *return*). Consider how B might reason about the game, given that she has reason to believe that A will choose *send* (or indeed, given that she knows that A has already chosen *send*). If she endorses the Simple Schema, she does not need to enquire into A’s intentions in order to conclude that she should choose *return*. But this makes it difficult to represent the idea that she intends her action as a repayment of A’s trust.

In contrast, suppose that in the Trust Game, A and B each endorse the Schema of Cooperative Team Reasoning, and that there is reciprocal reason for them to believe that this is the case. Further, suppose that there is reciprocal reason for them to believe that A will choose *send* and B will choose *return*. The latter beliefs might be supported by inductive inferences from previous observations of *send* and *return* in Trust Games – perhaps previous games played between A and B, or perhaps games played by other pairs of players drawn from some population of which they are both members. Then A and B can each infer they should choose their respective components of the mutually beneficial practice (*send*, *return*), *with the joint intention of participating in that practice*. In choosing *send*, A acts on his part of this intention, trusting B to act on her part of it. B repays A’s trust by doing so.

Now consider how this argument extends to the Market Game. In the Market Game, (*send*, *return*) is the strategy profile that is uniquely recommended to individually rational and self-interested players who have reciprocal reason to believe one another to be individually rational and self-interested. Thus, A might choose *send* and B might choose *return*, each acting on an individual intention to pursue his or her self-interest, as suggested by Adam Smith’s account of how we get our dinners. But there is another possibility: A and B might both endorse the Schema of Cooperative Team Reasoning. If there is reciprocal reason for them to believe that this is the case, and if there is reciprocal reason for them to believe that A will choose *send* and that B will choose *return*, they can choose *send* and *return* with the joint intention of participating in a mutually beneficial practice.
