In this section, we develop a game-theoretic model that, given estimates of the win probabilities for each combination of strategies adopted by the two teams, produces strategic advice for each of them. We assume that each team has two discrete strategic choices: an offensive (O) or a defensive (D) approach.
We will now describe the input parameters, labelling the two teams A and B. We denote by αDD and βDD, respectively, the probabilities of a win for team A and team B when both adopt a defensive strategy (αDD + βDD ≤ 1); the probability of a draw will, therefore, correspond to 1 − αDD − βDD. In the same way, we define αOO and βOO (both teams with an offensive approach), αDO and βDO (A in defense and B in attack), and αOD and βOD (A in attack and B in defense).
We define the expected payoff aij (bij) for team A (B) when team A follows strategy i and team B adopts strategy j (with i, j ∈ {O, D}). Under the hypothesis that a win awards p points and a draw yields q points, the payoffs are calculated as follows:

aij = p·αij + q·(1 − αij − βij)
bij = p·βij + q·(1 − αij − βij)
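The expected-points computation can be sketched as follows; the function and variable names are ours, and the default scoring p = 3, q = 1 (the common football point scheme) is used only for illustration.

```python
def expected_payoffs(alpha, beta, p=3, q=1):
    """Expected points for teams A and B in one strategy combination.

    alpha = P(A wins), beta = P(B wins); the draw probability is
    1 - alpha - beta. A win awards p points, a draw q points, a loss 0.
    """
    draw = 1.0 - alpha - beta
    a = p * alpha + q * draw  # team A's expected payoff
    b = p * beta + q * draw   # team B's expected payoff
    return a, b
```

For instance, with α = 0.5 and β = 0.3 the draw probability is 0.2, giving expected payoffs of 1.7 points for A and 1.1 for B.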
We are facing a normal-form variable-sum game. The competitive solution by John Nash (1950) requires applying a linear programming algorithm to the matrices [αij] and [βij], based on John von Neumann's minimax theorem (1928). However, the specific structure of the problem considered here admits closed-form solutions, which we present in this section for the reader's convenience. First, it is necessary to verify whether either team has a dominant strategy (i.e. a strategy that yields a higher expected payoff than the other strategy, regardless of the opponent's strategy), in which case the solution is easily found.
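The dominance check can be sketched with a hypothetical helper (names are ours). The payoff table is indexed by (A's strategy, B's strategy), so `own_axis=0` checks team A and `own_axis=1` checks team B.

```python
def dominant_strategy(payoff, own_axis=0):
    """Return 'O' or 'D' if that strategy yields a strictly higher payoff
    against every opponent choice, else None.

    payoff[(i, j)] is the team's own expected payoff when team A plays i
    and team B plays j; own_axis says which index is this team's choice.
    """
    def get(own, opp):
        return payoff[(own, opp)] if own_axis == 0 else payoff[(opp, own)]

    if all(get('O', opp) > get('D', opp) for opp in 'OD'):
        return 'O'
    if all(get('D', opp) > get('O', opp) for opp in 'OD'):
        return 'D'
    return None
```

If the helper returns `None` for both teams, no strategy dominates and the mixed-strategy solution below applies.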
If there is no dominance, the solution consists of probability distributions (xO, xD) on the strategies of A and (yO, yD) on the strategies of B. Such assignments of probabilities to the offensive and defensive strategies correspond to mixed strategies, and can also be interpreted as different levels of adoption of the two approaches.
Let’s define h = (aOD − aOO)/(aDO + aOD − aDD − aOO). The probability distribution for the strategy of team A is then given by xO = 1 − h (offensive) and xD = h (defensive). In all cases, xO + xD = 1.
Similarly, for team B, with k = (bOD − bOO)/(bDO + bOD − bDD − bOO), the distribution is given by yO = 1 − k (offensive) and yD = k (defensive). In all cases, yO + yD = 1.
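The closed-form mixed strategies can be sketched directly from the definitions of h and k; the function name and dictionary layout are ours, with each payoff table indexed by (A's strategy, B's strategy). Consistently with the worked example later in this section, the defensive weight of each team equals its ratio (h for A, k for B).

```python
def mixed_strategies(a, b):
    """Closed-form mixed strategies for teams A and B.

    a[(i, j)] and b[(i, j)] are the expected payoffs of A and B when
    team A plays i and team B plays j (i, j in {'O', 'D'}).
    Assumes neither team has a dominant strategy.
    """
    h = (a[('O', 'D')] - a[('O', 'O')]) / (
        a[('D', 'O')] + a[('O', 'D')] - a[('D', 'D')] - a[('O', 'O')])
    k = (b[('O', 'D')] - b[('O', 'O')]) / (
        b[('D', 'O')] + b[('O', 'D')] - b[('D', 'D')] - b[('O', 'O')])
    x = {'O': 1 - h, 'D': h}  # team A: attack with prob 1-h, defend with prob h
    y = {'O': 1 - k, 'D': k}  # team B: attack with prob 1-k, defend with prob k
    return x, y
```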
The expected payoffs of the two teams under these mixed strategies are EA = ΣiΣj xi yj aij and EB = ΣiΣj xi yj bij (with i, j ∈ {O, D}).
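A minimal sketch of the bilinear expected-payoff computation under two mixed strategies (the function name is ours):

```python
def expected_value(payoff, x, y):
    """Expected payoff sum over i, j of x[i] * y[j] * payoff[(i, j)],
    where x is team A's mixed strategy and y is team B's."""
    return sum(x[i] * y[j] * payoff[(i, j)] for i in 'OD' for j in 'OD')
```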
Consider, for example, a setting where the expected payoffs for teams A and B are as presented in Table 1.
Table 1: The expected payoffs for teams A and B. Each cell has values (aij, bij) with team A adopting strategy i and team B adopting strategy j.

             B plays O     B plays D
A plays O   (1.1, 1.2)    (1.5, 0.7)
A plays D   (1.8, 1.0)    (1.0, 2.5)
This means that for team A, h = (1.5 − 1.1)/(1.8 + 1.5 − 1.0 − 1.1) = 1/3. Hence xO = 2/3 and xD = 1/3; in other words, in the long run, team A should attack in 2/3 of such cases and defend in 1/3 of them. For team B, k = (0.7 − 1.2)/(1.0 + 0.7 − 2.5 − 1.2) = 1/4, such that team B should attack with a probability of 75% (yO = 3/4) and choose a defensive approach with a probability of 25% (yD = 1/4).
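The arithmetic of this example can be checked in a few lines, plugging the payoffs from Table 1 directly into the formulas for h and k:

```python
# Payoff values taken from Table 1 of the example.
h = (1.5 - 1.1) / (1.8 + 1.5 - 1.0 - 1.1)  # team A's defensive weight
k = (0.7 - 1.2) / (1.0 + 0.7 - 2.5 - 1.2)  # team B's defensive weight

print(1 - h, h)  # team A: attack 2/3, defend 1/3
print(1 - k, k)  # team B: attack 3/4 (75%), defend 1/4 (25%)
```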
An important remark concerns the incompleteness of the information on which the table of win probability estimates (i.e. the αij and βij) is based. In fact, every coach has at his disposal inside information regarding the psychophysical condition of his own players, but may not have the same information about the opposing team. Consequently, his win probability estimates could differ from those of the opponent’s coach. Unless the two tables of probabilities lead to different results in terms of dominance, such differences do not influence the strategy chosen by the coach; only in those situations can the opposing coach’s table cause problems. The possibility of repeating the calculation with the algorithm presented in the next section helps to overcome this problem.