Published online by De Gruyter, February 6, 2023

On Iterated Nash Bargaining Solutions

  • Cheng-Zhong Qin, Guofu Tan, and Adam C. L. Wong

Abstract

This paper introduces a family of domains of bargaining problems allowing for non-convexity. For each domain in this family, single-valued bargaining solutions satisfying the Nash axioms are explicitly characterized as solutions of the iterated maximization of Nash products weighted by the row vectors of the associated bargaining weight matrices. This paper also introduces a simple procedure to standardize bargaining weight matrices for each solution into an equivalent triangular bargaining weight matrix, which is simplified and easy to use for applications. Furthermore, the standardized bargaining weight matrix can be recovered from bargaining solutions of simple problems. This recovering result provides an empirical framework for determining the bargaining weights.

JEL Classification: C71; C78

Corresponding author: Guofu Tan, Department of Economics, University of Southern California, Los Angeles, CA 90089-0253, USA, E-mail:

Acknowledgement

We are grateful for comments from Youngsub Chun, Pradeep Dubey, Mamoru Kaneko, Abraham Neyman, Hans Peters, Yair Tauman, Shmuel Zamir, Yongsheng Xu, Junjie Zhou, an anonymous referee, and seminar participants at a number of universities and conferences.

Appendix A: Proof of Proposition 2

To prove Proposition 2, we fix in what follows a domain B satisfying B1–B3 and a bargaining solution f on B.

For S ⊆ R^n, we let Par(S) ≡ {u ∈ S : ∄ v ∈ S s.t. v ≥ u and v ≠ u} denote the strict Pareto frontier of S and com(S) ≡ (S − R_+^n) ∩ R_+^n the comprehensive hull of S. When S is finite, such as S = {u, v, w}, we may write Par({u, v, w}) and com({u, v, w}) as Par(u, v, w) and com(u, v, w). Note that, by B3, (com(S), 0) ∈ B.
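
For a finite S these objects are directly computable. The following Python sketch is ours and purely illustrative (the names `par` and `in_com` are not from the paper):

```python
# Illustrative helpers (not from the paper): strict Pareto frontier and
# comprehensive-hull membership for a finite S in R^n.

def par(S):
    """Par(S): points u in S with no v in S such that v >= u and v != u."""
    return [u for u in S
            if not any(all(vi >= ui for vi, ui in zip(v, u)) and v != u
                       for v in S)]

def in_com(x, S):
    """Membership in com(S) = (S - R_+^n) ∩ R_+^n: x >= 0 and some u in S dominates x."""
    return all(xi >= 0 for xi in x) and \
           any(all(ui >= xi for ui, xi in zip(u, x)) for u in S)

S = [(3, 1), (2, 2), (1, 3), (1, 1)]
print(par(S))              # (1, 1) is dominated by (2, 2); the rest survive
print(in_com((2, 1), S))   # dominated by (3, 1) -> True
print(in_com((3, 2), S))   # no point of S dominates it -> False
```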

Lemma A.1

If f satisfies INV, IIA, and SIR, then, for any (S, d) ∈ B, f(S, d) ∈ Par(S).

Proof

By B1 and INV, we assume d = 0 without loss of generality. Set S_+ ≡ S ∩ R_+^n. Since S_+ ⊆ S, Par(S_+) ⊆ Par(S). By B2, (S_+, 0) ∈ B and, by SIR, f(S, 0) ∈ S_+. Hence, by IIA, f(S, 0) = f(S_+, 0).

Let T ≡ com(S_+). Then S_+ ⊆ T ⊆ R_+^n, Par(T) = Par(S_+), and (T, 0) ∈ B by B3. Suppose v ≡ f(T, 0) ∉ Par(T). Then there exists w ∈ T such that w ≥ v and w ≠ v. Since v ≫ 0 by SIR, we also have w ≫ 0. Set β_i ≡ v_i/w_i > 0 for i = 1, …, n. Then β ≡ (β_1, …, β_n) ≤ (1, …, 1) and β ≠ (1, …, 1). Let T′ ≡ {(β_1 u_1, …, β_n u_n) : u ∈ T}. By B1, (T′, 0) ∈ B. By construction, T′ ⊆ T and f(T, 0) = v ∈ T′. Consequently, by IIA, f(T′, 0) = v. However, by INV, f(T′, 0) = (β_1 v_1, …, β_n v_n) ≠ v, a contradiction. Therefore, f(T, 0) ∈ Par(T).

Finally, since S_+ ⊆ T and f(T, 0) ∈ Par(T) = Par(S_+) ⊆ S_+, IIA implies f(S_+, 0) = f(T, 0). In summary, f(S, 0) = f(S_+, 0) = f(T, 0) ∈ Par(T) = Par(S_+) ⊆ Par(S). □

Define a binary relation ≻_f on R_{++}^n based on f as follows:

u ≻_f v  iff  u ≠ v  and  f(com(u, v), 0) = u.

Lemma A.2

Suppose f satisfies INV, IIA, and SIR.

  (a) ≻_f is a strongly monotone strict linear order; that is, for any u, v, w ∈ R_{++}^n, (i) exactly one of u ≻_f v, v ≻_f u, and u = v holds, (ii) u ≻_f v and v ≻_f w imply u ≻_f w, and (iii) u ≥ v and u ≠ v imply u ≻_f v.

  (b) For any (S, 0) ∈ B, f(S, 0) is the unique maximizer of ≻_f on S ∩ R_{++}^n; that is, f(S, 0) ∈ S ∩ R_{++}^n and

f(S, 0) ≻_f u  for any  u ∈ (S ∩ R_{++}^n) \ {f(S, 0)}.

Proof

Throughout this proof, f is fixed and we drop the subscript “f” in ≻_f.

We first establish part (a). Property (i) is obvious and property (iii) follows from Lemma A.1. To show property (ii), suppose u ≻ v and v ≻ w. Then property (i) implies u ≠ w. Again by property (i), it suffices to show that w ≻ u is false. Suppose on the contrary w ≻ u. By Lemma A.1, f(com(u, v, w), 0) ∈ {u, v, w}. If f(com(u, v, w), 0) = u, then IIA implies u ≻ w, contradicting w ≻ u. Similarly, f(com(u, v, w), 0) = v contradicts u ≻ v, and f(com(u, v, w), 0) = w contradicts v ≻ w. Therefore, we must have u ≻ w and hence property (ii).

We next establish part (b). Set S_{++} ≡ S ∩ R_{++}^n and S_+ ≡ S ∩ R_+^n. That f(S, 0) ∈ S_{++} directly follows from SIR. Set u* ≡ f(S, 0). We need to show u* ≻ u, or equivalently, u* = f(com(u, u*), 0), for any u ∈ S_{++} \ {u*}. By B2, (S_+, 0) ∈ B. By IIA, u* = f(S_+, 0). By B3, (com(S_+), 0) ∈ B and (com(u, u*), 0) ∈ B. By Lemma A.1, we have f(com(S_+), 0) ∈ S_+. Thus, by IIA, f(com(S_+), 0) = f(S_+, 0) = u* ∈ com(u, u*). By IIA again, f(com(S_+), 0) = f(com(u, u*), 0). This shows u* = f(com(u, u*), 0). □

Let log u ≡ (log u_1, …, log u_n) for u ∈ R_{++}^n; let log S ≡ {log u : u ∈ S} for S ⊆ R_{++}^n.

Lemma A.3

Suppose f satisfies INV, IIA, and SIR. For any u ∈ R_{++}^n define

H_f(u) ≡ {v ∈ R_{++}^n : v ≻_f u},
L_f(u) ≡ {v ∈ R_{++}^n : u ≻_f v}.

  (a) log H_f(u) = log H_f(1) + log u;

  (b) log L_f(u) = log L_f(1) + log u;

  (c) log H_f(1) = −log L_f(1);

  (d) log H_f(1) and log L_f(1) are convex.

Proof

Throughout this proof, f is fixed and we drop the subscript “f” in ≻_f, H_f(⋅), and L_f(⋅).

Step 1. We claim that, for any λ ∈ R and u^0, u^1, v^0, v^1 ∈ R_{++}^n with u^1 ≻ u^0 and log v^1 − log v^0 = λ(log u^1 − log u^0), we have v^1 ≻ v^0 if λ > 0 and v^0 ≻ v^1 if λ < 0.

To see this, first define u^t for t ∈ R by

log u^t ≡ log u^0 + t(log u^1 − log u^0).

Substep 1.1. We show that the claim of this step holds for λ = 1 and λ = −1. When λ = 1, log v^1 − log v^0 = log u^1 − log u^0, or equivalently v_i^1/u_i^1 = v_i^0/u_i^0, for each i ∈ {1, …, n}. Thus, by INV and u^1 ≻ u^0,

v^1 = ((v_i^1/u_i^1) u_i^1)_{i=1,…,n} = ((v_i^0/u_i^0) u_i^1)_{i=1,…,n} ≻ ((v_i^0/u_i^0) u_i^0)_{i=1,…,n} = v^0.

For the case of λ = −1, the claim can be similarly proved with the roles of v^0 and v^1 interchanged.

Substep 1.2. We show that the claim of this step holds when λ = 1/m for any positive integer m. By the assumptions on u^0, u^1, v^0, v^1 and Substep 1.1, it suffices to show u^{1/m} ≻ u^0. Suppose on the contrary u^0 ≻ u^{1/m}. Since log u^{(i−1)/m} − log u^{i/m} for i = 1, 2, …, m are all equal to (1/m)(log u^0 − log u^1), it follows from Substep 1.1 that u^0 ≻ u^{1/m} ≻ ⋯ ≻ u^{(m−1)/m} ≻ u^1. The transitivity of ≻ established in Lemma A.2(a) then implies that u^0 ≻ u^1, which contradicts u^1 ≻ u^0.

Substep 1.3. We show that the claim of this step holds for all λ > 0. Given that λ > 0, it follows from the assumptions on u^0, u^1, v^0, v^1 and Substep 1.1 that it suffices to show u^λ ≻ u^0. Consider the “log-convex hull” of {u^0, u^λ} defined by T ≡ {u^t : 0 ≤ t ≤ λ}. From B3, (com(T), 0) ∈ B. Suppose on the contrary that f(com(T), 0) = u^t for some t ∈ [0, λ). Then we can pick a positive integer m large enough that t + 1/m < λ. By construction, u^{t+1/m} ∈ T and, by IIA, f(com(u^t, u^{t+1/m}), 0) = u^t. It follows that u^t ≻ u^{t+1/m}. Hence, from Substep 1.1 it follows that u^0 ≻ u^{1/m}. This contradicts Substep 1.2. Therefore, f(com(T), 0) ≠ u^t for any t ∈ [0, λ). But f(com(T), 0) ∈ T from Lemma A.1. Consequently, f(com(T), 0) = u^λ. Thus, by IIA, f(com(u^0, u^λ), 0) = u^λ, or equivalently, u^λ ≻ u^0.

Substep 1.4. Substep 1.1 with λ = −1 and Substep 1.3 together imply that the claim of this step also holds for all λ < 0.

Step 2. Establish parts (a), (b), and (c).

The result in Step 1 with λ = 1 can be written as: if log v^1 − log v^0 = log u^1 − log u^0, then v^1 ≻ v^0 and u^1 ≻ u^0 are equivalent, and v^0 ≻ v^1 and u^0 ≻ u^1 are equivalent. Since log 1 = 0, parts (a) and (b) follow. The result in Step 1 with λ = −1 can be written as: if log v^1 − log v^0 = −(log u^1 − log u^0), then v^1 ≻ v^0 and u^0 ≻ u^1 are equivalent, and v^0 ≻ v^1 and u^1 ≻ u^0 are equivalent. Part (c) follows.

Step 3. Establish part (d).

We only prove that log L(u) is convex; the proof for log H(u) is analogous. Suppose that log v^0 and log v^1 are in log L(u), or equivalently, u ≻ v^0 and u ≻ v^1. Consider log v^λ ≡ λ log v^1 + (1 − λ) log v^0 for λ ∈ (0, 1). We need to show that u ≻ v^λ. If v^0 = v^1, then u ≻ v^0 = v^λ. If v^0 ≻ v^1, then Step 1 implies u ≻ v^0 ≻ v^λ. If v^1 ≻ v^0, then Step 1 implies u ≻ v^1 ≻ v^λ. This shows u ≻ v^λ. □
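
As a concrete instance, take ≻ to be the symmetric Nash-product ordering u ≻ v iff ∏_i u_i > ∏_i v_i; then log H(1) = {x : Σ_i x_i > 0}, and the translation and reflection properties (a) and (c) can be checked numerically. The following Python sketch is ours and purely illustrative; it plays no role in the proof:

```python
import math
import random

# Numerical check of Lemma A.3(a) and (c) for the Nash-product ordering
# u ≻ v iff Π u_i > Π v_i, where log H(1) = {x : Σ x_i > 0}.

def succeeds(u, v):                # u ≻ v under the Nash-product ordering
    return math.prod(u) > math.prod(v)

def in_log_H(x, u):                # is exp(x) in H(u) = {v : v ≻ u}?
    return succeeds([math.exp(xi) for xi in x], u)

random.seed(0)
one = [1.0, 1.0, 1.0]
for _ in range(300):
    u = [math.exp(random.uniform(-1, 1)) for _ in range(3)]
    x = [random.uniform(-2, 2) for _ in range(3)]
    shifted = [xi + math.log(ui) for xi, ui in zip(x, u)]
    # (a): x in log H(1)  <=>  x + log u in log H(u)
    assert in_log_H(x, one) == in_log_H(shifted, u)
    # (c): x in log H(1)  <=>  -x in log L(1), i.e. 1 ≻ exp(-x)
    assert in_log_H(x, one) == succeeds(one, [math.exp(-xi) for xi in x])
print("Lemma A.3(a) and (c) hold on all sampled points")
```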

Lemma A.4

Suppose f satisfies INV, IIA, and SIR. Define H_f(⋅) and L_f(⋅) as in Lemma A.3. There exist pairwise orthogonal non-zero vectors α^1, …, α^n ∈ R^n with α^1 ≥ 0 such that x ∈ log H_f(1) (resp. x ∈ log L_f(1)) if and only if there is some j ∈ {1, …, n} such that α^j ⋅ x > 0 (resp. α^j ⋅ x < 0) and α^k ⋅ x = 0 for k = 1, …, j − 1. Moreover, each such α^i (i = 1, …, n) is unique up to positive scalar multiplication.

Proof

For notational convenience, let Ĥ ≡ log H_f(1) and L̂ ≡ log L_f(1). By Lemma A.2(a), Ĥ ∩ L̂ = ∅, Ĥ ∪ L̂ = R^n \ {0}, and R_{++}^n ⊆ Ĥ. By Lemma A.3(c), Ĥ = −L̂. By Lemma A.3(d), Ĥ and L̂ are convex.

Note that Ĥ and L̂ are nonempty. Denote their common boundary by E_1. Since Ĥ and L̂ are convex, E_1 is a hyperplane with some non-zero normal vector α^1 ∈ R^n. Since Ĥ = −L̂, we have 0 ∈ E_1. It follows that E_1 is an (n − 1)-dimensional vector subspace of R^n. Choose the direction of the normal vector α^1 so that {x : α^1 ⋅ x > 0} ⊆ Ĥ and {x : α^1 ⋅ x < 0} ⊆ L̂. Since R_{++}^n ⊆ Ĥ, we have α^1 ≥ 0.

Since E_1 is a nontrivial vector space, Ĥ ∪ L̂ = R^n \ {0}, Ĥ = −L̂, and Ĥ, L̂, E_1 are all convex, it follows that Ĥ ∩ E_1 and L̂ ∩ E_1 are nonempty and convex, and that they share the common boundary E_2 ⊆ E_1 with some non-zero normal vector α^2 ∈ E_1. Applying the previous reasoning, E_2 is an (n − 2)-dimensional vector subspace of R^n. Choose the direction of the normal vector α^2 so that {x : α^2 ⋅ x > 0} ∩ E_1 ⊆ Ĥ and {x : α^2 ⋅ x < 0} ∩ E_1 ⊆ L̂. Since α^2 ∈ E_1 and α^1 ⊥ E_1, we have α^2 ⊥ α^1.

Repeating the preceding process and reasoning generates a collection of subspaces {E_1, …, E_n} and a collection {α^1, …, α^n} of corresponding normal vectors such that E_1 ⊃ E_2 ⊃ ⋯ ⊃ E_n, each E_j is an (n − j)-dimensional vector subspace of R^n (so that E_n = {0}), and each normal vector α^j of E_j satisfies {x : α^j ⋅ x > 0} ∩ E_{j−1} ⊆ Ĥ and {x : α^j ⋅ x < 0} ∩ E_{j−1} ⊆ L̂ (we let E_0 ≡ R^n). The process stops at E_n = {0} because Ĥ ∩ E_n and L̂ ∩ E_n are empty. Hence, the separation between Ĥ and L̂ is complete, in that for any point x ∈ R^n \ {0} we can determine whether x ∈ Ĥ or x ∈ L̂ using {α^1, …, α^n}.

It is easy to verify that Ĥ and L̂ are exactly as described in the lemma. Furthermore, the direction of each α^j is uniquely determined. It remains to show that α^1, …, α^n are pairwise orthogonal. To this end, pick any j, k ∈ {1, …, n} with j < k. By our construction, α^k ∈ E_{k−1} ⊆ E_j and α^j ⊥ E_j. It follows that α^j ⊥ α^k. □

Proof of Proposition 2

Suppose f satisfies INV, IIA, and SIR. Let α^1, …, α^n ∈ R^n be vectors with the properties claimed in Lemma A.4. Let W be the n × n matrix whose ith row is α^i. Since α^1, …, α^n are pairwise orthogonal and non-zero, they are linearly independent.

By Lemma A.3(b), log v ∈ log L_f(u) if and only if, for some j ∈ {1, …, n}, α^j ⋅ (log v − log u) < 0 and α^k ⋅ (log v − log u) = 0 for k < j. Equivalently, u ≻_f v if and only if there is some j ∈ {1, …, n} such that g_j(v, 0) < g_j(u, 0) and g_k(v, 0) = g_k(u, 0) for k < j, where g_j(u, d) ≡ ∏_{i=1}^n (u_i − d_i)^{α_i^j} for u ≥ d and j = 1, …, n. Now, Lemma A.2(b) implies that f is the W-bargaining solution.

Notice that α^1 is the unique maximizer of g_1(⋅, 0) on S°. Hence, α^1 = f(S°, 0). By SIR, α^1 ≫ 0. Therefore, W is an admissible weight matrix. From Lemma A.4 we know that the α^1, …, α^n above can be uniquely normalized to lie in Δ^n. □
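
On a finite set of candidate points, the W-bargaining solution characterized above can be computed by directly iterating the maximizations of g_1, g_2, …. The following Python sketch is our code and purely illustrative; the disagreement point d = 0 and the tie tolerance are our assumptions:

```python
import math

def w_solution(S, W, d=None):
    """Iterated maximization of weighted Nash products over a finite set S:
    round j keeps the maximizers of g_j(u, d) = Π (u_i - d_i)^{W[j][i]}
    among the survivors of round j - 1."""
    n = len(W[0])
    d = d or [0.0] * n
    survivors = [u for u in S if all(ui > di for ui, di in zip(u, d))]  # SIR
    for row in W:
        def g(u):                       # log of g_j; same maximizers
            return sum(a * math.log(ui - di)
                       for a, ui, di in zip(row, u, d))
        best = max(g(u) for u in survivors)
        survivors = [u for u in survivors if abs(g(u) - best) < 1e-12]
        if len(survivors) == 1:
            break
    return survivors[0]

# Three points share the same symmetric Nash product (round 1 is a tie);
# the second row of the triangular weight matrix breaks the tie for player 1.
S = [(0.5, 0.5), (0.25, 1.0), (1.0, 0.25), (0.3, 0.3)]
W = [(0.5, 0.5), (1.0, 0.0)]
print(w_solution(S, W))   # -> (1.0, 0.25)
```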

Appendix B: Other Proofs not in the Text

Proof of Lemma 1

“If” part. That a row operation of the first kind does not change the represented solution is obvious. Now consider a row operation of the second kind: replacing the jth row α^j by α^j + θα^k, where θ is a scalar (not necessarily positive) and α^k is a preceding row (i.e. 1 ≤ k < j). From (2), all elements of Σ_{j−1}(S, d) must have a common value of ∏_{i=1}^n (u_i − d_i)^{α_i^k}. This common value, denoted π, must be positive. Therefore, the new maximization problem in the jth round can be written as

max_{u ∈ Σ_{j−1}(S,d)} ∏_{i=1}^n (u_i − d_i)^{α_i^j + θα_i^k} = max_{u ∈ Σ_{j−1}(S,d)} π^θ ∏_{i=1}^n (u_i − d_i)^{α_i^j},

which is clearly equivalent to the original maximization problem in the jth round.

“Only if” part. First, the row operations in this lemma are clearly reversible; that is, if W̃ can be obtained from W through an application of those row operations, then W can also be obtained from W̃ in the same way. Also note that our row operations of the second kind are exactly the operations needed in the Gram–Schmidt orthogonalization process. Therefore, a finite sequence of our row operations allows us to transform W, via Gram–Schmidt orthogonalization, into some n × n matrix whose rows are pairwise orthogonal. This new matrix has full rank because W does. Each row of the new matrix is therefore non-zero, and we can apply row operations of the first kind to normalize each row to lie in Δ^n. Thus, a finite sequence of our row operations transforms W into some n × n matrix A whose rows are pairwise orthogonal and in Δ^n. Similarly, W̃ can be transformed into some n × n matrix Ã with the same properties through a finite sequence of our row operations. From the “if” part proved above, f is the A-bargaining solution and f̃ is the Ã-bargaining solution. Now, if f = f̃, then the uniqueness part of Proposition 2(b) gives A = Ã. Therefore, we can obtain A from W and then W̃ from A through a finite sequence of our row operations. □
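
The Gram–Schmidt step invoked here uses only additions of multiples of preceding rows, so it can be carried out explicitly. A Python sketch of this step (our code; the example matrix is arbitrary):

```python
# Sketch (our code, not the paper's): make the rows of a full-rank W pairwise
# orthogonal using only additions of multiples of *preceding* rows, i.e. the
# Gram-Schmidt step, which is a row operation of the second kind.

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def orthogonalize(W):
    rows = [list(r) for r in W]
    for j in range(1, len(rows)):
        for k in range(j):          # subtract projections onto preceding rows
            theta = -dot(rows[j], rows[k]) / dot(rows[k], rows[k])
            rows[j] = [rj + theta * rk for rj, rk in zip(rows[j], rows[k])]
    return rows

W = [[0.5, 0.3, 0.2],
     [1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]
R = orthogonalize(W)
for j in range(3):
    for k in range(j):
        assert abs(dot(R[j], R[k])) < 1e-12   # rows are pairwise orthogonal
print(R[0])   # the first row is never touched: [0.5, 0.3, 0.2]
```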

Proof of Proposition 3

Existence. Let W be an admissible weight matrix (i.e. n × n, of full rank, and with first row in R_{++}^n). Let M_k denote the set of k × k full-rank matrices whose first row belongs to Δ_+^k and whose first non-zero entry in every column is positive. First, W is equivalent to some matrix in M_n because the first row of W is in R_{++}^n and hence can be multiplied by a positive scalar so as to lie in Δ_{++}^n. In the following, we shall show that, for any positive integer k, if B ∈ M_{k+1}, then B is equivalent to some matrix D ∈ M_{k+1} such that, for some i* ∈ {1, …, k + 1}, (i) the i*th column of D has all entries equal to zero except the first (which must be positive), and (ii) the submatrix of D obtained by deleting the first row and i*th column of D belongs to M_k. (B and D must also have the same first row, since they are equivalent and both are in M_{k+1}.) Once this claim is proved, it is easy to see that we can iteratively transform W into equivalent matrices until an equivalent triangularly standard weight matrix is ultimately obtained.

Now, let k be a positive integer and let B ∈ M_{k+1}, with the entry in its jth row and ith column denoted β_i^j. Recursively define

K_1 ≡ {i ∈ {1, …, k + 1} : β_i^1 > 0},
K_j ≡ argmin_{i ∈ K_{j−1}} β_i^j / β_i^1  for  j = 2, …, k + 1.

Note that the above sets are nonempty because K_1 is nonempty (since β^1 ∈ Δ_+^{k+1}). Note also that K_{k+1} must be a singleton (otherwise B would not have full rank). Now, let i* be the unique element of K_{k+1}. We use the first row of B to eliminate the i*th entry in all other rows; that is, we replace each jth row β^j (j = 2, …, k + 1) by β^j − (β_{i*}^j / β_{i*}^1) β^1. After these row operations, the second row must be non-negative and non-zero. We then multiply the second row by a positive constant so that (after deleting its i*th entry, which is zero) it belongs to Δ_+^k. The resulting matrix is our desired matrix D.
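
One round of this construction can be sketched as follows (our Python code; the example matrix B ∈ M_3 is illustrative). It computes the pivot column i* through the nested argmins, eliminates that column below the first row, and renormalizes the second row:

```python
# One elimination step of the existence construction (our sketch).

def standardize_step(B):
    m = len(B)                          # m = k + 1
    B = [list(r) for r in B]
    K = [i for i in range(m) if B[0][i] > 0]              # K_1
    for j in range(1, m):                                 # K_2, ..., K_{k+1}
        ratios = {i: B[j][i] / B[0][i] for i in K}
        lo = min(ratios.values())
        K = [i for i in K if ratios[i] == lo]
    assert len(K) == 1, "K_{k+1} is a singleton when B has full rank"
    i_star = K[0]
    for j in range(1, m):               # eliminate column i* below row 1
        c = B[j][i_star] / B[0][i_star]
        B[j] = [bj - c * b1 for bj, b1 in zip(B[j], B[0])]
    s = sum(B[1])                       # rescale the second row to sum to 1
    B[1] = [b / s for b in B[1]]
    return i_star, B

B = [[0.2, 0.5, 0.3],
     [1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0]]
i_star, D = standardize_step(B)
print(i_star)   # pivot column: 1 (0-indexed)
print(D)        # column 1 of D is zero below the first row
```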

Uniqueness. It suffices to show that if A and B are n × n full-rank matrices that have all rows in Δ_+^n and are upper triangular up to permutations of columns, then A and B are equivalent only when they are equal. In the rest of this proof, we establish this claim by contradiction.

Take any A, B with the above properties. Suppose on the contrary that A and B are equivalent but A ≠ B. Let α^j denote the jth row of A, and β^j the jth row of B. Since A ≠ B, there is a unique row index k ∈ {1, …, n} such that

(B.1) α^k ≠ β^k,
(B.2) α^j = β^j  whenever  j < k.

Since all rows of A and B are in Δ_+^n, we have

(B.3) α^j, β^j ∈ Δ_+^n  for  j = 1, …, n.

Note that an upper triangular matrix has full rank if and only if all of its diagonal entries are non-zero. Therefore, for any n × n full-rank matrix that is upper triangular up to permutations of columns, only one permutation of columns can make it upper triangular. To save notation, we assume without loss of generality that this permutation for A is the identity (i.e. A itself is upper triangular). Let σ: {1, …, n} → {1, …, n} denote the permutation of columns that makes B upper triangular. Thus,

(B.4) α_i^j = β_{σ(i)}^j = 0  whenever  i < j.

Since A and B are of full rank, we have

(B.5) α_i^i, β_{σ(i)}^i > 0  for  i = 1, …, n.

Since A and B are equivalent, we know from our definition of matrix equivalence that there exist (unique) constants θ_1, …, θ_k such that

(B.6) θ_1 α^1 + ⋯ + θ_k α^k = β^k

and

(B.7) θ_k > 0.

Now, if it is the case that θ_j = 0 whenever j < k, then (B.6) becomes θ_k α^k = β^k, which together with (B.3) implies

θ_k = θ_k Σ_{i=1}^n α_i^k = Σ_{i=1}^n β_i^k = 1,

so that (B.6) becomes α^k = β^k, contradicting (B.1). Therefore, k ≥ 2 and at least one of θ_1, …, θ_{k−1} is non-zero. Then there is a unique index k′ ∈ {1, …, k − 1} such that

(B.8) θ_{k′} ≠ 0,
(B.9) θ_j = 0  whenever  j < k′.

Using (B.9) to simplify (B.6), we have

(B.10) θ_{k′} α^{k′} + ⋯ + θ_k α^k = β^k.

Considering the k′th entry of (B.10) and using (B.4) to simplify, we have

(B.11) θ_{k′} α_{k′}^{k′} = β_{k′}^k.

Since α_{k′}^{k′} > 0 from (B.5) and β_{k′}^k ≥ 0 from (B.3), it follows from (B.8) and (B.11) that

(B.12) θ_{k′} > 0.

We are now ready to derive a contradiction as follows:

0 = β_{σ(k′)}^k = θ_{k′} α_{σ(k′)}^{k′} + ⋯ + θ_k α_{σ(k′)}^k = θ_{k′} β_{σ(k′)}^{k′} + ⋯ + θ_{k−1} β_{σ(k′)}^{k−1} + θ_k α_{σ(k′)}^k = θ_{k′} β_{σ(k′)}^{k′} + θ_k α_{σ(k′)}^k > θ_k α_{σ(k′)}^k ≥ 0.

In the above chain, the first and fourth equalities follow from (B.4); the second and third follow from (B.10) and (B.2), respectively; the strict inequality follows from (B.5) and (B.12); and the final weak inequality follows from (B.7) and (B.3). □

Proof of Theorem 2

By Theorem 1(b) or Proposition 2, f is the W-bargaining solution for some admissible weight matrix W. Let α^i (i = 1, …, n) denote the ith row of W. Then α^1, …, α^n ∈ R^n are linearly independent, α^1 ≫ 0, and (1) is satisfied. We shall show that, for j = 1, …, n,

  (i) S_j is nonempty,

  (ii) β^j ∈ Δ_{++}^n,

  (iii) there are constants θ_1, …, θ_j such that θ_1 α^1 + ⋯ + θ_j α^j = β^j and θ_j > 0,

  (iv) β^1, …, β^j are linearly independent.

Once (i)–(iv) are established, we know that Ŵ is an admissible weight matrix equivalent to W; hence, from Lemma 1, f is the Ŵ-bargaining solution and the proof is complete.

We prove (i)–(iv) by induction. Observe that claims (i)–(iv) hold for j = 1. Take any k ∈ {2, …, n} and suppose (i)–(iv) hold for j = 1, …, k − 1. Then,

S_k = {u ∈ R_{++}^n : Σ_{i=1}^n u_i ≤ 1 and Σ_{i=1}^n β_i^j ln u_i = ln ϕ for j = 1, …, k − 1}.

The log-transformation of S_k can be written as ln S_k = V_1 ∩ V_2, where

V_1 ≡ {ln u ∈ R^n : Σ_{i=1}^n u_i ≤ 1},
V_2 ≡ {ln u ∈ R^n : Σ_{i=1}^n β_i^j ln u_i = ln ϕ for j = 1, …, k − 1}.

Since (iv) holds for j = k − 1 (i.e. β^1, …, β^{k−1} are linearly independent), V_2 is an (n − k + 1)-dimensional flat surface in R^n and hence is nonempty. On the boundary of V_1, there are many points above V_2 and many below V_2. Indeed, for any u ∈ Δ_{++}^n close enough to (1/n, …, 1/n), it follows from β^1, …, β^{k−1} ∈ Δ_{++}^n and ϕ ∈ (0, 1/n) that ln u lies above V_2; for any u ∈ Δ_{++}^n that has at least one coordinate close enough to 0, ln u lies below V_2. This shows that ln S_k is nonempty. Consequently, S_k is nonempty, and thus (i) holds for j = k.

Since (iii) holds for j = 1, …, k − 1, it follows from the proof of the “if” part of Lemma 1 that the sets of maximizers in the first k − 1 rounds of the iterative maximization in Definition 2 do not change if we replace α^1, …, α^{k−1} with β^1, …, β^{k−1}. Therefore, Σ_{k−1}(com(S_k), 0) = S_k. The kth round of the iterative maximization can be written as

(B.13) max_{u ∈ R_{++}^n} Σ_{i=1}^n α_i^k ln u_i  s.t.  Σ_{i=1}^n u_i ≤ 1 and Σ_{i=1}^n β_i^j ln u_i = ln ϕ, j = 1, …, k − 1.

Substituting û ≡ ln u, (B.13) is equivalent to

(B.14) max_{û ∈ R^n} Σ_{i=1}^n α_i^k û_i  s.t.  Σ_{i=1}^n e^{û_i} ≤ 1 and Σ_{i=1}^n β_i^j û_i = ln ϕ, j = 1, …, k − 1.

Problem (B.14) is a convex optimization problem. Moreover, the constraints satisfy the Slater condition because (ln ϕ, …, ln ϕ) satisfies the affine equality constraints (since (ii) holds for j = 1, …, k − 1) and strictly satisfies the inequality constraint (since ϕ < 1/n).

By definition, β^k must be a solution of problem (B.13) and hence ln β^k must be a solution of problem (B.14). Thus, the following Kuhn–Tucker conditions for problem (B.14) hold:

(B.15) α_i^k − λβ_i^k − Σ_{j=1}^{k−1} μ_j β_i^j = 0  for  i = 1, …, n,
(B.16) λ ≥ 0,  Σ_{i=1}^n β_i^k ≤ 1,  λ(1 − Σ_{i=1}^n β_i^k) = 0,
Σ_{i=1}^n β_i^j ln β_i^k = ln ϕ  for  j = 1, …, k − 1,

where λ and μ_1, …, μ_{k−1} are the Lagrange multipliers. If λ = 0, then (B.15) implies that α^k is a linear combination of β^1, …, β^{k−1} and hence also a linear combination of α^1, …, α^{k−1}, because (iii) holds for j = 1, …, k − 1. This contradicts the linear independence of α^1, …, α^n. Therefore, it must be the case that λ > 0. From (B.16), Σ_{i=1}^n β_i^k = 1, which implies that (ii) holds for j = k. Moreover, from (B.15),

β_i^k = (1/λ)(α_i^k − Σ_{j=1}^{k−1} μ_j β_i^j).

This establishes (iii) for j = k.

Finally, the validity of claim (iii) for j = 1, …, k together with the linear independence of α^1, …, α^k implies that β^1, …, β^k are linearly independent. It follows that claim (iv) holds for j = k. □


Received: 2022-08-14
Revised: 2022-12-07
Accepted: 2023-01-04
Published Online: 2023-02-06

© 2023 Walter de Gruyter GmbH, Berlin/Boston

Downloaded on 31.3.2023 from https://www.degruyter.com/document/doi/10.1515/bejte-2022-0095/html