Matrix representation of a cross product and related curl-based differential operators in all space dimensions

A higher dimensional generalization of the cross product is associated with an adequate matrix multiplication. This index-free view allows for a better understanding of the underlying algebraic structures, among which are generalizations of Grassmann's, Jacobi's and Room's identities. Moreover, such a view provides a higher dimensional analogue of the decomposition of the vector Laplacian which itself gives an explicit index-free Helmholtz decomposition in arbitrary dimensions $n\ge2$.


Introduction
The interplay between different differential operators is at the basis not only of pure analysis but also of many applied mathematical considerations. One possibility is to study, instead of the properties of a linear homogeneous differential operator with constant coefficients, $\mathcal{A}\, a := \sum_{|\alpha|=k} A_\alpha \nabla^\alpha a$, where $\alpha = (\alpha_1, \ldots, \alpha_n)^T \in \mathbb{N}_0^n$ is a multi-index of length $|\alpha| := \alpha_1 + \ldots + \alpha_n$, $\nabla^\alpha := \partial_1^{\alpha_1} \ldots \partial_n^{\alpha_n}$ and $A_\alpha \in \mathbb{R}^{N \times m}$, its symbol $\mathbb{A}(b) := \sum_{|\alpha|=k} A_\alpha\, b^\alpha$, where we use the notation $b^\alpha = b_1^{\alpha_1} \cdot \ldots \cdot b_n^{\alpha_n}$ for $b \in \mathbb{R}^n$. Note that $\mathcal{A} : C^\infty_c(\Omega, \mathbb{R}^m) \to C^\infty_c(\Omega, \mathbb{R}^N)$ with $\Omega \subseteq \mathbb{R}^n$ open, and in the first-order case we obtain for all $a \in C^\infty_c(\Omega, \mathbb{R}^m)$ also the expression $\mathcal{A}\, a = \mathbb{A}(\mathrm{D}\, a)$ with $\mathbb{A} \in \mathrm{Lin}(\mathbb{R}^{m \times n}, \mathbb{R}^N)$. The approach of looking at the vector differential operator $\nabla$ and operating with it algebraically in the manner of a vector is also referred to as vector calculus or formal calculation.
An example of such a differential operator is the derivative $\mathrm{D}$ itself, but also div, curl, $\Delta$ or inc. One of the most prominent relations in vector calculus is $\operatorname{curl} \nabla \zeta \equiv 0$ for scalar fields $\zeta \in C^\infty_c(\Omega)$, $\Omega \subseteq \mathbb{R}^3$ open, which, from an algebraic point of view, reads $b \times b = 0$ for all $b \in \mathbb{R}^3$ (where a scalar factor can be and is omitted).
In this paper we take a closer look at a higher dimensional analogue of the curl, or rather at the underlying generalized cross product. An extension of the usual cross product of vectors in $\mathbb{R}^3$ to vectors in $\mathbb{R}^n$ depends on which properties are to be fulfilled. The three basic properties of the vector product are: linearity in both arguments; that the vector $a \times b$ is perpendicular to both $a, b \in \mathbb{R}^3$ (and thus belongs to the same space); and that its length is the area of the parallelogram spanned by $a$ and $b$. Gibbs used these properties also to define the cross product, see [8, Chapter II]. It turns out that such a vector product exists only in three and seven dimensions, cf. [19]. However, the 7-dimensional vector product does not satisfy Jacobi's identity but rather a generalization of it, namely the Malcev identity, cf. [5, p. 279] and the references at the end of the section therein. We do not follow these constructions here and instead generalize the cross product to all dimensions by omitting one of its basic properties. These considerations are usually carried out in coordinates, i.e. in index notation. We are concerned instead with their matrix representation, which provides a better understanding of the underlying algebraic structures. Such a view has already proved very useful in extending Korn inequalities for incompatible tensor fields to higher dimensions, cf. [17], where first results in these matrix representations were obtained. In the present paper, we develop the underlying algebraic structures, among which are generalizations of Grassmann's, Jacobi's and Room's identities. Moreover, such a view provides a higher dimensional analogue of the decomposition of the vector Laplacian, which itself gives an explicit index-free Helmholtz decomposition in arbitrary dimensions $n \ge 2$.

Notations
As usual, $\cdot \otimes \cdot$ and $\langle \cdot, \cdot \rangle$ denote the dyadic and the scalar product, respectively. We write $\cdot$ to highlight the multiplication of a scalar with a vector or a matrix. The space of symmetric $(n \times n)$-matrices is denoted by $\mathrm{Sym}(n)$ and the space of skew-symmetric $(n \times n)$-matrices by $\mathfrak{so}(n)$. We use lower-case Greek letters to denote scalars, lower-case Latin letters to denote column vectors and upper-case Latin letters to denote matrices, with exceptions for the dimensions: if not otherwise stated we have $n, m, N \in \mathbb{N}$ and $n \ge 2$. The identity matrix is denoted by $I_n$. For the symmetric part, the skew-symmetric part and the transpose of a matrix $P$ we write $\operatorname{sym} P$, $\operatorname{skew} P$ and $P^T$, respectively.
Algebraic view of a generalized cross product

Inductive introduction
From an algebraic point of view the components of the usual cross product $a \times b$ are of the form $\alpha_i \beta_j - \alpha_j \beta_i$ for $1 \le i < j \le 3$, sorted (and multiplied by $-1$) in such a way that the resulting vector is perpendicular to both $a$ and $b$. For general $n \in \mathbb{N}$ we have $\frac{n(n-1)}{2}$ combinations of the form $\alpha_i \beta_j - \alpha_j \beta_i$ with $1 \le i < j \le n$, and we array them as a column vector in $\mathbb{R}^{\frac{n(n-1)}{2}}$, cf. (3.1), which only for $n = 3$ lies in the same space as the vectors $a$ and $b$. More precisely, using the notation $a = (\overline{a}, \alpha_n)^T$, $b = (\overline{b}, \beta_n)^T$ with $\overline{a}, \overline{b} \in \mathbb{R}^{n-1}$, we introduce the generalized cross product $\times_n : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^{\frac{n(n-1)}{2}}$ inductively by

$$a \times_n b := \begin{pmatrix} \overline{a} \times_{n-1} \overline{b} \\ \beta_n \cdot \overline{a} - \alpha_n \cdot \overline{b} \end{pmatrix} \in \mathbb{R}^{\frac{n(n-1)}{2}}, \qquad a \times_2 b := \alpha_1 \beta_2 - \alpha_2 \beta_1, \tag{3.3}$$

from which the bilinearity and the anti-commutativity follow immediately. We show in section 3.3 that this generalized cross product $\times_n$ also satisfies the area property.

Remark 3.1. The anti-commutativity of the (usual or generalized) cross product is a consequence of the area property. Indeed, let $n, d \in \mathbb{N}$, $n \ge 2$, and let $\times : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^d$ be a bilinear map which satisfies the area property $\lVert a \times b \rVert^2 = \lVert a \rVert^2 \lVert b \rVert^2 - \langle a, b \rangle^2$. Then for $a = b$ we obtain $a \times a = 0$. Linearizing the last equality, $0 = (a + b) \times (a + b) = a \times b + b \times a$, leads to the anti-commutativity. Furthermore, in the case $d = n$ we call $\times$ a vector product to emphasize that the vector $a \times b$ lies in the same space as $a$ and $b$. In this situation we can further speak of orthogonality of the vector $a \times b$ to both $a$ and $b$. Massey [19] showed that, assuming these three properties, i.e. bilinearity, area property and orthogonality, a vector product exists only in dimension $n = 3$ or $n = 7$. However, there are many cross products, different from each other, depending on the properties one requires to hold. In the present paper we drop the orthogonality condition, since we consider the case $d = \frac{n(n-1)}{2}$, and introduce in (3.3) the generalized cross product by induction over the space dimension $n$. This is equivalent to the coordinate-wise expression from (3.1) but allows for a better understanding of the algebraic properties of the generalized cross product $\times_n$.
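For concreteness, the inductive construction can be prototyped numerically. The following NumPy sketch (the function names `cross_mat` and `cross_n` as well as the concrete enumeration of the components $\alpha_i \beta_j - \alpha_j \beta_i$ are our choices, not notation from the paper) checks the bilinearity and anti-commutativity, and, for $n = 3$, that the entries agree with those of the usual cross product up to order and sign:

```python
import numpy as np

def cross_mat(a):
    """Matrix of the linear map b -> a x_n b, built inductively over n."""
    n = len(a)
    if n == 2:
        return np.array([[-a[1], a[0]]])          # a x_2 b = a1*b2 - a2*b1
    top = np.hstack([cross_mat(a[:-1]), np.zeros(((n - 1) * (n - 2) // 2, 1))])
    bot = np.hstack([-a[-1] * np.eye(n - 1), a[:-1][:, None]])
    return np.vstack([top, bot])

def cross_n(a, b):
    return cross_mat(a) @ b

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 5))             # three random vectors in R^5
lam = 2.7
anti = np.linalg.norm(cross_n(a, b) + cross_n(b, a))                      # anti-commutativity
bilin = np.linalg.norm(cross_n(a, lam * b + c) - lam * cross_n(a, b) - cross_n(a, c))

# for n = 3 the components coincide with those of the usual cross product
# up to order and sign: here a x_3 b = (c3, -c2, c1) with c = a x b
u, v = rng.standard_normal((2, 3))
cu = np.cross(u, v)
perm = np.linalg.norm(cross_n(u, v) - np.array([cu[2], -cu[1], cu[0]]))
```

The particular permutation relating `cross_n` to `np.cross` depends on the chosen enumeration of the index pairs and is only meant as an illustration.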

Lagrange identity
In three dimensions, Lagrange's identity reads, in terms of the usual cross product and the scalar product,

$$\langle a \times b, c \times d \rangle = \langle a, c \rangle \langle b, d \rangle - \langle a, d \rangle \langle b, c \rangle, \tag{3.10}$$

and for $c = a$ and $d = b$ becomes $\lVert a \times b \rVert^2 = \lVert a \rVert^2 \lVert b \rVert^2 - \langle a, b \rangle^2$, meaning that the length of the vector $a \times b \in \mathbb{R}^3$ is equal to the area of the parallelogram spanned by the vectors $a, b \in \mathbb{R}^3$. In higher dimensions, the inductive definition (3.3) can be used to deduce directly an analogue of Lagrange's identity, namely

$$\langle a \times_n b, c \times_n d \rangle = \langle a, c \rangle \langle b, d \rangle - \langle a, d \rangle \langle b, c \rangle. \tag{3.12}$$

Indeed, in dimension $n = 2$ this is verified by direct computation, and writing $a = (\overline{a}, \alpha_n)^T$, $b = (\overline{b}, \beta_n)^T$, $c = (\overline{c}, \gamma_n)^T$, $d = (\overline{d}, \delta_n)^T$ and expanding both sides via (3.3), the identity (3.12) follows by induction over $n \in \mathbb{N}$, $n \ge 2$. Especially for $c = a$ and $d = b$ we obtain for the squared norm of the generalized cross product

$$\lVert a \times_n b \rVert^2 = \lVert a \rVert^2 \lVert b \rVert^2 - \langle a, b \rangle^2, \tag{3.13}$$

meaning that the length of the vector $a \times_n b \in \mathbb{R}^{\frac{n(n-1)}{2}}$ is equal to the area of the parallelogram spanned by the vectors $a, b \in \mathbb{R}^n$. Consequently, two (non-zero) vectors $a, b \in \mathbb{R}^n$ are linearly dependent (and thus parallel) if and only if $a \times_n b = 0$.
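The generalized Lagrange identity (3.12) and the area property can be spot-checked numerically across several dimensions (a sketch; `cross_mat`/`cross_n` are our names for one concrete realization of the inductive construction):

```python
import numpy as np

def cross_mat(a):
    """Matrix of the linear map b -> a x_n b, built inductively over n."""
    n = len(a)
    if n == 2:
        return np.array([[-a[1], a[0]]])
    top = np.hstack([cross_mat(a[:-1]), np.zeros(((n - 1) * (n - 2) // 2, 1))])
    bot = np.hstack([-a[-1] * np.eye(n - 1), a[:-1][:, None]])
    return np.vstack([top, bot])

def cross_n(a, b):
    return cross_mat(a) @ b

rng = np.random.default_rng(1)
max_err = 0.0
for n in range(2, 8):
    a, b, c, d = rng.standard_normal((4, n))
    # generalized Lagrange identity
    lhs = cross_n(a, b) @ cross_n(c, d)
    rhs = (a @ c) * (b @ d) - (a @ d) * (b @ c)
    max_err = max(max_err, abs(lhs - rhs))
    # area property: |a x_n b|^2 = |a|^2 |b|^2 - <a,b>^2
    area = (a @ a) * (b @ b) - (a @ b) ** 2
    max_err = max(max_err, abs(cross_n(a, b) @ cross_n(a, b) - area))
```

Since every combination $\alpha_i \beta_j - \alpha_j \beta_i$ occurs exactly once, the identity is independent of the chosen ordering and signs of the components.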

Matrix representation
It is well known that an identification of the usual cross product $\times$ with an adequate matrix multiplication facilitates some of the common proofs in vector algebra and allows one to extend the cross product of vectors to a cross product of a vector and a matrix, cf. [22, 10, 25, 26, 15]. Our next goal is to achieve a similar identification of the generalized cross product $\times_n$ with a corresponding matrix multiplication. Indeed, since for fixed $a \in \mathbb{R}^n$ the operation $a \times_n \cdot$ is linear in the second component, there exists a unique matrix, denoted by $a_{\times_n} \in \mathbb{R}^{\frac{n(n-1)}{2} \times n}$, such that

$$a \times_n b = a_{\times_n}\, b \quad \text{for all } b \in \mathbb{R}^n. \tag{3.14}$$

In view of (3.3) the matrices $\cdot_{\times_n}$ can be characterized inductively, and for $a = (\overline{a}, \alpha_n)^T$ the matrix $a_{\times_n}$ has the form

$$a_{\times_n} = \begin{pmatrix} \overline{a}_{\times_{n-1}} & 0 \\ -\alpha_n \cdot I_{n-1} & \overline{a} \end{pmatrix}, \qquad a_{\times_2} = (-\alpha_2, \alpha_1). \tag{3.15}$$

The entries of the generalized cross product $a \times_3 b$, with $a, b \in \mathbb{R}^3$, are permutations (with a sign) of the entries of the classical cross product $a \times b$. Remember that the operation $a \times \cdot$ can be identified with the left multiplication by the following skew-symmetric matrix

$$\operatorname{Anti}(a) := \begin{pmatrix} 0 & -\alpha_3 & \alpha_2 \\ \alpha_3 & 0 & -\alpha_1 \\ -\alpha_2 & \alpha_1 & 0 \end{pmatrix},$$

which differs from the expression

$$a_{\times_3} = \begin{pmatrix} -\alpha_2 & \alpha_1 & 0 \\ -\alpha_3 & 0 & \alpha_1 \\ 0 & -\alpha_3 & \alpha_2 \end{pmatrix} \tag{3.16}$$

for $a = (\alpha_1, \alpha_2, \alpha_3)^T$, and also from the matrix $A_3(a)$ used in [17]. Thus, in three dimensions, it holds for the usual cross product $a \times b = \operatorname{Anti}(a)\, b$. Also the notations $T_a$, $W(a)$ or even $[a]_\times$ are used for $\operatorname{Anti}(a)$; the latter emphasizes that we deal with a skew-symmetric matrix. For the analysis and the properties of such matrices we refer to [22, 10, 25, 26, 15].
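The matrix representation (3.14) can be examined numerically: the sketch below (function names ours) checks that the representing matrix annihilates $a$ itself, i.e. $a \times_n a = 0$, and that it has rank $n - 1$ for $a \ne 0$, in contrast to the square skew-symmetric $\operatorname{Anti}(a)$ in three dimensions:

```python
import numpy as np

def cross_mat(a):
    """Matrix a_{xn} with a x_n b = cross_mat(a) @ b, built inductively."""
    n = len(a)
    if n == 2:
        return np.array([[-a[1], a[0]]])
    top = np.hstack([cross_mat(a[:-1]), np.zeros(((n - 1) * (n - 2) // 2, 1))])
    bot = np.hstack([-a[-1] * np.eye(n - 1), a[:-1][:, None]])
    return np.vstack([top, bot])

def Anti(a):
    """Skew-symmetric matrix of the usual 3D cross product."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

rng = np.random.default_rng(2)
a, b = rng.standard_normal((2, 3))
err_anti = np.linalg.norm(Anti(a) @ b - np.cross(a, b))   # a x b = Anti(a) b

ranks, kernel = [], 0.0
for n in range(2, 7):
    v = rng.standard_normal(n)
    M = cross_mat(v)                      # shape (n(n-1)/2, n)
    kernel = max(kernel, np.linalg.norm(M @ v))   # v x_n v = 0
    ranks.append(np.linalg.matrix_rank(M))        # expected: n - 1
```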

Scalar triple product
In the case of the usual cross product in three dimensions, the scalar triple product remains unchanged under a circular shift of the three vectors (from the same space): $\langle a \times b, c \rangle = \langle b \times c, a \rangle = \langle c \times a, b \rangle$. For the generalized cross product it does not make sense to think of an analogue of a scalar triple product with three vectors coming from the same vector space, but rather instead

$$\langle a \times_n b, c \rangle = \langle a_{\times_n} b, c \rangle = \langle b, a_{\times_n}^T c \rangle \quad \text{for } a, b \in \mathbb{R}^n,\ c \in \mathbb{R}^{\frac{n(n-1)}{2}}.$$

Note the slight difference from the case of the usual cross product. The latter can be represented by a left multiplication with a square skew-symmetric matrix, whereas the generalized cross product is represented by matrices of the form (3.15), which are neither square (except in the case $n = 3$) nor skew-symmetric. These matrices $\cdot_{\times_n}$ also appear in further identities involving the generalized cross product and are very important in the subsequent considerations.

Grassmann identity
In three dimensions, the usual vector triple product fulfills

$$a \times (b \times c) = \operatorname{Anti}(a)\,(b \times c) \overset{*}{=} \langle a, c \rangle \cdot b - \langle a, b \rangle \cdot c, \tag{3.22}$$

where the relation to scalar products, marked by $*$, is referred to as Grassmann identity. However, in a generalization of the vector triple product we cannot expect the double appearance of the generalized cross product, but focus on the matrices $\cdot_{\times_n}$, as in the generalization of the scalar triple product. Thus, as a generalization of Grassmann's identity we obtain for $a, b, c \in \mathbb{R}^n$

$$a_{\times_n}^T (b \times_n c) = \langle a, b \rangle \cdot c - \langle a, c \rangle \cdot b = (c \otimes b - b \otimes c)\, a. \tag{3.23}$$

It remains to prove the first equality $(3.23)_1$. In dimension $n = 2$ this follows by direct computation, and writing the vectors in the form $a = (\overline{a}, \alpha_n)^T$ etc. and expanding via (3.15), the identity $(3.23)_1$ follows by induction over $n \in \mathbb{N}$, $n \ge 2$.
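The generalized Grassmann identity can be verified numerically for a range of dimensions; the sketch also recalls the usual three-dimensional Grassmann identity for comparison (helper names ours):

```python
import numpy as np

def cross_mat(a):
    """Matrix a_{xn} with a x_n b = cross_mat(a) @ b, built inductively."""
    n = len(a)
    if n == 2:
        return np.array([[-a[1], a[0]]])
    top = np.hstack([cross_mat(a[:-1]), np.zeros(((n - 1) * (n - 2) // 2, 1))])
    bot = np.hstack([-a[-1] * np.eye(n - 1), a[:-1][:, None]])
    return np.vstack([top, bot])

rng = np.random.default_rng(3)
err = 0.0
for n in range(2, 7):
    a, b, c = rng.standard_normal((3, n))
    lhs = cross_mat(a).T @ (cross_mat(b) @ c)       # a^T_{xn} (b x_n c)
    rhs = (a @ b) * c - (a @ c) * b
    err = max(err, np.linalg.norm(lhs - rhs))

# the usual Grassmann identity in three dimensions for comparison
u, v, w = rng.standard_normal((3, 3))
err3 = np.linalg.norm(np.cross(u, np.cross(v, w)) - ((u @ w) * v - (u @ v) * w))
```

Note the reversed sign pattern compared with $a \times (b \times c)$, which reflects the missing skew-symmetry of the matrices $\cdot_{\times_n}$.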

Jacobi identity
In three dimensions, the usual cross product satisfies the Jacobi identity

$$a \times (b \times c) + b \times (c \times a) + c \times (a \times b) = 0, \tag{3.27}$$

which follows directly from the usual Grassmann identity (3.22) for the usual vector triple product. Similarly, having established the generalization (3.23) of Grassmann's identity involving the generalized cross product $\times_n$ in the previous section, we obtain the following generalization of Jacobi's identity:

$$a_{\times_n}^T (b \times_n c) + b_{\times_n}^T (c \times_n a) + c_{\times_n}^T (a \times_n b) = 0, \tag{3.28}$$

or, equivalently, by (3.14): $a_{\times_n}^T b_{\times_n} c + b_{\times_n}^T c_{\times_n} a + c_{\times_n}^T a_{\times_n} b = 0$, since the six scalar-product terms from (3.23) cancel in pairs under the cyclic sum. Surely, the relation (3.23) can also be used to obtain (3.12).
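The cyclic cancellation behind the generalized Jacobi identity (3.28) is easy to confirm numerically (a sketch; helper names ours):

```python
import numpy as np

def cross_mat(a):
    """Matrix a_{xn} with a x_n b = cross_mat(a) @ b, built inductively."""
    n = len(a)
    if n == 2:
        return np.array([[-a[1], a[0]]])
    top = np.hstack([cross_mat(a[:-1]), np.zeros(((n - 1) * (n - 2) // 2, 1))])
    bot = np.hstack([-a[-1] * np.eye(n - 1), a[:-1][:, None]])
    return np.vstack([top, bot])

def cross_n(a, b):
    return cross_mat(a) @ b

rng = np.random.default_rng(4)
err = 0.0
for n in range(2, 8):
    a, b, c = rng.standard_normal((3, n))
    s = (cross_mat(a).T @ cross_n(b, c)
         + cross_mat(b).T @ cross_n(c, a)
         + cross_mat(c).T @ cross_n(a, b))
    err = max(err, np.linalg.norm(s))   # cyclic sum vanishes
```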

Cross product with a matrix
Furthermore, by the anti-commutativity the generalized cross product can be written as $a \times_n b = a_{\times_n} b = -\,b_{\times_n} a$. This allows us to define a generalized cross product of a matrix $P \in \mathbb{R}^{m \times n}$ with a vector $b \in \mathbb{R}^n$ from the right, and of a vector $b \in \mathbb{R}^n$ with a matrix $B \in \mathbb{R}^{n \times m}$ from the left, where $m \in \mathbb{N}$, via

$$P \times_n b := -P\, b_{\times_n}^T \in \mathbb{R}^{m \times \frac{n(n-1)}{2}} \quad \text{seen as row-wise cross product}, \tag{3.30a}$$

$$b \times_n B := b_{\times_n} B \in \mathbb{R}^{\frac{n(n-1)}{2} \times m} \quad \text{seen as column-wise cross product}, \tag{3.30b}$$

and they are connected via $(P \times_n b)^T = -\,b \times_n P^T$. So, especially for the identity matrix $P = I_n$, we obtain $I_n \times_n b = -\,b_{\times_n}^T$ and $b \times_n I_n = b_{\times_n}$. Moreover, for $a \in \mathbb{R}^m$ and $b, c \in \mathbb{R}^n$ it follows that

$$(a \otimes b) \times_n c = a \otimes (b \times_n c), \tag{3.33}$$

and especially for $c = b$: $(a \otimes b) \times_n b = a \otimes (b \times_n b) = 0$.

Another vector triple
Already in the scalar triple product we came across the expression $b_{\times_n}^T a \in \mathbb{R}^n$ for $a \in \mathbb{R}^{\frac{n(n-1)}{2}}$. Hence, we may also consider the vector triple product $c \times_n (b_{\times_n}^T a)$ for $a \in \mathbb{R}^{\frac{n(n-1)}{2}}$ and $b, c \in \mathbb{R}^n$. Again, for the usual cross product the corresponding relations to (3.23) and (3.33) coincide, whereas the situation is different for the generalized cross product due to the non-symmetry of the matrices $\cdot_{\times_n}$. The inductive view $(3.15)_1$ on the matrix appearing in (3.33) yields the corresponding expansions for all $a, b \in \mathbb{R}^n$, and in particular for $a = b$. Moreover, we may also consider the matrix multiplication

$$a_{\times_n} b_{\times_n}^T \in \mathbb{R}^{\frac{n(n-1)}{2} \times \frac{n(n-1)}{2}} \tag{3.34}$$

and, like in (3.30), related by transposition, also $b_{\times_n}^T(\cdot)$ for an $\left(\frac{n(n-1)}{2} \times m\right)$-matrix.

Room identity
Surely, the considerations in the previous subsections were inspired by the corresponding relations known for the usual cross product. So, from the usual Grassmann identity (3.22) one can deduce the usual Jacobi (3.27) and Lagrange (3.10) identities. Moreover, the usual Grassmann identity (3.22) for the vector triple in three dimensions also allows one to conclude that

$$\operatorname{Anti}(a) \operatorname{Anti}(b) = b \otimes a - \langle a, b \rangle \cdot I_3 \quad \text{for all } a, b \in \mathbb{R}^3. \tag{3.37}$$

This algebraic relation is already contained in [22, p. 691 (ii)]. For this reason let us call it Room identity. The relation (3.37) has turned out to be very important also from an application point of view, cf. [15, 16] and the references contained therein.
Returning to the $n$-dimensional case, we have for arbitrary $a, b \in \mathbb{R}^n$, as an analogue of Room's identity,

$$a_{\times_n}^T b_{\times_n} = \langle a, b \rangle \cdot I_n - b \otimes a, \tag{3.39}$$

and especially for $a = b$:

$$b_{\times_n}^T b_{\times_n} = \lVert b \rVert^2 \cdot I_n - b \otimes b. \tag{3.40}$$

Note that, compared with (3.37), the minus sign is missing in the generalized Room identity (3.39), due to the lack of skew-symmetry of the matrix $a_{\times_n}^T$. Interchanging the roles of $a$ and $b$ in (3.39), or simply transposing, we further deduce that

$$b_{\times_n}^T a_{\times_n} = \langle a, b \rangle \cdot I_n - a \otimes b. \tag{3.42}$$

Since $\operatorname{tr}(a \otimes b) = \langle a, b \rangle$, the expression (3.39) shows that the entries of $a_{\times_n}^T b_{\times_n}$ are linear combinations of the entries of the dyadic product $a \otimes b$. Also the converse holds true:

$$b \otimes a = \frac{\operatorname{tr}\big(a_{\times_n}^T b_{\times_n}\big)}{n-1} \cdot I_n - a_{\times_n}^T b_{\times_n},$$

where we leave it as a short exercise for the reader to verify (e.g. by induction) that

$$\operatorname{tr}\big(a_{\times_n}^T b_{\times_n}\big) = (n - 1) \cdot \langle a, b \rangle. \tag{3.43}$$

Recall that the matrix $\operatorname{Anti}(\cdot)$ associated with the usual cross product $\times$ in $\mathbb{R}^3$ is a (skew-symmetric) square matrix, while the matrix $\cdot_{\times_n}$ associated with the generalized cross product $\times_n$ is an $\left(\frac{n(n-1)}{2} \times n\right)$-matrix and therefore a square matrix only in the case $n = 3$. Hence, in contrast to the situation in Room's identity (3.37), we may also interchange the matrices in its $n$-dimensional analogue (3.39), i.e. consider the expression in (3.34).
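The generalized Room identity (3.39), its special case (3.40), the trace relation, and the sign difference to the three-dimensional case can all be checked numerically (a sketch; helper names ours):

```python
import numpy as np

def cross_mat(a):
    """Matrix a_{xn} with a x_n b = cross_mat(a) @ b, built inductively."""
    n = len(a)
    if n == 2:
        return np.array([[-a[1], a[0]]])
    top = np.hstack([cross_mat(a[:-1]), np.zeros(((n - 1) * (n - 2) // 2, 1))])
    bot = np.hstack([-a[-1] * np.eye(n - 1), a[:-1][:, None]])
    return np.vstack([top, bot])

def Anti(a):
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

rng = np.random.default_rng(5)
err = tr_err = 0.0
for n in range(2, 7):
    a, b = rng.standard_normal((2, n))
    R = cross_mat(a).T @ cross_mat(b)
    err = max(err, np.linalg.norm(R - ((a @ b) * np.eye(n) - np.outer(b, a))))
    tr_err = max(tr_err, abs(np.trace(R) - (n - 1) * (a @ b)))

# in three dimensions the sign is reversed: Anti(a) Anti(b) = b (x) a - <a,b> I_3
u, v = rng.standard_normal((2, 3))
err3 = np.linalg.norm(Anti(u) @ Anti(v) - (np.outer(v, u) - (u @ v) * np.eye(3)))
```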
Returning to the usual Room identity, we write

$$\operatorname{Anti}(a) \operatorname{Anti}(b) = L(a \otimes b) \quad \text{and} \quad a \otimes b = L(\operatorname{Anti}(a) \operatorname{Anti}(b)), \tag{3.44a}$$

denoting by $L(\cdot)$ a corresponding linear operator with constant coefficients, not necessarily the same in any two places here and in the following. On the one hand, we associate with the matrix $\operatorname{Anti}(\cdot)$ a representation of the usual cross product. Room's identity can be generalized to higher dimensions in three different ways. We have already seen in (3.39) and (3.42) the extension

$$a_{\times_n}^T b_{\times_n} = L(a \otimes b) \quad \text{and} \quad a \otimes b = L\big(a_{\times_n}^T b_{\times_n}\big). \tag{3.44b}$$

However, a similar result also holds true for the generalized cross product of the matrix $a_{\times_n}$, coming from the matrix representation of the generalized cross product, with a vector, see [17]:

$$a_{\times_n} \times_n b = L(a \otimes b) \quad \text{and} \quad a \otimes b = L(a_{\times_n} \times_n b). \tag{3.44c}$$

These relations also apply to the case of $a_{\times_n} b_{\times_n}^T = -\,a_{\times_n} \times_n b$, which for $n = 2$ is only a scalar, so that the last relation in (3.44c) is only valid for $n \ge 3$.
On the other hand, Room's identity in three dimensions can also be seen as an expression for the cross product of a skew-symmetric matrix with a vector:

$$A \times b = L(\operatorname{axl}(A) \otimes b) \quad \text{and} \quad \operatorname{axl}(A) \otimes b = L(A \times b) \quad \text{for } A \in \mathfrak{so}(3),\ b \in \mathbb{R}^3,$$

where $\operatorname{axl} : \mathfrak{so}(3) \to \mathbb{R}^3$ denotes the inverse of $\operatorname{Anti}(\cdot)$. It is interesting that a similar result holds true for skew-symmetric $(n \times n)$-matrices in all dimensions $n \ge 2$, see [17]:

$$A \times_n b = L(a_n(A) \otimes b) \quad \text{and} \quad a_n(A) \otimes b = L(A \times_n b) \quad \text{for } A \in \mathfrak{so}(n),\ b \in \mathbb{R}^n, \tag{3.44d}$$

where $a_n(\cdot)$ denotes the $n$-dimensional analogue of $\operatorname{axl}(\cdot)$.

Remark 3.4. We have seen that Room's identity (3.37) admits three different generalizations (3.44b), (3.44c), (3.44d) to higher dimensions, which coincide in three dimensions when considering the usual cross product and the matrix associated with it, since the latter is a skew-symmetric (square) matrix. In contrast, Grassmann's and Jacobi's identities generalize only in the ways presented in (3.23) and (3.28). Indeed, these relations are comparable to the situation in three dimensions when considering the usual vector triple product $a \times (b \times c) = \operatorname{Anti}(a)\,(b \times c)$, since $\operatorname{Anti}(a)^T = -\operatorname{Anti}(a)$.

Simultaneous cross product
Of special interest is a simultaneous cross product of a square matrix $P \in \mathbb{R}^{n \times n}$ and a vector $b \in \mathbb{R}^n$ from both sides:

$$b \times_n P \times_n b = -\,b_{\times_n}\, P\, b_{\times_n}^T \in \mathbb{R}^{\frac{n(n-1)}{2} \times \frac{n(n-1)}{2}}, \tag{3.45}$$

where, due to the associativity of matrix multiplication, we can omit the parentheses. Since $(b \times_n P \times_n b)^T = b \times_n P^T \times_n b$, it follows for $S \in \mathrm{Sym}(n)$ and $A \in \mathfrak{so}(n)$ immediately that $b \times_n S \times_n b \in \mathrm{Sym}\big(\tfrac{n(n-1)}{2}\big)$ and $b \times_n A \times_n b \in \mathfrak{so}\big(\tfrac{n(n-1)}{2}\big)$, and for all $P \in \mathbb{R}^{n \times n}$:

$$\operatorname{sym}(b \times_n P \times_n b) = b \times_n (\operatorname{sym} P) \times_n b, \qquad \operatorname{skew}(b \times_n P \times_n b) = b \times_n (\operatorname{skew} P) \times_n b.$$

Furthermore, since $b_{\times_n} b = b \times_n b = 0$, we obtain

$$b \times_n (a \otimes b) \times_n b = 0 = b \times_n (b \otimes a) \times_n b \quad \text{for all } a, b \in \mathbb{R}^n. \tag{3.48}$$

Moreover, for a square matrix $P \in \mathbb{R}^{\frac{n(n-1)}{2} \times \frac{n(n-1)}{2}}$ and a vector $b \in \mathbb{R}^n$ we obtain the expression

$$b_{\times_n}^T P\, b_{\times_n} \in \mathbb{R}^{n \times n}, \tag{3.49}$$

which has comparable properties to the simultaneous cross product above; for instance, $(b_{\times_n}^T P\, b_{\times_n})^T = b_{\times_n}^T P^T b_{\times_n}$, so that symmetric and skew-symmetric parts are again respected. And for the identity matrix $P = I_{\frac{n(n-1)}{2}}$ we recover (3.40). Again, the corresponding expressions to (3.45) and (3.49) coming from the usual cross product in three dimensions just coincide, since $\operatorname{Anti}(b)^T = -\operatorname{Anti}(b)$.

Differential operators
Let us now come back to the interplay between a linear homogeneous differential operator with constant coefficients and its symbol, thus replacing $b$ by the vector differential operator $\nabla$ in the algebraic relations presented in the previous sections. For that purpose, let $\Omega \subseteq \mathbb{R}^n$ be open and $n, m \in \mathbb{N}$, $n \ge 2$. As usual, the derivative and the divergence of a vector field rely on the dyadic product and the scalar product, respectively:

$$\mathrm{D}\, a := a \otimes \nabla, \qquad \operatorname{div} a := \langle \nabla, a \rangle \quad \text{for } a \in C^\infty_c(\Omega, \mathbb{R}^n),$$

where the latter can be generalized to a matrix divergence in a row-wise way: $\operatorname{Div} P := P\, \nabla$ for $P \in C^\infty_c(\Omega, \mathbb{R}^{m \times n})$. In three dimensions, the usual curl is seen as

$$\operatorname{curl} a := a \times (-\nabla) = \nabla \times a = \operatorname{Anti}(\nabla)\, a = 2 \cdot \operatorname{axl}(\operatorname{skew} \mathrm{D}\, a) \quad \text{for } a \in C^\infty_c(\Omega, \mathbb{R}^3),\ n = 3.$$

Similarly, in arbitrary dimension $n \ge 2$ the generalized curl is related to the generalized cross product via

$$\operatorname{curl}_n a := a \times_n (-\nabla) = \nabla \times_n a = \nabla_{\times_n} a \overset{(3.9)}{=} 2 \cdot a_n(\operatorname{skew} \mathrm{D}\, a) \in C^\infty_c\big(\Omega, \mathbb{R}^{\frac{n(n-1)}{2}}\big) \quad \text{for } a \in C^\infty_c(\Omega, \mathbb{R}^n), \tag{4.4}$$

where the last expression is the one usually considered in index notation to introduce the generalized curl. Furthermore, we consider the new differential operation

$$\nabla_{\times_n}^T a \quad \text{for } a \in C^\infty_c\big(\Omega, \mathbb{R}^{\frac{n(n-1)}{2}}\big), \tag{4.5}$$

which differs from the usual curl, and from $\operatorname{curl}_{\frac{n(n-1)}{2}} a$, also in the three-dimensional case. To the best of our knowledge, the operator $\nabla_{\times_n}^T : C^\infty_c\big(\Omega, \mathbb{R}^{\frac{n(n-1)}{2}}\big) \to C^\infty_c(\Omega, \mathbb{R}^n)$ has not received any attention in the literature so far, not even in index notation. However, this differential operator plays the counterpart in the integration by parts formula for the generalized $\operatorname{curl}_n$, see (4.31a) below. This adjoint differential operator appears here because the matrix associated with the generalized cross product has no symmetry.
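Because the relevant identities are polynomial in commuting partial derivatives, they can be illustrated exactly with periodic central differences (the shift operators commute). The following sketch (our discretization and our enumeration of the index pairs) demonstrates $\operatorname{curl}_n \nabla \zeta \equiv 0$ and the summation-by-parts duality between $\operatorname{curl}_n$ and $\nabla_{\times_n}^T$ that underlies (4.31a):

```python
import numpy as np

def D(f, axis, h=1.0):
    """Central difference with periodic boundary; the D(., axis) commute."""
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2 * h)

n = 3
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]  # our pair enumeration

def curl_n(a):
    """(curl_n a)_(i,j) = d_i a_j - d_j a_i, one component per index pair."""
    return [D(a[j], i) - D(a[i], j) for (i, j) in pairs]

def nablaT(g):
    """Discrete counterpart of nabla^T_{xn} applied to a field with n(n-1)/2 components."""
    out = []
    for p in range(n):
        s = 0.0
        for idx, (i, j) in enumerate(pairs):
            if i == p:
                s = s - D(g[idx], j)
            if j == p:
                s = s + D(g[idx], i)
        out.append(s)
    return out

rng = np.random.default_rng(6)
zeta = rng.standard_normal((8,) * n)
grad = [D(zeta, i) for i in range(n)]
curl_of_grad = max(np.max(np.abs(c)) for c in curl_n(grad))   # curl_n(grad zeta) = 0

# duality on the periodic grid: sum <curl_n a, g> = - sum <a, nablaT g>
a = rng.standard_normal((n,) + (8,) * n)
g = rng.standard_normal((len(pairs),) + (8,) * n)
lhs = sum(np.sum(c * gi) for c, gi in zip(curl_n(a), g))
rhs = -sum(np.sum(ai * vi) for ai, vi in zip(a, nablaT(g)))
dual_err = abs(lhs - rhs)
```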
Furthermore, it is the matrix representation of the cross product which allows us to introduce also a row-wise generalized matrix curl operator

$$\operatorname{curl}_n P := P \times_n (-\nabla) \quad \text{for } P \in C^\infty_c(\Omega, \mathbb{R}^{m \times n}),$$

which is connected by transposition to the column-wise differential operation $\nabla \times_n B = \nabla_{\times_n} B$ for $B \in C^\infty_c(\Omega, \mathbb{R}^{n \times m})$, and like in the three-dimensional setting can be referred to as $\operatorname{curl}_n^T$. Moreover, the matrix representation of the curl operation also offers a further differential operator $(\cdot)\, \nabla_{\times_n}$ for $\left(m \times \frac{n(n-1)}{2}\right)$-matrix fields, i.e. the row-wise differentiation from (4.5), and again, related by transposition, also $\nabla_{\times_n}^T(\cdot)$ for $\left(\frac{n(n-1)}{2} \times m\right)$-matrix fields.
Surely, it follows from (3.32b) that these matrix operations are consistent with the vector operations from (4.4) and (4.5). And, as the analogue of the usual $\operatorname{div} \circ \operatorname{curl} \equiv 0$, we have in $n$ dimensions:

$$\operatorname{div}\big(\nabla_{\times_n}^T a\big) \equiv 0 \quad \text{for all } a \in C^\infty_c\big(\Omega, \mathbb{R}^{\frac{n(n-1)}{2}}\big), \tag{4.12}$$

since, algebraically, $b^T b_{\times_n}^T = (b_{\times_n} b)^T = (b \times_n b)^T = 0$. We recall the following definition.
Definition 4.1. Let $\Omega \subseteq \mathbb{R}^n$ be open. A linear homogeneous differential operator $\mathcal{A}$ with constant coefficients is called elliptic if its symbol $\mathbb{A}(b)$ is injective for all $b \in \mathbb{R}^n \setminus \{0\}$.

It follows from $b \times b = 0$ for $b \in \mathbb{R}^3$ that the usual curl operator is not elliptic. Similarly, also the generalized $\operatorname{curl}_n$ is not elliptic, since its symbol $b_{\times_n}$ annihilates $b$ itself.
To see that $\nabla_{\times_n}^T$ is not elliptic for all $n \ge 3$, observe that its symbol $b_{\times_n}^T \in \mathbb{R}^{n \times \frac{n(n-1)}{2}}$ has rank $n - 1$ for $b \ne 0$ (by (3.40)), while $n - 1 < \frac{n(n-1)}{2}$ for $n \ge 3$, so that there exists $a \ne 0$ with $b_{\times_n}^T a = 0$. This gives the non-ellipticity of $\nabla_{\times_3}^T$, and the non-ellipticity in the higher dimensional cases also follows from the inductive structure (3.15).

Nye formulas
Denoting by curl the matrix curl operator related to the usual curl for vector fields in $\mathbb{R}^3$, Room's identity (3.44a) becomes, after interchanging $b$ by $\nabla$,

$$\operatorname{curl}(\operatorname{Anti}(a)) = L(\mathrm{D}\, a) \quad \text{and} \quad \mathrm{D}\, a = L(\operatorname{curl} \operatorname{Anti}(a)) \quad \text{for } a \in C^\infty_c(\Omega, \mathbb{R}^3), \tag{4.14a}$$

where $\Omega \subseteq \mathbb{R}^3$ is open for the moment. More precisely, these relations read

$$\operatorname{curl}(\operatorname{Anti}(a)) = \operatorname{div} a \cdot I_3 - (\mathrm{D}\, a)^T \tag{4.14b}$$

and

$$\mathrm{D}\, a = \frac{\operatorname{tr}(\operatorname{curl} \operatorname{Anti}(a))}{2} \cdot I_3 - (\operatorname{curl} \operatorname{Anti}(a))^T \tag{4.14c}$$

and are better known as Nye's formulas [20, eq. (7)]. Surely, $(4.14a)_1$ is not surprising at all, but $(4.14a)_2$ implies that the entries of the derivative of a skew-symmetric matrix field are linear combinations of the entries of its matrix curl:

$$\mathrm{D}(\operatorname{axl} A) = L(\operatorname{curl} A) \quad \text{for } A \in C^\infty_c(\Omega, \mathfrak{so}(3)). \tag{4.14d}$$

Returning to the higher dimensional case, we conclude from (3.42) or (3.44b) the corresponding relations with $\nabla$ in place of $b$, and from (3.44c) the analogous statement for the generalized cross product of the representing matrix with $\nabla$. Note, however, that the latter expression is (in general) not related to $\operatorname{curl}_n a$. Finally, from (3.44d) we deduce that

$$\mathrm{D}\, a_n(A) = L(\operatorname{curl}_n A) \quad \text{for } A \in C^\infty_c(\Omega, \mathfrak{so}(n)),$$

which implies (4.14d) in all dimensions $n \ge 2$: the entries of the derivative of a skew-symmetric matrix field are linear combinations of the entries of its generalized matrix curl, a relation that is usually derived in index notation.
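Nye's formulas (4.14b) and (4.14c) are first-order identities with constant coefficients, so they hold exactly for commuting periodic central differences; the following sketch (our discretization, with the row-wise matrix curl and $(\mathrm{D}a)_{ij} = \partial_j a_i$) verifies both on a random sampled field:

```python
import numpy as np

def D(f, axis, h=1.0):
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2 * h)

def curl3(v):
    """Usual curl of a vector field sampled on a periodic grid."""
    return np.stack([D(v[2], 1) - D(v[1], 2),
                     D(v[0], 2) - D(v[2], 0),
                     D(v[1], 0) - D(v[0], 1)])

rng = np.random.default_rng(7)
a = rng.standard_normal((3, 8, 8, 8))          # vector field on a periodic grid
Da = np.stack([np.stack([D(a[i], j) for j in range(3)]) for i in range(3)])
diva = Da[0, 0] + Da[1, 1] + Da[2, 2]

# Anti(a) pointwise, and its row-wise matrix curl
Z = np.zeros_like(a[0])
AntiA = np.array([[Z, -a[2], a[1]], [a[2], Z, -a[0]], [-a[1], a[0], Z]])
C = np.stack([curl3(AntiA[i]) for i in range(3)])   # curl Anti(a)

I = np.eye(3)
# (4.14b): curl Anti(a) = div a * I_3 - (D a)^T
nye_b = max(np.max(np.abs(C[i][j] - (diva * I[i, j] - Da[j, i])))
            for i in range(3) for j in range(3))
# (4.14c): D a = tr(curl Anti(a))/2 * I_3 - (curl Anti(a))^T
trC = C[0][0] + C[1][1] + C[2][2]
nye_c = max(np.max(np.abs(Da[i][j] - (trC / 2 * I[i, j] - C[j][i])))
            for i in range(3) for j in range(3))
```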

Incompatibility operator
In three dimensions, the incompatibility operator inc is usually defined via $\operatorname{inc} P := \operatorname{curl}\big([\operatorname{curl} P]^T\big)$. In higher dimensions, for $P \in C^\infty_c(\Omega, \mathbb{R}^{n \times n})$ we consider the generalized incompatibility operator (where for simplicity we drop the transposition and the minus sign) given by

$$\operatorname{inc}_n P := \nabla \times_n P \times_n \nabla = -\,\nabla_{\times_n}\, P\, \nabla_{\times_n}^T. \tag{4.20}$$

It possesses the properties known from the usual incompatibility operator in three dimensions; namely, it follows from (3.46c) that

$$\operatorname{sym} \operatorname{inc}_n P = \operatorname{inc}_n \operatorname{sym} P \quad \text{and} \quad \operatorname{skew} \operatorname{inc}_n P = \operatorname{inc}_n \operatorname{skew} P \tag{4.22}$$

and from (3.48), for $a \in C^\infty_c(\Omega, \mathbb{R}^n)$:

$$\operatorname{inc}_n \mathrm{D}\, a = \operatorname{inc}_n (\mathrm{D}\, a)^T = \operatorname{inc}_n (\operatorname{sym} \mathrm{D}\, a) = \operatorname{inc}_n (\operatorname{skew} \mathrm{D}\, a) \equiv 0. \tag{4.23}$$

Furthermore, for matrix fields $P \in C^\infty_c\big(\Omega, \mathbb{R}^{\frac{n(n-1)}{2} \times \frac{n(n-1)}{2}}\big)$ we consider the new differential operation $\nabla_{\times_n}^T\, P\, \nabla_{\times_n}$, with similar properties to the generalized incompatibility operator. Especially for $\zeta \in C^\infty_c(\Omega, \mathbb{R})$ we obtain

$$\nabla_{\times_n}^T \big(\zeta \cdot I_{\frac{n(n-1)}{2}}\big)\, \nabla_{\times_n} \overset{(3.40)}{=} \Delta \zeta \cdot I_n - \mathrm{D}\, \nabla \zeta,$$

where we have used that, from an algebraic point of view, the Laplacian $\Delta = \nabla^2$ behaves like a scalar, and where $\mathrm{D}\, \nabla \zeta$ is the Hessian matrix of $\zeta$. The latter expression is reminiscent of the known identity in $n = 3$ dimensions for the usual incompatibility operator, $\operatorname{inc}(\zeta \cdot I_3) = \Delta \zeta \cdot I_3 - \mathrm{D}\, \nabla \zeta$. It will become clear from the integration by parts formula for the generalized curl (4.31b) how the operator $\nabla_{\times_n}^T (\cdot)\, \nabla_{\times_n}$ plays the counterpart in the corresponding integration by parts formula for the generalized incompatibility operator. For the corresponding formula in the usual three-dimensional case we refer to [1].

Remark 4.2. In three dimensions, the usual incompatibility operator inc occurs, e.g., in the modelling of dislocated crystals or in the modelling of elastic materials with dislocations, where the notion of incompatibility is at the basis of a new paradigm to describe the inelastic effects, see e.g. [12, 27, 28, 18, 1, 6]. The index-free view presented above should provide a better understanding of such phenomena also in higher dimensions.

Vector Laplacian
Recalling (3.40) we have for all $a, b \in \mathbb{R}^n$:

$$\lVert b \rVert^2 \cdot a = \langle b, a \rangle \cdot b + b_{\times_n}^T b_{\times_n}\, a.$$

Thus, interchanging $b$ by $\nabla$ we deduce

$$\Delta a = \nabla \operatorname{div} a + \nabla_{\times_n}^T \operatorname{curl}_n a \quad \text{for } a \in C^\infty_c(\Omega, \mathbb{R}^n), \tag{4.28}$$

which is the generalization of the known expression for the vector Laplacian in $n = 3$ dimensions:

$$\Delta a = \nabla \operatorname{div} a - \operatorname{curl} \operatorname{curl} a; \tag{4.29}$$

here the appearance of the minus sign comes from the fact that the matrix associated with the usual cross product is a skew-symmetric matrix.
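The decomposition (4.28) of the vector Laplacian is again a constant-coefficient identity in commuting derivatives, so it holds exactly for periodic central differences, provided the discrete Laplacian is taken as the composition $\sum_q D_q D_q$; the sketch below (our discretization and pair enumeration) checks it for $n = 2, 3$:

```python
import numpy as np

def D(f, axis, h=1.0):
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2 * h)

rng = np.random.default_rng(8)
err = 0.0
for n in (2, 3):
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    a = rng.standard_normal((n,) + (8,) * n)
    curl = [D(a[j], i) - D(a[i], j) for (i, j) in pairs]   # curl_n a
    div = sum(D(a[i], i) for i in range(n))
    for p in range(n):
        # (nabla^T_{xn} curl_n a)_p
        s = np.zeros_like(a[p])
        for idx, (i, j) in enumerate(pairs):
            if i == p:
                s = s - D(curl[idx], j)
            if j == p:
                s = s + D(curl[idx], i)
        # Laplacian as composition of central differences
        lap = sum(D(D(a[p], q), q) for q in range(n))
        err = max(err, np.max(np.abs(lap - (D(div, p) + s))))
```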
Since the matrix divergence and the matrix curl act row-wise, we obtain

$$\Delta P = \mathrm{D} \operatorname{div} P + (\operatorname{curl}_n P)\, \nabla_{\times_n} \quad \text{for } P \in C^\infty_c(\Omega, \mathbb{R}^{m \times n}), \tag{4.30}$$

for $m, n \in \mathbb{N}$, $n \ge 2$, meaning that the entries of the Laplacian of a matrix field $P$ are linear combinations of the entries of the derivative of the matrix curl and of the entries of the derivative of the matrix divergence.

Integration by parts
For the sake of completeness, we include the integration by parts formula for the generalized matrix curl. Let $\Omega \subset \mathbb{R}^n$ be an open and bounded set with Lipschitz boundary $\partial\Omega$ and outward unit normal $\nu$. For all $a \in C^1(\Omega, \mathbb{R}^n)$ and all $\tilde{a} \in C^1\big(\Omega, \mathbb{R}^{\frac{n(n-1)}{2}}\big)$ we have

$$\int_\Omega \langle \operatorname{curl}_n a, \tilde{a} \rangle + \langle a, \nabla_{\times_n}^T \tilde{a} \rangle \,\mathrm{d}x = \int_{\partial\Omega} \langle a \times_n (-\nu), \tilde{a} \rangle \,\mathrm{d}S, \tag{4.31a}$$

so that for matrix fields $P \in C^1(\Omega, \mathbb{R}^{m \times n})$ and $\tilde{P} \in C^1\big(\Omega, \mathbb{R}^{m \times \frac{n(n-1)}{2}}\big)$ it follows that

$$\int_\Omega \langle \operatorname{curl}_n P, \tilde{P} \rangle + \langle P, \tilde{P}\, \nabla_{\times_n} \rangle \,\mathrm{d}x = \int_{\partial\Omega} \langle P \times_n (-\nu), \tilde{P} \rangle \,\mathrm{d}S, \tag{4.31b}$$

and we refer to [17] for a coordinate-free proof for square matrix fields $P$.

Helmholtz decomposition
It is well known that any vector field $a \in C^\infty_c(\mathbb{R}^n, \mathbb{R}^n)$ admits a decomposition into a divergence-free vector field and a gradient field, i.e. a $\operatorname{curl}_n$-free part, see e.g. [7], and for a derivation from the Hodge decomposition see [11]. Let us denote the divergence-free part by $a_{\operatorname{div}}$ and the $\operatorname{curl}_n$-free part by $a_{\operatorname{curl}_n}$, so that $a = a_{\operatorname{div}} + a_{\operatorname{curl}_n}$. To conclude our vector calculus, we give explicit index-free expressions for these parts and thereby provide the Helmholtz decomposition explicitly in all dimensions $n \ge 2$. More precisely, we show that

$$a_{\operatorname{curl}_n}(x) = \nabla_x \int_{\mathbb{R}^n} G^{(n)}(x, y) \operatorname{div} a(y) \,\mathrm{d}y \tag{4.32a}$$

and

$$a_{\operatorname{div}}(x) = \nabla_{x\,\times_n}^T \int_{\mathbb{R}^n} G^{(n)}(x, y) \cdot \operatorname{curl}_n a(y) \,\mathrm{d}y = \frac{1}{n\,\omega_n} \int_{\mathbb{R}^n} \lVert x - y \rVert^{-n}\, (x - y)_{\times_n}^T \operatorname{curl}_n a(y) \,\mathrm{d}y, \tag{4.32c, d}$$

where $G^{(n)}(x, y)$ denotes the normalized fundamental solution of the Laplacian on the entire space $\mathbb{R}^n$, given by

$$G^{(n)}(x, y) = \begin{cases} \dfrac{1}{2\pi} \ln \lVert x - y \rVert, & \text{for } n = 2, \\[2mm] \dfrac{1}{n(2-n)\,\omega_n} \lVert x - y \rVert^{2-n}, & \text{for } n \ge 3, \end{cases} \tag{4.33}$$

denoting by $\omega_n$ the volume of the unit ball in $\mathbb{R}^n$, see [9, Section 2.4]. Indeed, the expressions in (4.32) follow from the decomposition of the vector Laplacian in (4.28), since for $a \in C^\infty_c(\mathbb{R}^n, \mathbb{R}^n)$ the Newtonian potential reproduces $a$; differentiating under the integral, using in $(*)$ that $\nabla_x G^{(n)}(x, y) = -\nabla_y G^{(n)}(x, y)$ and in $(**)$ the product rules

$$\operatorname{div}(\alpha \cdot a) = \langle \nabla \alpha, a \rangle + \alpha \operatorname{div} a \quad \text{and} \quad \operatorname{curl}_n(\alpha \cdot a) = \nabla \alpha \times_n a + \alpha \cdot \operatorname{curl}_n a \tag{4.34}$$

for $\alpha \in C^\infty_c(\Omega)$ and $a \in C^\infty_c(\Omega, \mathbb{R}^n)$, the derivatives can be moved onto $a$, so that $\operatorname{div} a_{\operatorname{div}} \equiv 0$ and $\operatorname{curl}_n a_{\operatorname{curl}_n} \equiv 0$ follow.
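A spectral analogue of this decomposition on the torus (our illustration, not the paper's whole-space Green-function formula) makes the role of (3.40) transparent: per Fourier mode $k \ne 0$, the gradient part is $k \langle k, \hat a \rangle / \lVert k \rVert^2$ and the divergence-free part is $k_{\times_n}^T k_{\times_n} \hat a / \lVert k \rVert^2$, which sum to $\hat a$ by (3.40):

```python
import numpy as np

def cross_mat(a):
    """Matrix a_{xn} with a x_n b = cross_mat(a) @ b, built inductively."""
    n = len(a)
    if n == 2:
        return np.array([[-a[1], a[0]]])
    top = np.hstack([cross_mat(a[:-1]), np.zeros(((n - 1) * (n - 2) // 2, 1))])
    bot = np.hstack([-a[-1] * np.eye(n - 1), a[:-1][:, None]])
    return np.vstack([top, bot])

N = 16
x = np.arange(N) * 2 * np.pi / N
X, Y = np.meshgrid(x, x, indexing="ij")
# a smooth periodic field with zero mean (so the k = 0 mode vanishes)
a = np.stack([np.sin(X) * np.cos(2 * Y) + np.cos(X + Y),
              np.sin(2 * X + Y) + np.cos(Y)])
ah = np.fft.fft2(a)                       # componentwise FFT over the grid axes
k = np.fft.fftfreq(N, d=1.0 / N)          # integer wave numbers

adiv_h = np.zeros_like(ah)
acurl_h = np.zeros_like(ah)
for i in range(N):
    for j in range(N):
        kv = np.array([k[i], k[j]])
        k2 = kv @ kv
        if k2 == 0:
            continue
        K = cross_mat(kv)                               # k_{x2} = (-k2, k1)
        acurl_h[:, i, j] = kv * (kv @ ah[:, i, j]) / k2       # gradient part
        adiv_h[:, i, j] = K.T @ (K @ ah[:, i, j]) / k2        # div-free part via (3.40)

a_div = np.real(np.fft.ifft2(adiv_h))
a_curl = np.real(np.fft.ifft2(acurl_h))
recomb = np.max(np.abs(a - a_div - a_curl))

# spectral divergence of the divergence-free part, and curl_2 of the gradient part
div_h = 1j * (k[:, None] * np.fft.fft2(a_div[0]) + k[None, :] * np.fft.fft2(a_div[1]))
curl_h = 1j * (k[:, None] * np.fft.fft2(a_curl[1]) - k[None, :] * np.fft.fft2(a_curl[0]))
div_err = np.max(np.abs(div_h))
curl_err = np.max(np.abs(curl_h))
```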

Robbin's proof of the div-curl lemma in higher dimensions
In this last section, we show that the proof of the div-curl lemma presented in [21] in three dimensions can be adopted directly in all dimensions, using the matrix representation of the generalized cross product presented above. More precisely, we show:

Lemma 4.3 (div-curl lemma). Let $\Omega \subseteq \mathbb{R}^n$ be open and bounded, and let $u_k, u, v_k, v \in L^2(\Omega, \mathbb{R}^n)$ with $u_k \rightharpoonup u$ and $v_k \rightharpoonup v$ in $L^2(\Omega, \mathbb{R}^n)$, such that, in addition,

$$\{\operatorname{div} u_k\}_{k \in \mathbb{N}} \text{ is bounded in } L^2(\Omega), \tag{4.38a}$$

$$\{\operatorname{curl}_n v_k\}_{k \in \mathbb{N}} \text{ is bounded in } L^2\big(\Omega, \mathbb{R}^{\frac{n(n-1)}{2}}\big). \tag{4.38b}$$

Then in the sense of distributions we have $\langle u_k, v_k \rangle \rightharpoonup \langle u, v \rangle$, i.e. it holds for all $\varphi \in C^\infty_c(\Omega, \mathbb{R})$:

$$\int_\Omega \langle u_k, v_k \rangle\, \varphi \,\mathrm{d}x \to \int_\Omega \langle u, v \rangle\, \varphi \,\mathrm{d}x.$$

This is not the most general formulation of the div-curl lemma, and we refer to [4, 24] and the references contained therein for both historical comments and generalizations. The main objective here is to demonstrate that the algebraic view advocated in the previous sections allows us to carry out the proof from [21] in all dimensions without introducing the language of differential geometry. In three dimensions, Robbin's proof is based on the decomposition of the vector Laplacian (4.29). In the previous section, we obtained the desired decomposition in all dimensions, see (4.28).
Furthermore, multiplying (3.40) by $b_{\times_n}$ from the left, and using $b_{\times_n}(b \otimes b) = (b_{\times_n} b)\, b^T = 0$, we deduce for all $b \in \mathbb{R}^n$:

$$b_{\times_n}\, b_{\times_n}^T\, b_{\times_n} = \lVert b \rVert^2 \cdot b_{\times_n},$$

so that in the language of vector calculus it becomes

$$\operatorname{curl}_n \nabla_{\times_n}^T \operatorname{curl}_n a = \Delta \operatorname{curl}_n a \quad \text{for } a \in C^\infty_c(\Omega, \mathbb{R}^n). \tag{4.41}$$

Now we have prepared all the relations between differential operators that we need to follow Robbin's proof.
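The algebraic relation behind (4.41) is a direct matrix identity and can be confirmed numerically (a sketch; helper names ours):

```python
import numpy as np

def cross_mat(a):
    """Matrix a_{xn} with a x_n b = cross_mat(a) @ b, built inductively."""
    n = len(a)
    if n == 2:
        return np.array([[-a[1], a[0]]])
    top = np.hstack([cross_mat(a[:-1]), np.zeros(((n - 1) * (n - 2) // 2, 1))])
    bot = np.hstack([-a[-1] * np.eye(n - 1), a[:-1][:, None]])
    return np.vstack([top, bot])

rng = np.random.default_rng(9)
err = 0.0
for n in range(2, 7):
    b = rng.standard_normal(n)
    B = cross_mat(b)
    # b_{xn} b^T_{xn} b_{xn} = |b|^2 b_{xn}
    err = max(err, np.linalg.norm(B @ B.T @ B - (b @ b) * B))
```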
Proof of Lemma 4.3. It suffices to assume that the functions $u_k$ and $v_k$ have compact support, see [21] and the corresponding relations (4.34). Let us extend $u_k$ by zero to the entire space $\mathbb{R}^n$ and denote by $w_k$ the unique solution in $L^2(\Omega, \mathbb{R}^n)$ of

$$\Delta w_k = u_k. \tag{4.42}$$

Thus, we write $w_k = \Delta^{-1} u_k$ and set

$$\psi_k = \operatorname{div} w_k \quad \text{and} \quad g_k = \operatorname{curl}_n w_k, \tag{4.44}$$

so that

$$\nabla \psi_k + \nabla_{\times_n}^T g_k = \nabla \operatorname{div} w_k + \nabla_{\times_n}^T \operatorname{curl}_n w_k \overset{(4.28)}{=} \Delta w_k \overset{(4.42)}{=} u_k. \tag{4.45}$$

Next, $\Delta^{-1}$ commutes with the div and $\operatorname{curl}_n$ operators in all dimensions $n \ge 2$, and we can conclude as in [21]:

$$\psi_k = \operatorname{div} \Delta^{-1} u_k = \Delta^{-1} \operatorname{div} u_k \to \psi \quad \text{in } L^2(\Omega, \mathbb{R}),$$

at least for a subsequence, by (4.38a). Moreover, for $f_k := \operatorname{curl}_n \Delta^{-1} v_k$ we obtain

$$\nabla_{\times_n}^T f_k = \nabla_{\times_n}^T \Delta^{-1} \operatorname{curl}_n v_k \to \nabla_{\times_n}^T f \quad \text{in } L^2(\Omega, \mathbb{R}^n), \tag{4.49}$$

at least for a subsequence, by (4.38b). Furthermore:

$$g_k \to g \quad \text{in } L^2\big(\Omega, \mathbb{R}^{\frac{n(n-1)}{2}}\big), \qquad \nabla_{\times_n}^T g_k \rightharpoonup \nabla_{\times_n}^T g \quad \text{in } L^2(\Omega, \mathbb{R}^n), \tag{4.50}$$

and for $\varphi_k := \operatorname{div} \Delta^{-1} v_k$:

$$\varphi_k \to \varphi \quad \text{in } L^2(\Omega, \mathbb{R}), \qquad \nabla \varphi_k \rightharpoonup \nabla \varphi \quad \text{in } L^2(\Omega, \mathbb{R}^n). \tag{4.51}$$

With the above results the proof is completed as in [21].

Conclusion
In the present paper, we have studied the algebraic structures underlying the generalized cross product, by relating it to an adequate matrix multiplication. The situation differs from that of the usual cross product in three dimensions, where a matrix representation results in a skew-symmetric matrix. The lack of symmetry in the general case leads to the fact that the known algebraic identities have to be adapted in an appropriate way and that also other combinations must be included. In vector calculus, this led not only to the generalized curl n , but also to a new operator ∇ T ×n . The importance of the latter has been highlighted in the previous sections, in particular by the fact that the image of the ∇ T ×n operator lies in the kernel of the divergence operator, see (4.12). Here we have thoroughly examined the matrix analysis behind such operations. Such a view has already proved very useful in extending Korn inequalities for incompatible tensor fields to higher dimensions, cf. [17], where first results in these matrix representations have been obtained. With the better understanding presented here, we are now in a position to further extend Korn-Maxwell-Sobolev type inequalities. This will be the subject of a forthcoming paper.