Open Access (CC BY 4.0). Published by De Gruyter Open Access, October 13, 2022.

Displacement structure of the DMP inverse

Jin Zhong and Hong Yang
From the journal Open Mathematics

Abstract

A matrix $A$ is said to have the displacement structure if the rank of the Sylvester displacement $AU - VA$ or the Stein displacement $A - VAU$ is much smaller than the rank of $A$. In this article, we study the displacement structure of the DMP inverse $A^{d,\dagger}$. Estimations for the Sylvester displacement rank of the DMP inverse are presented under some restrictions. The generalized displacement is also discussed. The general results are applied to the core inverse.

MSC 2010: 15A09; 15A10

1 Introduction

Let $\mathbb{C}^{m\times n}$ be the set of all $m\times n$ complex matrices. For $A\in\mathbb{C}^{m\times n}$, $\mathcal{R}(A)$, $\mathrm{Ker}(A)$, $A^*$, and $A^T$ denote the range, kernel, conjugate transpose, and transpose of $A$, respectively. $I_n$ is the identity matrix of order $n$. The index of $A\in\mathbb{C}^{n\times n}$, denoted by $\mathrm{Ind}(A)$, is the smallest nonnegative integer $k$ such that $\mathrm{rank}(A^k) = \mathrm{rank}(A^{k+1})$. The present article studies the displacement structure of the DMP inverse [1], motivated by [2,3,4,5,6,7,8,9,10]. To begin with, we recall the definitions of some generalized inverses.

The Moore-Penrose inverse of a matrix $A\in\mathbb{C}^{m\times n}$, denoted by $A^\dagger$, is the unique matrix $X\in\mathbb{C}^{n\times m}$ satisfying the following Penrose equations [11]:

$$AXA = A,\quad XAX = X,\quad (AX)^* = AX,\quad (XA)^* = XA.$$

The Drazin inverse of $A\in\mathbb{C}^{n\times n}$ is the unique matrix $X = A^d$ such that

$$A^{k+1}X = A^k,\quad XAX = X,\quad AX = XA,$$

where $k$ is the index of $A$. When $\mathrm{Ind}(A) = 1$, $A^d$ is called the group inverse of $A$ and is denoted by $A^\#$; see [11].

In [12], Baksalary and Trenkler introduced the notion of the core inverse for a square matrix of index 1. Let $A\in\mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A) = 1$. Then the core inverse of $A$ is defined as the unique matrix $X\in\mathbb{C}^{n\times n}$ satisfying

$$AX = AA^\dagger,\qquad \mathcal{R}(X)\subseteq\mathcal{R}(A),$$

and is denoted by $X = A^{\textcircled{\#}}$. It is known that $A^{\textcircled{\#}} = A^\#AA^\dagger$.

Three generalizations of the core inverse were recently introduced for $n\times n$ complex matrices, namely, the core-EP inverse [13], the BT inverse [14], and the DMP inverse [15]. The DMP inverse [16] of $A\in\mathbb{C}^{n\times n}$, denoted as $A^{d,\dagger}$, is the unique solution to the system of equations:

$$XAX = X,\quad XA = A^dA,\quad A^kX = A^kA^\dagger,$$

where $k = \mathrm{Ind}(A)$. It was shown that $A^{d,\dagger} = A^dAA^\dagger$. Thus, if $\mathrm{Ind}(A) = 1$, then $A^{d,\dagger} = A^\#AA^\dagger = A^{\textcircled{\#}}$, the core inverse of $A$. For more properties of the DMP inverse, we refer to [17,18,19,20].
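Since $A^{d,\dagger} = A^dAA^\dagger$, the DMP inverse is easy to form numerically once $A^d$ and $A^\dagger$ are available. The following NumPy sketch is an added illustration, not part of the original development: it uses the representation $A^d = A^k(A^{2k+1})^\dagger A^k$ (valid for any $k\ge\mathrm{Ind}(A)$) together with `numpy.linalg.pinv`, and checks the three defining equations on a small index-2 test matrix of our own choosing.

```python
import numpy as np

def dmp_inverse(A, k):
    """DMP inverse A^{d,+} = A^d A A^+, with A^d formed from the representation
    A^d = A^k (A^(2k+1))^+ A^k, assuming k >= Ind(A)."""
    Ak = np.linalg.matrix_power(A, k)
    Ad = Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak
    return Ad, Ad @ A @ np.linalg.pinv(A)

# illustrative index-2 matrix: rank A = 2, rank A^2 = rank A^3 = 1
A = np.array([[1., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
k = 2
Ad, X = dmp_inverse(A, k)

Ak, Aplus = np.linalg.matrix_power(A, k), np.linalg.pinv(A)
print(np.allclose(X @ A @ X, X))         # X A X = X
print(np.allclose(X @ A, Ad @ A))        # X A = A^d A
print(np.allclose(Ak @ X, Ak @ Aplus))   # A^k X = A^k A^+
```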

Furthermore, the definition of the DMP inverse of a square matrix was extended to rectangular matrices in [21], which is called the W -weighted DMP inverse.

Malik and Thome [15] gave the canonical form for the DMP inverse of a square matrix by using the Hartwig-Spindelböck decomposition (see [22]), which is useful in analyzing the properties of the DMP inverse. For any $A\in\mathbb{C}^{n\times n}$ of rank $r > 0$, the Hartwig-Spindelböck decomposition is given by

(1) $A = S\begin{pmatrix}\Sigma K & \Sigma L\\ 0 & 0\end{pmatrix}S^*,$

where $S\in\mathbb{C}^{n\times n}$ is unitary, $\Sigma = \mathrm{diag}(\sigma_1I_{r_1}, \sigma_2I_{r_2},\ldots,\sigma_tI_{r_t})$ is a diagonal matrix, the diagonal entries $\sigma_i$ ($1\le i\le t$) are the singular values of $A$ with $\sigma_1 > \sigma_2 > \cdots > \sigma_t > 0$, $r_1 + r_2 + \cdots + r_t = r$, and $K\in\mathbb{C}^{r\times r}$, $L\in\mathbb{C}^{r\times(n-r)}$ satisfy $KK^* + LL^* = I_r$.
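Decomposition (1) can be realized numerically from an ordinary singular value decomposition $A = U\tilde\Sigma V^*$: taking $S = U$ and letting $[K\ L]$ be the first $r$ rows of $V^*U$ reproduces the block form above. The sketch below is our own illustration of this construction; the test matrix is an arbitrary choice.

```python
import numpy as np

def hartwig_spindelbock(A):
    """Build (1): A = S [[Sigma K, Sigma L],[0,0]] S^*, starting from an SVD."""
    n = A.shape[0]
    U, s, Vh = np.linalg.svd(A)
    r = int(np.sum(s > 1e-12))               # rank of A
    Sigma = np.diag(s[:r])
    KL = Vh @ U                              # V^* U; its first r rows give [K  L]
    K, L = KL[:r, :r], KL[:r, r:]
    top = np.hstack([Sigma @ K, Sigma @ L])
    B = np.vstack([top, np.zeros((n - r, n))])
    return U, B, K, L                        # A = U @ B @ U^*

A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
S, B, K, L = hartwig_spindelbock(A)
print(np.allclose(S @ B @ S.conj().T, A))                               # reproduces A
r = K.shape[0]
print(np.allclose(K @ K.conj().T + L @ L.conj().T, np.eye(r)))          # K K^* + L L^* = I_r
```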

Lemma 1.1

[15] Let $A\in\mathbb{C}^{n\times n}$ be of the form (1). Then

(2) $A^{d,\dagger} = S\begin{pmatrix}(\Sigma K)^d & 0\\ 0 & 0\end{pmatrix}S^*.$

Lemma 1.2

[15] Let $A\in\mathbb{C}^{n\times n}$ be a matrix of index $m$ written as (1). Then $\mathrm{Ind}(\Sigma K) = m - 1$.

The concept of the displacement structure was presented in [23] for the inverse of an integral operator with a convolution kernel. A matrix $A$ is said to have the displacement structure if we can find two matrices $U$ and $V$ of compatible dimensions such that the rank of the Sylvester displacement $AU - VA$ or the Stein displacement $A - VAU$ is much smaller than the rank of $A$ [24]. It is well known that fast inversion algorithms for a matrix $A$ can be constructed if $A$ is a matrix with a displacement structure. The displacement structure is commonly exploited in the computation of generalized inverses. In recent years, displacement structures of various generalized inverses, such as the Moore-Penrose inverse, the weighted Moore-Penrose inverse, the group inverse, the $M$-group inverse, the Drazin inverse, the $W$-weighted Drazin inverse, and the core inverse, were studied; see [2,3,4,5,6,8,9,10].

In this article, we will give an upper bound for the Sylvester displacement rank of the DMP inverse under some restrictions in Section 2. In Section 3, we will consider a more general displacement. An estimation for the generalized displacement rank of the DMP inverse under some restrictions is presented. The general results are applied to the core inverse.

2 Sylvester displacement rank

Let $A\in\mathbb{C}^{m\times n}$ and let $U\in\mathbb{C}^{n\times n}$, $V\in\mathbb{C}^{m\times m}$ be some fixed matrices. The operator

$$d_{(U,V)}A = AU - VA$$

is called the Sylvester displacement of $A$. The rank of $d_{(U,V)}A$ is called the Sylvester displacement rank of $A$. It is well known that the Sylvester displacement rank of a Toeplitz matrix is at most 2. This low displacement rank property can be exploited to develop fast algorithms for triangular factorization and inversion, among others [24]. For a nonsingular matrix $A$, the equality $AU - VA = A(UA^{-1} - A^{-1}V)A$ tells us that the Sylvester displacement rank of a nonsingular matrix $A$ equals that of its inverse $A^{-1}$. In other words, if $A$ is structured with respect to the Sylvester displacement rank associated with $(U,V)$, then its inverse $A^{-1}$ is also structured with respect to the Sylvester displacement rank associated with $(V,U)$. It is natural to consider the displacement rank of generalized inverses. In this section, we will give an upper bound for the displacement rank of the DMP inverse under some restrictions.
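Both observations are easy to confirm numerically. The snippet below is our own illustration (with $U = V$ taken to be the lower shift matrix): it checks that a Toeplitz matrix has Sylvester displacement rank at most 2 and that a nonsingular matrix and its inverse have equal displacement ranks.

```python
import numpy as np

rank = np.linalg.matrix_rank
rng = np.random.default_rng(0)

n = 6
col, row = rng.standard_normal(n), rng.standard_normal(n)
T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(n)]
              for i in range(n)])           # a random Toeplitz matrix

Z = np.diag(np.ones(n - 1), -1)             # lower shift (shift-down) matrix
print(rank(T @ Z - Z @ T))                  # at most 2 for every Toeplitz matrix

A = T + 5.0 * np.eye(n)                     # still Toeplitz, generically nonsingular
Ainv = np.linalg.inv(A)
# A^{-1}(AU - VA)A^{-1} = UA^{-1} - A^{-1}V, so the two displacement ranks agree:
print(rank(A @ Z - Z @ A), rank(Ainv @ Z - Z @ Ainv))
```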

Lemma 2.1

[11] Let $A, P\in\mathbb{C}^{n\times n}$ satisfy $P^2 = P$. Then

  1. $PA = A$ if and only if $\mathcal{R}(A)\subseteq\mathcal{R}(P)$;

  2. $AP = A$ if and only if $\mathrm{Ker}(P)\subseteq\mathrm{Ker}(A)$.

Define the matrices

$$M = A^{d,\dagger}A,\quad N = I - M,\quad M' = AA^{d,\dagger},\quad N' = I - M'.$$

Then it is easy to see that $A^{d,\dagger}(AU - VA)A^{d,\dagger} = (I - N)UA^{d,\dagger} - A^{d,\dagger}V(I - N')$. Thus, we obtain the following lemma.

Lemma 2.2

Let $A, U, V\in\mathbb{C}^{n\times n}$. Then

(3) $A^{d,\dagger}V - UA^{d,\dagger} = A^{d,\dagger}VN' - NUA^{d,\dagger} - A^{d,\dagger}(AU - VA)A^{d,\dagger}.$

By considering the ranks of both sides of equality (3), we can obtain an upper bound for the Sylvester displacement rank of $A^{d,\dagger}$.

Theorem 2.1

The $(V,U)$-displacement rank of $A^{d,\dagger}$ satisfies the following estimate:

(4) $\mathrm{rank}(A^{d,\dagger}V - UA^{d,\dagger}) \le \mathrm{rank}(AU - VA) + \mathrm{rank}(M'VN') + \mathrm{rank}(NUM).$

Proof

It is clear that $\mathcal{R}(M) = \mathcal{R}(A^{d,\dagger}A)\subseteq\mathcal{R}(A^{d,\dagger})$. Since

$$\mathcal{R}(A^{d,\dagger}) = \mathcal{R}(A^dAA^\dagger)\subseteq\mathcal{R}(A^dA) = \mathcal{R}(A^{d,\dagger}A) = \mathcal{R}(M),$$

then $\mathcal{R}(M) = \mathcal{R}(A^{d,\dagger})$.

On the other hand, it is clear that $\mathrm{Ker}(A^{d,\dagger})\subseteq\mathrm{Ker}(AA^{d,\dagger}) = \mathrm{Ker}(M')$. Conversely, for any $x\in\mathrm{Ker}(M')$, $AA^{d,\dagger}x = AA^dAA^\dagger x = 0$, which implies that $A^dAA^dAA^\dagger x = A^dAA^\dagger x = A^{d,\dagger}x = 0$, i.e., $\mathrm{Ker}(M')\subseteq\mathrm{Ker}(A^{d,\dagger})$. Therefore, $\mathrm{Ker}(A^{d,\dagger}) = \mathrm{Ker}(M')$.

It follows that

$$\mathrm{rank}(NUA^{d,\dagger}) = \dim[\mathcal{R}(NUA^{d,\dagger})] = \dim[NU\mathcal{R}(A^{d,\dagger})] = \dim[NU\mathcal{R}(M)] = \dim[\mathcal{R}(NUM)] = \mathrm{rank}(NUM)$$

and

$$\mathrm{rank}(A^{d,\dagger}VN') = n - \dim[\mathrm{Ker}(A^{d,\dagger}VN')] = n - \dim[\mathrm{Ker}(M'VN')] = \mathrm{rank}(M'VN').$$

Now, the inequality (4) follows immediately from Lemma 2.2.□
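Identity (3) and estimate (4) can be verified numerically. In the sketch below (an added illustration), an index-2 test matrix is built by similarity from a Jordan-type block, the DMP inverse is formed via $A^{d,\dagger} = A^dAA^\dagger$ with $A^d = A^k(A^{2k+1})^\dagger A^k$, and $U, V$ are random.

```python
import numpy as np

rank, pinv, mpow = np.linalg.matrix_rank, np.linalg.pinv, np.linalg.matrix_power
rng = np.random.default_rng(1)

n, k = 6, 2
J = np.diag([2., 3., 4., 0., 0., 0.]); J[3, 4] = 1.0   # 2x2 nilpotent block -> Ind = 2
B = rng.standard_normal((n, n))
A = B @ J @ np.linalg.inv(B)                           # Ind(A) = 2

Ad = mpow(A, k) @ pinv(mpow(A, 2 * k + 1)) @ mpow(A, k)
X = Ad @ A @ pinv(A)                                   # DMP inverse A^{d,+}

U, V = rng.standard_normal((n, n)), rng.standard_normal((n, n))
M, Mp = X @ A, A @ X
N, Np = np.eye(n) - M, np.eye(n) - Mp

# identity (3)
lhs = X @ V - U @ X
rhs = X @ V @ Np - N @ U @ X - X @ (A @ U - V @ A) @ X
print(np.allclose(lhs, rhs))

# estimate (4)
print(rank(lhs) <= rank(A @ U - V @ A) + rank(Mp @ V @ Np) + rank(N @ U @ M))
```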

Next we give an upper bound for the sum of the second and third terms on the right-hand side of (4) under some restrictions.

Theorem 2.2

Let $A\in\mathbb{C}^{n\times n}$ be of the form (1) and let

$$U = S\begin{pmatrix}U_{11} & U_{12}\\ U_{21} & U_{22}\end{pmatrix}S^*\in\mathbb{C}^{n\times n},\qquad V = S\begin{pmatrix}V_{11} & V_{12}\\ V_{21} & V_{22}\end{pmatrix}S^*\in\mathbb{C}^{n\times n},$$

where $U_{11}, V_{11}\in\mathbb{C}^{r\times r}$. If $\mathcal{R}(U_{11})\subseteq\mathcal{R}[(\Sigma K)^{p-1}]$ and $\mathrm{Ker}[(\Sigma K)^{p-1}]\subseteq\mathrm{Ker}(V_{11})$, then

(5) $\mathrm{rank}(M'VN') + \mathrm{rank}(NUM) \le \mathrm{rank}(UG - GV),$

where $G = A^pA^\dagger$, $p = \mathrm{Ind}(A)$.

Proof

We first determine the structure of $G$. If $A$ has the form (1), then it is not difficult to see that

$$A^\dagger = S\begin{pmatrix}K^*\Sigma^{-1} & 0\\ L^*\Sigma^{-1} & 0\end{pmatrix}S^*.$$

Thus,

$$G = A^pA^\dagger = S\begin{pmatrix}(\Sigma K)^p & (\Sigma K)^{p-1}\Sigma L\\ 0 & 0\end{pmatrix}S^*S\begin{pmatrix}K^*\Sigma^{-1} & 0\\ L^*\Sigma^{-1} & 0\end{pmatrix}S^* = S\begin{pmatrix}(\Sigma K)^pK^*\Sigma^{-1} + (\Sigma K)^{p-1}\Sigma LL^*\Sigma^{-1} & 0\\ 0 & 0\end{pmatrix}S^* = S\begin{pmatrix}(\Sigma K)^{p-1}\Sigma(KK^* + LL^*)\Sigma^{-1} & 0\\ 0 & 0\end{pmatrix}S^* = S\begin{pmatrix}(\Sigma K)^{p-1} & 0\\ 0 & 0\end{pmatrix}S^*.$$

Next, we determine the block representations of $M'VN'$ and $NUM$. Denote $P = (\Sigma K)(\Sigma K)^d$. Then

(6) $M'VN' = AA^{d,\dagger}V(I - AA^{d,\dagger}) = S\begin{pmatrix}P & 0\\ 0 & 0\end{pmatrix}\begin{pmatrix}V_{11} & V_{12}\\ V_{21} & V_{22}\end{pmatrix}\begin{pmatrix}I-P & 0\\ 0 & I\end{pmatrix}S^* = S\begin{pmatrix}PV_{11}(I-P) & PV_{12}\\ 0 & 0\end{pmatrix}S^*.$

Since $\mathrm{Ind}(A) = p$, by Lemma 1.2, $\mathrm{Ind}(\Sigma K) = p - 1$. If $\mathrm{Ker}[(\Sigma K)^{p-1}]\subseteq\mathrm{Ker}(V_{11})$, then $\mathrm{Ker}(P) = \mathrm{Ker}[(\Sigma K)^d] = \mathrm{Ker}[(\Sigma K)^{p-1}]\subseteq\mathrm{Ker}(V_{11})$. Thus, by Lemma 2.1, $V_{11}(I - P) = 0$. Now, it can be seen from (6) that

(7) $\mathrm{rank}(M'VN') = \mathrm{rank}(PV_{12}) = (n-r) - \dim\mathrm{Ker}(PV_{12}) = (n-r) - \dim\mathrm{Ker}[(\Sigma K)^{p-1}V_{12}] = \mathrm{rank}[(\Sigma K)^{p-1}V_{12}].$

On the other hand,

(8) $NUM = (I - A^{d,\dagger}A)U(A^{d,\dagger}A) = S\begin{pmatrix}I-P & -Q\\ 0 & I\end{pmatrix}\begin{pmatrix}U_{11} & U_{12}\\ U_{21} & U_{22}\end{pmatrix}\begin{pmatrix}P & Q\\ 0 & 0\end{pmatrix}S^* = S\begin{pmatrix}(I-P)U_{11}P - QU_{21}P & (I-P)U_{11}Q - QU_{21}Q\\ U_{21}P & U_{21}Q\end{pmatrix}S^*,$

where $Q = (\Sigma K)^d(\Sigma L)$.

We can see from (8) that

(9) $\mathrm{rank}(NUM) = \mathrm{rank}\left(S\begin{pmatrix}I & Q\\ 0 & I\end{pmatrix}S^*\,NUM\right) = \mathrm{rank}\left(S\begin{pmatrix}(I-P)U_{11}P & (I-P)U_{11}Q\\ U_{21}P & U_{21}Q\end{pmatrix}S^*\right) = \mathrm{rank}\left(S\begin{pmatrix}(I-P)U_{11} & 0\\ U_{21}P & 0\end{pmatrix}\begin{pmatrix}P & Q\\ 0 & 0\end{pmatrix}S^*\right) \le \mathrm{rank}\begin{pmatrix}(I-P)U_{11} & 0\\ U_{21}P & 0\end{pmatrix}.$

Notice that $\mathcal{R}[(\Sigma K)^{p-1}] = \mathcal{R}[(\Sigma K)^d] = \mathcal{R}(P)$. If $\mathcal{R}(U_{11})\subseteq\mathcal{R}[(\Sigma K)^{p-1}]$, then by Lemma 2.1, $PU_{11} = U_{11}$. It follows from (9) that

(10) $\mathrm{rank}(NUM) \le \mathrm{rank}(U_{21}P) = \mathrm{rank}[U_{21}(\Sigma K)^d] = \mathrm{rank}[U_{21}(\Sigma K)^{p-1}].$

Set $F = UG - GV$. Then

$$S^*FS = S^*US\,S^*GS - S^*GS\,S^*VS = \begin{pmatrix}U_{11} & U_{12}\\ U_{21} & U_{22}\end{pmatrix}\begin{pmatrix}(\Sigma K)^{p-1} & 0\\ 0 & 0\end{pmatrix} - \begin{pmatrix}(\Sigma K)^{p-1} & 0\\ 0 & 0\end{pmatrix}\begin{pmatrix}V_{11} & V_{12}\\ V_{21} & V_{22}\end{pmatrix} = \begin{pmatrix}U_{11}(\Sigma K)^{p-1} - (\Sigma K)^{p-1}V_{11} & -(\Sigma K)^{p-1}V_{12}\\ U_{21}(\Sigma K)^{p-1} & 0\end{pmatrix}.$$

It follows from [25], (7), and (10) that

$$\mathrm{rank}\begin{pmatrix}U_{11}(\Sigma K)^{p-1} - (\Sigma K)^{p-1}V_{11} & -(\Sigma K)^{p-1}V_{12}\\ U_{21}(\Sigma K)^{p-1} & 0\end{pmatrix} \ge \mathrm{rank}[(\Sigma K)^{p-1}V_{12}] + \mathrm{rank}[U_{21}(\Sigma K)^{p-1}] \ge \mathrm{rank}(M'VN') + \mathrm{rank}(NUM),$$

which completes the proof.□

We remark that if the two conditions $\mathcal{R}(U_{11})\subseteq\mathcal{R}[(\Sigma K)^{p-1}]$ and $\mathrm{Ker}[(\Sigma K)^{p-1}]\subseteq\mathrm{Ker}(V_{11})$ were removed from Theorem 2.2, then inequality (5) may not hold. We give an example to show this.

Example 2.1

Let

$$A = \begin{pmatrix}0 & 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 0 & 1\\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 0\end{pmatrix}.$$

Then $\mathrm{rank}(A) = 4$ and $\mathrm{rank}(A^2) = \mathrm{rank}(A^3) = 3$. Thus, $\mathrm{Ind}(A) = 2$. The singular value decomposition of $A$ is

$$A = S\tilde\Sigma Q^T = \begin{pmatrix}0 & \tfrac{1}{\sqrt2} & 0 & 0 & \tfrac{1}{\sqrt2}\\ y & 0 & 0 & x & 0\\ 0 & 0 & 1 & 0 & 0\\ 0 & \tfrac{1}{\sqrt2} & 0 & 0 & -\tfrac{1}{\sqrt2}\\ x & 0 & 0 & -y & 0\end{pmatrix}\begin{pmatrix}\tfrac{\sqrt5+1}{2} & 0 & 0 & 0 & 0\\ 0 & \sqrt2 & 0 & 0 & 0\\ 0 & 0 & \sqrt2 & 0 & 0\\ 0 & 0 & 0 & \tfrac{\sqrt5-1}{2} & 0\\ 0 & 0 & 0 & 0 & 0\end{pmatrix}\begin{pmatrix}x & 0 & 0 & y & 0\\ 0 & 0 & 1 & 0 & 0\\ 0 & \tfrac{1}{\sqrt2} & 0 & 0 & \tfrac{1}{\sqrt2}\\ y & 0 & 0 & -x & 0\\ 0 & \tfrac{1}{\sqrt2} & 0 & 0 & -\tfrac{1}{\sqrt2}\end{pmatrix} = S\begin{pmatrix}\Sigma & 0\\ 0 & 0\end{pmatrix}Q^TSS^T = S\begin{pmatrix}\Sigma & 0\\ 0 & 0\end{pmatrix}\begin{pmatrix}K & L\\ M & N\end{pmatrix}S^T = S\begin{pmatrix}\Sigma K & \Sigma L\\ 0 & 0\end{pmatrix}S^T,$$

where $x = \sqrt{\tfrac{5-\sqrt5}{10}}$, $y = \sqrt{\tfrac{5+\sqrt5}{10}}$, and $\Sigma K = \begin{pmatrix}0 & \tfrac{(x+y)(\sqrt5+1)}{2\sqrt2} & 0 & 0\\ 0 & 0 & \sqrt2 & 0\\ x+y & 0 & 0 & x-y\\ 0 & \tfrac{(y-x)(\sqrt5-1)}{2\sqrt2} & 0 & 0\end{pmatrix}$.

Let

$$U = V = S\begin{pmatrix}I_4 & 0\\ 0 & 0\end{pmatrix}S^T = \begin{pmatrix}\tfrac12 & 0 & 0 & \tfrac12 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0\\ \tfrac12 & 0 & 0 & \tfrac12 & 0\\ 0 & 0 & 0 & 0 & 1\end{pmatrix}.$$

Then it can be seen from $\mathrm{rank}(\Sigma K) = 3$ and $\mathrm{rank}(U_{11}) = \mathrm{rank}(V_{11}) = 4$ that $\mathcal{R}(U_{11})\not\subseteq\mathcal{R}(\Sigma K)$ and $\mathrm{Ker}(\Sigma K)\not\subseteq\mathrm{Ker}(V_{11})$.

A direct calculation shows that

$$M = A^{d,\dagger}A = \begin{pmatrix}\tfrac13 & 0 & 0 & \tfrac23 & 0\\ 0 & \tfrac23 & 0 & 0 & \tfrac23\\ 0 & 0 & 1 & 0 & 0\\ \tfrac13 & 0 & 0 & \tfrac23 & 0\\ 0 & \tfrac13 & 0 & 0 & \tfrac13\end{pmatrix},\qquad M' = AA^{d,\dagger} = \begin{pmatrix}\tfrac12 & 0 & 0 & \tfrac12 & 0\\ 0 & \tfrac23 & 0 & 0 & \tfrac23\\ 0 & 0 & 1 & 0 & 0\\ \tfrac12 & 0 & 0 & \tfrac12 & 0\\ 0 & \tfrac13 & 0 & 0 & \tfrac13\end{pmatrix}$$

and

$$G = A^2A^\dagger = \begin{pmatrix}0 & 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 0 & 1\\ 0 & 0 & 1 & 0 & 0\\ \tfrac12 & 0 & 0 & \tfrac12 & 0\end{pmatrix}.$$

Now, it can be seen that

$$\mathrm{rank}(M'VN') = 3,\qquad \mathrm{rank}(NUM) = 2,\qquad \mathrm{rank}(UG - GV) = 2,$$

i.e., $\mathrm{rank}(M'VN') + \mathrm{rank}(NUM) > \mathrm{rank}(UG - GV)$.

By combining Theorems 2.1 and 2.2 we have the following.

Corollary 2.1

Let $A\in\mathbb{C}^{n\times n}$ be of the form (1) and let

$$U = S\begin{pmatrix}U_{11} & U_{12}\\ U_{21} & U_{22}\end{pmatrix}S^*\in\mathbb{C}^{n\times n},\qquad V = S\begin{pmatrix}V_{11} & V_{12}\\ V_{21} & V_{22}\end{pmatrix}S^*\in\mathbb{C}^{n\times n},$$

where $U_{11}, V_{11}\in\mathbb{C}^{r\times r}$. If $\mathcal{R}(U_{11})\subseteq\mathcal{R}[(\Sigma K)^{p-1}]$ and $\mathrm{Ker}[(\Sigma K)^{p-1}]\subseteq\mathrm{Ker}(V_{11})$, then

$$\mathrm{rank}(A^{d,\dagger}V - UA^{d,\dagger}) \le \mathrm{rank}(AU - VA) + \mathrm{rank}(UG - GV),$$

where $G = A^pA^\dagger$, $p = \mathrm{Ind}(A)$.
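Corollary 2.1 can be checked numerically by constructing $U$ and $V$ whose $(1,1)$ blocks satisfy the two conditions by design. The following sketch is our own illustration; the index-2 test matrix and the random blocks are arbitrary choices.

```python
import numpy as np

rank, pinv, mpow = np.linalg.matrix_rank, np.linalg.pinv, np.linalg.matrix_power
rng = np.random.default_rng(2)

# an index-2 test matrix
n, p = 5, 2
J = np.diag([3., 2., 1., 0., 0.]); J[3, 4] = 1.0        # size-2 nilpotent block
Bsim = rng.standard_normal((n, n))
A = Bsim @ J @ np.linalg.inv(Bsim)

# Hartwig-Spindelboeck pieces of (1): A = S [[Sigma K, Sigma L],[0,0]] S^*
Usvd, s, Vh = np.linalg.svd(A)
r = int(np.sum(s > 1e-10)); S = Usvd
SigK = np.diag(s[:r]) @ (Vh @ S)[:r, :r]                # Sigma K

# choose U, V so that R(U11) in R((Sigma K)^(p-1)) and Ker((Sigma K)^(p-1)) in Ker(V11)
W = mpow(SigK, p - 1)
U11 = W @ rng.standard_normal((r, r))
V11 = rng.standard_normal((r, r)) @ W
Ublk = rng.standard_normal((n, n)); Ublk[:r, :r] = U11
Vblk = rng.standard_normal((n, n)); Vblk[:r, :r] = V11
U = S @ Ublk @ S.conj().T
V = S @ Vblk @ S.conj().T

# DMP inverse and G = A^p A^+
Ad = mpow(A, p) @ pinv(mpow(A, 2 * p + 1)) @ mpow(A, p)
X = Ad @ A @ pinv(A)
G = mpow(A, p) @ pinv(A)

lhs = rank(X @ V - U @ X)
rhs = rank(A @ U - V @ A) + rank(U @ G - G @ V)
print(lhs, rhs, lhs <= rhs)
```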

Let $A\in\mathbb{C}^{n\times n}$ be of the form (1). If $\mathrm{Ind}(A) = 1$, it was shown in [22] that $K$ is nonsingular. In this case, $\Sigma K$ is also nonsingular. Then $(\Sigma K)^{p-1} = I$ and the two restrictions $\mathcal{R}(U_{11})\subseteq\mathcal{R}[(\Sigma K)^{p-1}]$ and $\mathrm{Ker}[(\Sigma K)^{p-1}]\subseteq\mathrm{Ker}(V_{11})$ can be removed from Theorem 2.2. Hence, we can obtain an estimate for the $(V,U)$-displacement rank of the core inverse.

Corollary 2.2

For any $A\in\mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A) = 1$, we have

(11) $\mathrm{rank}(A^{\textcircled{\#}}V - UA^{\textcircled{\#}}) \le \mathrm{rank}(AU - VA) + \mathrm{rank}(UG - GV),$

where $G = AA^\dagger$.

We remark that the author in [6] showed that (11) holds for another choice of $G$, while here we derive a different $G = AA^\dagger$ for which inequality (11) holds.
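For an index-1 matrix, estimate (11) can be tested directly, with the core inverse formed as $A^{\textcircled{\#}} = A^\#AA^\dagger$ and the group inverse obtained from the representation $A^\# = A(A^3)^\dagger A$ (valid when $\mathrm{Ind}(A) = 1$). The sketch below is our own illustration with an arbitrary index-1 test matrix and random $U$, $V$.

```python
import numpy as np

rank, pinv = np.linalg.matrix_rank, np.linalg.pinv
rng = np.random.default_rng(3)

n = 5
J = np.diag([2., 1., 3., 0., 0.])            # rank 3, index 1
B = rng.standard_normal((n, n))
A = B @ J @ np.linalg.inv(B)                 # Ind(A) = 1

Agroup = A @ pinv(A @ A @ A) @ A             # group inverse A^#
Acore = Agroup @ A @ pinv(A)                 # core inverse A^# A A^+
G = A @ pinv(A)                              # G = A A^+ as in Corollary 2.2

U, V = rng.standard_normal((n, n)), rng.standard_normal((n, n))
lhs = rank(Acore @ V - U @ Acore)
rhs = rank(A @ U - V @ A) + rank(U @ G - G @ V)
print(lhs, rhs, lhs <= rhs)
```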

3 Generalized displacement rank

In this section, we will consider a more general displacement, and the results of Sylvester displacement in Section 2 will be extended.

Let $a = [a_{ij}]_{i,j=0}^1$ be a nonsingular $2\times2$ matrix. For any fixed $U\in\mathbb{C}^{n\times n}$ and $V\in\mathbb{C}^{m\times m}$, the generalized $\nabla_a(V,U)$ displacement of $A\in\mathbb{C}^{m\times n}$ is defined by

(12) $\nabla_a(V,U)A = \displaystyle\sum_{i,j=0}^1 a_{ij}V^iAU^j.$

If we set

$$a = \begin{pmatrix}0 & 1\\ -1 & 0\end{pmatrix}\qquad\text{and}\qquad a = \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix},$$

respectively, in (12), then we obtain the Sylvester displacement and the Stein displacement.
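In code, (12) is a four-term sum, and the two choices of $a$ above indeed reproduce the Sylvester and Stein displacements; the following snippet is an added illustration with random test matrices.

```python
import numpy as np

def gen_displacement(a, V, U, A):
    """Generalized displacement (12): sum over i,j in {0,1} of a[i,j] V^i A U^j."""
    Vp = [np.eye(V.shape[0]), V]
    Up = [np.eye(U.shape[0]), U]
    return sum(a[i, j] * Vp[i] @ A @ Up[j] for i in range(2) for j in range(2))

rng = np.random.default_rng(4)
n = 4
A, U, V = (rng.standard_normal((n, n)) for _ in range(3))

a_sylvester = np.array([[0., 1.], [-1., 0.]])
a_stein = np.array([[1., 0.], [0., -1.]])
print(np.allclose(gen_displacement(a_sylvester, V, U, A), A @ U - V @ A))   # Sylvester
print(np.allclose(gen_displacement(a_stein, V, U, A), A - V @ A @ U))       # Stein
```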

For $Z\in\mathbb{C}^{n\times n}$ and $a = [a_{ij}]_{i,j=0}^1\in\mathbb{C}^{2\times2}$, if $a_{00}I + a_{01}Z$ is nonsingular, then we denote by $f_a(Z)$ the matrix

$$f_a(Z) = (a_{00}I + a_{01}Z)^{-1}(a_{10}I + a_{11}Z).$$

We first give two lemmas, which are taken from [4].

Lemma 3.1

Let $a$ be a $2\times2$ nonsingular matrix and $U\in\mathbb{C}^{n\times n}$, $V\in\mathbb{C}^{m\times m}$. Then there exist $2\times2$ matrices $b = [b_{ij}]_{i,j=0}^1$ and $c = [c_{ij}]_{i,j=0}^1$ such that $b_{00}I + b_{01}V$ and $c_{00}I + c_{01}U$ are nonsingular and

$$a = b^Tdc\qquad\text{for}\qquad d = \begin{pmatrix}0 & 1\\ -1 & 0\end{pmatrix}.$$

Lemma 3.2

Let $b$ and $c$ be matrices satisfying the conditions in Lemma 3.1. Then for $A\in\mathbb{C}^{m\times n}$,

$$\nabla_a(V,U)A = (b_{00}I + b_{01}V)[Af_c(U) - f_b(V)A](c_{00}I + c_{01}U).$$
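Lemma 3.2 can be checked numerically for any admissible factorization $a = b^Tdc$; for instance, taking $b = I_2$ and $c = d^{-1}a$ works whenever $c_{00}I + c_{01}U$ happens to be nonsingular. The sketch below is our own illustration of this special choice.

```python
import numpy as np

def f(coef, Z):
    """f_a(Z) = (a00 I + a01 Z)^{-1} (a10 I + a11 Z)."""
    n = Z.shape[0]
    return np.linalg.solve(coef[0, 0] * np.eye(n) + coef[0, 1] * Z,
                           coef[1, 0] * np.eye(n) + coef[1, 1] * Z)

def gen_displacement(a, V, U, A):
    Vp, Up = [np.eye(V.shape[0]), V], [np.eye(U.shape[0]), U]
    return sum(a[i, j] * Vp[i] @ A @ Up[j] for i in range(2) for j in range(2))

rng = np.random.default_rng(5)
n = 4
A, U, V = (rng.standard_normal((n, n)) for _ in range(3))

d = np.array([[0., 1.], [-1., 0.]])
a = np.array([[1., 2.], [3., 4.]])          # any nonsingular 2x2 matrix
b = np.eye(2)
c = np.linalg.inv(d) @ a                    # so that a = b^T d c

lhs = gen_displacement(a, V, U, A)
rhs = (b[0, 0] * np.eye(n) + b[0, 1] * V) @ (A @ f(c, U) - f(b, V) @ A) \
      @ (c[0, 0] * np.eye(n) + c[0, 1] * U)
print(np.allclose(lhs, rhs))
```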

The following results play important roles in obtaining an extension of Corollary 2.1 to the generalized $\nabla_a(U,V)$ displacement.

Theorem 3.1

  1. If $\psi = [\psi_{ij}]_{i,j=0}^1$ is nonsingular and $\psi_{00}I + \psi_{01}V$ is nonsingular, then

     $\mathrm{rank}(M'VN') = \mathrm{rank}(M'\tilde VN'),$

     where $\tilde V = f_\psi(V)$.

  2. If $\phi = [\phi_{ij}]_{i,j=0}^1$ is nonsingular and $\phi_{00}I + \phi_{01}U$ is nonsingular, then

     $\mathrm{rank}(NUM) = \mathrm{rank}(N\tilde UM),$

     where $\tilde U = f_\phi(U)$.

Proof

Define

$$\Psi = \mathrm{Ker}(G)\cap\mathrm{Ker}(GV),\qquad \Psi_1 = \mathrm{Ker}(G)\ominus\Psi,$$

where $G = A^pA^\dagger$, $p = \mathrm{Ind}(A)$, and $\Psi_1$ is the orthogonal complement of $\Psi$ in $\mathrm{Ker}(G)$. We conclude that $\mathrm{Ker}(A^{d,\dagger}) = \mathrm{Ker}(G)$. Indeed, for any $x\in\mathrm{Ker}(A^{d,\dagger})$, $A^{d,\dagger}x = A^dAA^\dagger x = 0$; thus, $A^dA^dAA^\dagger x = A^dA^\dagger x = 0$. It follows that $A^\dagger x\in\mathrm{Ker}(A^d) = \mathrm{Ker}(A^p)$. Hence, $A^pA^\dagger x = 0$, i.e., $\mathrm{Ker}(A^{d,\dagger})\subseteq\mathrm{Ker}(G)$. Conversely, for any $x\in\mathrm{Ker}(A^pA^\dagger)$, we have $A^\dagger x\in\mathrm{Ker}(A^p) = \mathrm{Ker}(A^d)$. Then $A^dA^\dagger x = 0$, which gives $AA^dA^\dagger x = A^dAA^\dagger x = 0$, i.e., $x\in\mathrm{Ker}(A^dAA^\dagger) = \mathrm{Ker}(A^{d,\dagger})$. Therefore, $\mathrm{Ker}(A^{d,\dagger}) = \mathrm{Ker}(G)$.

Next, we show that $M'VN'$ is one-to-one on $\Psi_1$. For any $x\in\mathrm{Ker}(M'VN')$ with $x\in\Psi_1$, we have $VN'x\in\mathrm{Ker}(M')$. Since $\mathrm{Ker}(M') = \mathrm{Ker}(A^{d,\dagger}) = \mathrm{Ker}(G)$, then $GVN'x = GVx = 0$. Now, it is easy to see that $x\in\Psi\cap\Psi_1 = \{0\}$, i.e., $x = 0$.

Furthermore, for any $x\in\Psi$, we have $A^{d,\dagger}x = 0$ and $A^{d,\dagger}Vx = 0$. Thus, $M'VN'x = M'V(I - AA^{d,\dagger})x = M'Vx = AA^{d,\dagger}Vx = 0$, i.e., $(M'VN')\Psi = 0$. Now we conclude that $\mathrm{rank}(M'VN') = \dim(\Psi_1)$.

Similarly, we define

$$\tilde\Psi = \mathrm{Ker}(G)\cap\mathrm{Ker}(G\tilde V),\qquad \tilde\Psi_1 = \mathrm{Ker}(G)\ominus\tilde\Psi,$$

then it follows that $\mathrm{rank}(M'\tilde VN') = \dim(\tilde\Psi_1)$.

Now we show that the nonsingular matrix $\psi_{00}I + \psi_{01}V$ bijectively maps $\Psi$ onto $\tilde\Psi$. Suppose that $x\in\Psi$; then $x, Vx\in\mathrm{Ker}(G)$. Hence, both $y = (\psi_{10}I + \psi_{11}V)x$ and $z = (\psi_{00}I + \psi_{01}V)x$ are contained in $\mathrm{Ker}(G)$. Since

$$y = (\psi_{10}I + \psi_{11}V)x = (\psi_{00}I + \psi_{01}V)^{-1}(\psi_{00}I + \psi_{01}V)(\psi_{10}I + \psi_{11}V)x = (\psi_{00}I + \psi_{01}V)^{-1}(\psi_{10}I + \psi_{11}V)(\psi_{00}I + \psi_{01}V)x = \tilde Vz,$$

then $z, \tilde Vz\in\mathrm{Ker}(G)$, which implies that $z\in\tilde\Psi$. Conversely, in a similar way, we can obtain that $(\psi_{00}I + \psi_{01}V)^{-1}z\in\Psi$ for $z\in\tilde\Psi$.

Therefore,

$$\mathrm{rank}(M'VN') = \dim(\Psi_1) = \dim[\mathrm{Ker}(G)] - \dim(\Psi) = \dim[\mathrm{Ker}(G)] - \dim(\tilde\Psi) = \dim(\tilde\Psi_1) = \mathrm{rank}(M'\tilde VN'),$$

which proves assertion (1).

Assertion (2) can be proved analogously.□
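Part 1 of Theorem 3.1 is illustrated numerically below (our own sketch): for an index-2 test matrix, a random $V$, and an admissible $\psi$, the ranks of $M'VN'$ and $M'\tilde VN'$ coincide.

```python
import numpy as np

rank, pinv, mpow = np.linalg.matrix_rank, np.linalg.pinv, np.linalg.matrix_power
rng = np.random.default_rng(6)

n, k = 5, 2
J = np.diag([2., 3., 0., 0., 0.]); J[2, 3] = 1.0        # Ind = 2
B = rng.standard_normal((n, n))
A = B @ J @ np.linalg.inv(B)

Ad = mpow(A, k) @ pinv(mpow(A, 2 * k + 1)) @ mpow(A, k)
X = Ad @ A @ pinv(A)                                    # A^{d,+}
Mp, Np = A @ X, np.eye(n) - A @ X                       # M' and N'

V = rng.standard_normal((n, n))
psi = np.array([[1., 0.2], [0.5, 1.]])                  # nonsingular 2x2 matrix
Vtil = np.linalg.solve(psi[0, 0] * np.eye(n) + psi[0, 1] * V,
                       psi[1, 0] * np.eye(n) + psi[1, 1] * V)   # f_psi(V)

print(rank(Mp @ V @ Np), rank(Mp @ Vtil @ Np))          # the two ranks agree
```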

Now we can generalize Corollary 2.1 to the general $\nabla_a(U,V)$ displacement.

Theorem 3.2

Let $a$ and $b$ be $2\times2$ nonsingular matrices and let $A\in\mathbb{C}^{n\times n}$ be of the form (1). Let $a = w^Tdz$ be a factorization as in Lemma 3.1, with $w_{00}I + w_{01}U$ and $z_{00}I + z_{01}V$ nonsingular, and suppose that

$$f_w(U) = S\begin{pmatrix}\tilde U_{11} & \tilde U_{12}\\ \tilde U_{21} & \tilde U_{22}\end{pmatrix}S^*,\qquad f_z(V) = S\begin{pmatrix}\tilde V_{11} & \tilde V_{12}\\ \tilde V_{21} & \tilde V_{22}\end{pmatrix}S^*.$$

If $\mathcal{R}(\tilde U_{11})\subseteq\mathcal{R}[(\Sigma K)^{p-1}]$ and $\mathrm{Ker}[(\Sigma K)^{p-1}]\subseteq\mathrm{Ker}(\tilde V_{11})$, then

$$\mathrm{rank}[\nabla_a(U,V)A^{d,\dagger}] \le \mathrm{rank}[\nabla_{a^T}(V,U)A] + \mathrm{rank}[\nabla_b(U,V)G],$$

where $G = A^pA^\dagger$ and $p = \mathrm{Ind}(A)$.

Proof

By Lemma 3.1, there exist $2\times2$ matrices $w, x, y, z$ such that $w_{00}I + w_{01}U$, $x_{00}I + x_{01}V$, $y_{00}I + y_{01}U$, and $z_{00}I + z_{01}V$ are nonsingular and $a = w^Tdz$, $b = x^Tdy$.

By Lemma 3.2,

$$\mathrm{rank}[\nabla_a(U,V)A^{d,\dagger}] = \mathrm{rank}[A^{d,\dagger}f_z(V) - f_w(U)A^{d,\dagger}]$$

and

$$\mathrm{rank}[\nabla_{a^T}(V,U)A] = \mathrm{rank}[Af_w(U) - f_z(V)A].$$

Moreover, it follows from Theorem 2.1 that

$$\mathrm{rank}[A^{d,\dagger}f_z(V) - f_w(U)A^{d,\dagger}] \le \mathrm{rank}[Af_w(U) - f_z(V)A] + \mathrm{rank}[M'f_z(V)N'] + \mathrm{rank}[Nf_w(U)M].$$

Hence, according to Corollary 2.1 and Theorems 2.2 and 3.1, we obtain

$$\mathrm{rank}[\nabla_a(U,V)A^{d,\dagger}] - \mathrm{rank}[\nabla_{a^T}(V,U)A] = \mathrm{rank}[f_w(U)A^{d,\dagger} - A^{d,\dagger}f_z(V)] - \mathrm{rank}[Af_w(U) - f_z(V)A] \le \mathrm{rank}[Nf_w(U)M] + \mathrm{rank}[M'f_z(V)N'] = \mathrm{rank}[Nf_y(U)M] + \mathrm{rank}[M'f_x(V)N'] \le \mathrm{rank}[f_y(U)G - Gf_x(V)] = \mathrm{rank}[\nabla_b(U,V)G].\qquad\Box$$

Corollary 3.1

Let $a$ and $b$ be $2\times2$ nonsingular matrices and let $A\in\mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A) = 1$. Then

(13) $\mathrm{rank}[\nabla_a(U,V)A^{\textcircled{\#}}] \le \mathrm{rank}[\nabla_{a^T}(V,U)A] + \mathrm{rank}[\nabla_b(U,V)G],$

where $G = AA^\dagger$.

Example 3.1

Denote by

$$U = \begin{pmatrix}0&0&0&0&0&0\\ 1&0&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&1&0\end{pmatrix},\qquad V = \begin{pmatrix}0&1&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1\\ 0&0&0&0&0&0\end{pmatrix}$$

the shift-down matrix and the shift-up matrix, respectively. Let

$$A = \begin{pmatrix}0&1&0&0&0&1\\ 0&0&1&0&0&0\\ 1&0&0&1&0&0\\ 0&1&0&0&1&0\\ 1&0&1&0&0&1\\ 0&1&0&1&0&0\end{pmatrix}$$

be a singular Toeplitz matrix with $\mathrm{Ind}(A) = 1$ and $\mathrm{rank}(A) = 5$.

The core inverse of $A$ is

$$A^{\textcircled{\#}} = \begin{pmatrix}\tfrac1{20} & \tfrac{11}{20} & \tfrac9{20} & \tfrac14 & \tfrac3{10} & \tfrac9{20}\\ \tfrac7{20} & \tfrac3{20} & \tfrac3{20} & \tfrac14 & \tfrac1{10} & \tfrac3{20}\\ \tfrac15 & \tfrac45 & \tfrac15 & 0 & \tfrac15 & \tfrac15\\ \tfrac3{20} & \tfrac7{20} & \tfrac7{20} & \tfrac14 & \tfrac1{10} & \tfrac{13}{20}\\ \tfrac7{20} & \tfrac3{20} & \tfrac3{20} & \tfrac34 & \tfrac1{10} & \tfrac3{20}\\ \tfrac9{20} & \tfrac1{20} & \tfrac1{20} & \tfrac14 & \tfrac3{10} & \tfrac1{20}\end{pmatrix}.$$

If we choose

$$a = b = \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}$$

in Corollary 3.1, then a direct computation shows that

$$\mathrm{rank}[\nabla_a(U,V)A^{\textcircled{\#}}] = 4,\qquad \mathrm{rank}[\nabla_{a^T}(V,U)A] = 2,\qquad \mathrm{rank}[\nabla_b(U,V)AA^\dagger] = 5,$$

i.e., the estimate (13) in Corollary 3.1 holds.
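The estimate (13) in this example can be re-checked numerically; the sketch below (our own illustration) forms the core inverse of $A$ from $A^\# = A(A^3)^\dagger A$ rather than using the closed-form matrix above, and evaluates the three displacement ranks.

```python
import numpy as np

rank, pinv = np.linalg.matrix_rank, np.linalg.pinv

A = np.array([[0., 1., 0., 0., 0., 1.],
              [0., 0., 1., 0., 0., 0.],
              [1., 0., 0., 1., 0., 0.],
              [0., 1., 0., 0., 1., 0.],
              [1., 0., 1., 0., 0., 1.],
              [0., 1., 0., 1., 0., 0.]])          # the Toeplitz matrix of Example 3.1
n = A.shape[0]
U = np.diag(np.ones(n - 1), -1)                   # shift-down matrix
V = U.T                                           # shift-up matrix

Acore = A @ pinv(A @ A @ A) @ A @ A @ pinv(A)     # A^# A A^+, with A^# = A (A^3)^+ A
G = A @ pinv(A)

lhs = rank(Acore - U @ Acore @ V)                 # nabla_a(U,V) of the core inverse, a = diag(1,-1)
r1 = rank(A - V @ A @ U)                          # nabla_{a^T}(V,U) A
r2 = rank(G - U @ G @ V)                          # nabla_b(U,V) (A A^+)
print(lhs, r1, r2, lhs <= r1 + r2)
```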

4 Computation of the displacement

In this section, we study the explicit form of the displacement, and we give an expression for the displacement of the DMP inverse of A through DMP inverse solutions of some special linear systems of equations. For simplicity, we only consider the Sylvester displacement, and the starting point is (3).

(1) To compute $A^{d,\dagger}(AU - VA)A^{d,\dagger}$, first find a full-rank decomposition of $AU - VA$,

$$AU - VA = GF^* = \sum_{i=1}^r g_if_i^*,$$

where $G = [g_1, g_2, \ldots, g_r]\in\mathbb{C}^{n\times r}$ and $F = [f_1, f_2, \ldots, f_r]\in\mathbb{C}^{n\times r}$ are of rank $r$, and then compute the DMP inverse solutions $A^{d,\dagger}g_i$ and $f_i^*A^{d,\dagger}$.
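A convenient full-rank decomposition of $AU - VA$ is obtained from its truncated singular value decomposition; the products $A^{d,\dagger}g_i$ and $f_i^*A^{d,\dagger}$ then assemble $A^{d,\dagger}(AU - VA)A^{d,\dagger}$. The sketch below is our own illustration of this step (the test matrix and the SVD-based factorization are our choices).

```python
import numpy as np

pinv, mpow = np.linalg.pinv, np.linalg.matrix_power
rng = np.random.default_rng(7)

n, k = 5, 2
J = np.diag([1., 2., 0., 0., 0.]); J[2, 3] = 1.0
Bs = rng.standard_normal((n, n))
A = Bs @ J @ np.linalg.inv(Bs)                                        # Ind(A) = 2
X = mpow(A, k) @ pinv(mpow(A, 2 * k + 1)) @ mpow(A, k) @ A @ pinv(A)  # A^{d,+}

U, V = rng.standard_normal((n, n)), rng.standard_normal((n, n))
D = A @ U - V @ A

# full-rank decomposition D = Gfac Ffac^*  (one possible choice, via the SVD)
W, s, Zh = np.linalg.svd(D)
r = int(np.sum(s > 1e-10))
Gfac = W[:, :r] * s[:r]                   # columns g_1, ..., g_r
Ffac = Zh[:r, :].conj().T                 # columns f_1, ..., f_r;  D = Gfac @ Ffac^*

# X (AU - VA) X assembled from the 2r DMP inverse solutions X g_i and f_i^* X
left = X @ Gfac                           # columns X g_i
right = Ffac.conj().T @ X                 # rows f_i^* X
print(np.allclose(X @ D @ X, left @ right))
```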

(2) Next we show how to compute $A^{d,\dagger}V(I - AA^{d,\dagger})$. We start with a full-rank decomposition

$$A^{d,\dagger}V(I - AA^{d,\dagger}) = MN.$$

Now we determine the kernels of $A^pA^\dagger$ ($p = \mathrm{Ind}(A)$) and of the matrix

$$C = \begin{pmatrix}A^pA^\dagger\\ N\end{pmatrix},$$

and a set of vectors $w_1, w_2, \ldots, w_p$ forming an orthogonal basis for the orthogonal complement of $\mathrm{Ker}(C)$ in $\mathrm{Ker}(A^pA^\dagger)$. We generate the matrix $W = [w_1, w_2, \ldots, w_p]$. Then we have

$$\mathrm{Ker}(A^pA^\dagger) = \mathrm{Ker}(C)\oplus\mathcal{R}(W).$$

The matrix $R = WW^*$ is an orthogonal projection onto $\mathcal{R}(W)$. We have

$$R(I - AA^{d,\dagger}) = WW^*(I - AA^{d,\dagger}) = WW^* = R$$

and

$$A^{d,\dagger}V(I - R)(I - AA^{d,\dagger}) = 0.$$

Hence,

$$A^{d,\dagger}V(I - AA^{d,\dagger}) = A^{d,\dagger}VR = A^{d,\dagger}VWW^*.$$

(3) We proceed analogously for $(I - A^{d,\dagger}A)UA^{d,\dagger}$. We obtain

$$(I - A^{d,\dagger}A)UA^{d,\dagger} = SUA^{d,\dagger} = ZZ^*UA^{d,\dagger},$$

where $S$ is the orthogonal projection onto the orthogonal complement of $\mathrm{Ker}(C)$ in $\mathrm{Ker}(A^*)$, with $C$ defined by $C = [A^pA^\dagger, X]$, and $Z$ is a matrix whose columns form an orthogonal basis of $\mathcal{R}(S)$.

To compute $A^{d,\dagger}V - UA^{d,\dagger}$, one has to find the $2r$ DMP inverse solutions $A^{d,\dagger}g_i$ and $f_i^*A^{d,\dagger}$, where $r = \mathrm{rank}(AU - VA)$, and the $p + q$ DMP inverse solutions $A^{d,\dagger}Vw_i$ ($1\le i\le p$) and $z_j^*UA^{d,\dagger}$ ($1\le j\le q$), where $p + q \le \mathrm{rank}(A^pA^\dagger V - UA^pA^\dagger)$.

5 Concluding remark

In this article, we studied the displacement structure of the DMP inverse. An upper bound for the Sylvester displacement rank of the DMP inverse was given under some restrictions. An example was presented to show that these restrictions cannot be removed. Estimations for the generalized displacement rank of the DMP inverse were also investigated. As corollaries, estimations for the Sylvester displacement rank and the generalized displacement rank of the core inverse were obtained without any restrictions.

Furthermore, the theorems obtained before can be applied to many classical structured matrices, such as Toeplitz matrices, Hankel matrices, and Cauchy matrices. For simplicity, we only consider close-to-Toeplitz matrices.

Close-to-Toeplitz matrices are a class of matrices whose $(U,V)$-displacement ranks are small compared with their sizes when $U$ and $V$ are (forward or backward) (block) shift matrices; the class includes Toeplitz matrices, Hankel matrices, more general block matrices with Toeplitz or Hankel blocks, and sums, products, and inverses of such matrices.

Let $U$ be the shift-down matrix

$$Z = \begin{pmatrix}0 & & & \\ 1 & 0 & & \\ & \ddots & \ddots & \\ & & 1 & 0\end{pmatrix}$$

and let $V$ be the shift-up matrix $Z^*$.

If $\mathcal{R}(\tilde U_{11})\subseteq\mathcal{R}[(\Sigma K)^{p-1}]$ and $\mathrm{Ker}[(\Sigma K)^{p-1}]\subseteq\mathrm{Ker}(\tilde V_{11})$, then choosing

$$a = b = \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}$$

in Theorem 3.2 gives

(14) $\mathrm{rank}(A^{d,\dagger} - ZA^{d,\dagger}Z^*) \le r_+ + r_{A^pA^\dagger},$

where $r_+ = \mathrm{rank}(A - Z^*AZ)$, $r_{A^pA^\dagger} = \mathrm{rank}(A^pA^\dagger - ZA^pA^\dagger Z^*)$, and $p = \mathrm{Ind}(A)$.

If the estimate in (14) is small for a close-to-Toeplitz matrix, then it leads to the famous Gohberg-Semencul-type representation of $A^{d,\dagger}$:

$$A^{d,\dagger} = \sum_{k=1}^r L_kU_k,$$

where $r$ is the displacement rank of $A^{d,\dagger}$, i.e., $r_+ + r_{A^pA^\dagger}$ in (14).

As a corollary of (14), for the core inverse, we have

$$\mathrm{rank}(A^{\textcircled{\#}} - ZA^{\textcircled{\#}}Z^*) \le r_+ + r_{AA^\dagger},$$

where $r_+ = \mathrm{rank}(A - Z^*AZ)$ and $r_{AA^\dagger} = \mathrm{rank}(AA^\dagger - ZAA^\dagger Z^*)$.

Acknowledgements

The authors would like to thank the referees for their valuable comments and suggestions.

  1. Funding information: This research was supported by the National Natural Science Foundation of China (Grant No. 12261043), the Program of Qingjiang Excellent Young Talents, and Jiangxi University of Science and Technology (JXUSTQJYX2017007).

  2. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Conflict of interest: The authors state no conflict of interest.

  4. Data availability statement: Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

References

[1] K. Zuo, D. Cvetković-Ilić, and Y. Cheng, Different characterizations of DMP-inverse of matrices, Linear Multilinear Algebra 70 (2022), no. 3, 411–418, https://doi.org/10.1080/03081087.2020.1729084.

[2] J. Cai and Y. Wei, Displacement structure of weighted pseudoinverses, Appl. Math. Comput. 153 (2004), no. 2, 317–335, https://doi.org/10.1016/S0096-3003(03)00633-7.

[3] H. Diao, Y. Wei, and S. Qiao, Displacement rank of the Drazin inverse, J. Comput. Appl. Math. 167 (2004), no. 1, 147–161, https://doi.org/10.1016/j.cam.2003.09.050.

[4] G. Heinig and F. Hellinger, Displacement structure of pseudoinverses, Linear Algebra Appl. 197/198 (1994), 623–649, https://doi.org/10.1016/0024-3795(94)90506-1.

[5] S. Li, Displacement structure of the generalized inverse $A_{T,S}^{(2)}$, Appl. Math. Comput. 156 (2004), no. 1, 33–40, https://doi.org/10.1016/j.amc.2003.07.002.

[6] H. Ma, Displacement structure of the core inverse, Linear Multilinear Algebra 70 (2022), no. 2, 203–214, https://doi.org/10.1080/03081087.2020.1716677.

[7] X. Meng, H. Wang, and A. Liu, Displacement structure of core-EP inverses, J. Wuhan Univ. Natur. Sci. Ed. 25 (2020), no. 6, 483–488.

[8] M. Qin and G. Wang, Displacement structure of $W$-weighted Drazin inverse $A_{d,w}$ and its perturbation, Appl. Math. Comput. 162 (2005), no. 1, 403–419, https://doi.org/10.1016/j.amc.2003.12.140.

[9] H. Wang and X. Liu, Displacement structure of $M$-group inverse, Math. Numer. Sin. 31 (2009), no. 3, 225–230, https://doi.org/10.12286/jssx.2009.3.225.

[10] Y. Wei and M. K. Ng, Displacement structure of group inverses, Numer. Linear Algebra Appl. 12 (2005), no. 2–3, 103–110, https://doi.org/10.1002/nla.405.

[11] G. Wang, Y. Wei, and S. Qiao, Generalized Inverses: Theory and Computations, Developments in Mathematics, vol. 53, Springer, Singapore; Science Press, Beijing, 2018, https://doi.org/10.1007/978-981-13-0146-9.

[12] O. M. Baksalary and G. Trenkler, Core inverse of matrices, Linear Multilinear Algebra 58 (2010), no. 6, 681–697, https://doi.org/10.1080/03081080902778222.

[13] P. K. Manjunatha and K. S. Mohana, Core-EP inverse, Linear Multilinear Algebra 62 (2014), no. 6, 792–802, https://doi.org/10.1080/03081087.2013.791690.

[14] O. M. Baksalary and G. Trenkler, On a generalized core inverse, Appl. Math. Comput. 236 (2014), 450–457, https://doi.org/10.1016/j.amc.2014.03.048.

[15] S. B. Malik and N. Thome, On a new generalized inverse for matrices of an arbitrary index, Appl. Math. Comput. 226 (2014), 575–580, https://doi.org/10.1016/j.amc.2013.10.060.

[16] H. Ma, Characterizations and representations for the CMP inverse and its application, Linear Multilinear Algebra (2021), https://doi.org/10.1080/03081087.2021.1907275.

[17] C. Deng and A. Yu, Relationships between DMP relation and some partial orders, Appl. Math. Comput. 266 (2015), 41–53, https://doi.org/10.1016/j.amc.2015.05.023.

[18] X. Liu and N. Cai, High-order iterative methods for the DMP inverse, J. Math. 2018 (2018), Article ID 8175935, 1–6, https://doi.org/10.1155/2018/8175935.

[19] H. Ma, X. Gao, and P. Stanimirović, Characterizations, iterative method, sign pattern and perturbation analysis for the DMP inverse with its applications, Appl. Math. Comput. 378 (2020), 125196, https://doi.org/10.1016/j.amc.2020.125196.

[20] M. Zhou and J. Chen, Integral representations of two generalized core inverses, Appl. Math. Comput. 333 (2018), 187–193, https://doi.org/10.1016/j.amc.2018.03.085.

[21] L. Meng, The DMP inverse for rectangular matrices, Filomat 31 (2017), no. 19, 6015–6019, https://doi.org/10.2298/FIL1719015M.

[22] R. Hartwig and K. Spindelböck, Matrices for which $A^*$ and $A^\dagger$ commute, Linear Multilinear Algebra 14 (1983), no. 3, 241–256, https://doi.org/10.1080/03081088308817561.

[23] L. A. Sakhnovich, On similarity of operators, Sibirsk. Mat. Zh. 14 (1972), no. 2, 868–863 (in Russian), https://doi.org/10.1007/BF00971053.

[24] T. Kailath and A. H. Sayed, Displacement structure: theory and applications, SIAM Rev. 37 (1995), no. 3, 297–386, https://doi.org/10.1137/1037082.

[25] G. Marsaglia and G. P. H. Styan, Equalities and inequalities for ranks of matrices, Linear Multilinear Algebra 2 (1974), no. 3, 269–292, https://doi.org/10.1080/03081087408817070.

Received: 2021-07-07
Revised: 2022-08-30
Accepted: 2022-09-13
Published Online: 2022-10-13

© 2022 Jin Zhong and Hong Yang, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.