Open Access. Published by De Gruyter, August 4, 2022 (CC BY 4.0 license).

A study of generalized hypergeometric matrix functions via the two-parameter Mittag–Leffler matrix function

Shilpi Jain, Rahul Goyal, Georgia Irina Oros, Praveen Agarwal and Shaher Momani
From the journal Open Physics

Abstract

The main aim of this article is to study new generalizations of the Gauss hypergeometric matrix function and the confluent hypergeometric matrix function by using the two-parameter Mittag–Leffler matrix function. In particular, we investigate certain important properties of these extended matrix functions, such as integral representations, differentiation formulas, the beta matrix transform, and the Laplace transform. Furthermore, we introduce an extension of the Jacobi matrix orthogonal polynomial by using our generalized Gauss hypergeometric matrix function, which is very important in scattering theory and inverse scattering theory.

1 Introduction and preliminaries

A wide range of special functions in the applied sciences are defined via improper integrals or infinite series. During the last decades, several special functions have become essential tools for scientists and engineers due to their applications in mathematical physics, engineering, and Lie theory. This inspires the study of extensions of the special functions. In the last few years, many extensions of the gamma function, beta function, and Gauss hypergeometric function have been studied by many researchers [4,5,6].

During the last two decades, the Mittag–Leffler function has come into prominence, after about nine decades since its discovery by the Swedish mathematician G. M. Mittag–Leffler, due to the vast potential of its applications in solving problems of the physical, biological, engineering, and earth sciences, and in the world of fractional calculus. Many researchers [7,8,9,10,11] are continuously working on Mittag–Leffler functions and their properties. Just as the exponential function naturally arises out of the solution of integer-order differential equations, the Mittag–Leffler function plays an analogous role in the solution of noninteger-order differential equations. In fact, the exponential function itself is a very special form, one of an infinite set of these seemingly ubiquitous functions. The Mittag–Leffler function is an entire function and contains several well-known special functions as particular cases. That is why the Mittag–Leffler function is very convenient for use as a kernel in integral equations.

In 1998, Jódar and Cortés [12,13] introduced special matrix functions. These functions appear in the solutions of physical problems, and their applications have also increased in statistics, Lie group theory, and differential equations. In recent years, matrix extensions of known classical special functions have become an important tool of research.

On the other hand, the theory of orthogonal polynomials and special functions is of intrinsic interest to many parts of mathematics. Moreover, it can be used to explain many physical and chemical phenomena. Recently, the techniques of scattering theory and inverse scattering theory have been used to study the properties of orthogonal polynomials on a segment of the real line [1] and to investigate the properties of matrix orthogonal polynomials [2].

Two highly developed theories of mathematical physics are those of orthogonal polynomials and potential scattering. While much of the work on orthogonal polynomials predates that on scattering theory, the latter has been much more intensively investigated in the last 25 years. In 1973, Case and Kac [3] observed that the theory of orthogonal polynomials sheds considerable light on the inverse problem of scattering theory and showed that the methods of scattering theory form a unified basis for obtaining the various properties of orthogonal polynomials.

Motivated by the facts above, we introduce here a new generalization of the hypergeometric matrix functions via the two-parameter Mittag–Leffler matrix function and study important properties of these functions. Finally, we present an extension of the Jacobi matrix orthogonal polynomial by using our generalized hypergeometric matrix functions.

To discuss our main results, we need the definitions and results of some special matrix functions.

Throughout the article, let $I$ and $O$ denote the identity matrix and the zero matrix in $\mathbb{C}^{r\times r}$, respectively, where $\mathbb{C}^{r\times r}$ is the vector space of $r$-square matrices with complex entries. For a matrix $B \in \mathbb{C}^{r\times r}$, the spectrum $\sigma(B)$ is the set of all eigenvalues of $B$. A matrix $B \in \mathbb{C}^{r\times r}$ is positive stable if $\Re(\mu) > 0$ for all $\mu \in \sigma(B)$.

The gamma matrix function is defined as [12]:

(1.1) $\Gamma(B) = \int_{0}^{\infty} e^{-t}\, t^{B-I}\,\mathrm{d}t,$

where $B$ is a positive stable matrix in $\mathbb{C}^{r\times r}$ and $t^{B-I} = \exp((B-I)\ln t)$.

Also, if $B + kI$ is an invertible matrix for every integer $k \geq 0$, then the reciprocal gamma matrix function is defined as [12]:

(1.2) $\Gamma^{-1}(B) = B(B+I)\cdots(B+(n-1)I)\,\Gamma^{-1}(B+nI), \quad n \geq 1.$

The Pochhammer matrix symbol is defined as [13]:

(1.3) $(B)_n = \begin{cases} I, & \text{if } n = 0, \\ B(B+I)\cdots(B+(n-1)I), & \text{if } n \geq 1. \end{cases}$

From the aforementioned definition of the Pochhammer matrix symbol and Eq. (1.2), we observe that

(1.4) $(B)_n = \Gamma^{-1}(B)\,\Gamma(B+nI), \quad n \geq 1.$
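
For readers who wish to experiment numerically, the following is a minimal sketch (ours, not part of the paper) of the Pochhammer matrix symbol (1.3) in Python with NumPy; the helper name `matrix_pochhammer` is our own choice.

```python
import numpy as np

def matrix_pochhammer(B: np.ndarray, n: int) -> np.ndarray:
    """Pochhammer matrix symbol (1.3): (B)_0 = I, (B)_n = B(B+I)...(B+(n-1)I)."""
    r = B.shape[0]
    result = np.eye(r)
    for k in range(n):
        result = result @ (B + k * np.eye(r))
    return result

# Quick check: (B)_2 = B(B + I) for a 2x2 positive stable matrix B.
B = np.array([[2.0, 1.0],
              [0.0, 3.0]])
assert np.allclose(matrix_pochhammer(B, 2), B @ (B + np.eye(2)))
```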

The beta matrix function is defined as [12]:

(1.5) $B(P,Q) = \int_{0}^{1} t^{P-I}\,(1-t)^{Q-I}\,\mathrm{d}t,$

where $P$ and $Q$ are positive stable matrices in $\mathbb{C}^{r\times r}$.

Also, if $P$, $Q$, and $P+Q$ are positive stable matrices in $\mathbb{C}^{r\times r}$ and $P$ and $Q$ commute, i.e., $PQ = QP$, then

(1.6) $B(P,Q) = \Gamma(P)\,\Gamma(Q)\,\Gamma^{-1}(P+Q).$

In continuation, Jódar and Cortés [13] defined the Gauss hypergeometric matrix function as follows:

(1.7) ${}_2F_1(P,Q,R;z) = F(P,Q,R;z) = \sum_{k=0}^{\infty} (P)_k\,(Q)_k\,[(R)_k]^{-1}\,\frac{z^k}{k!},$

where $P$, $Q$, $R$ are matrices in $\mathbb{C}^{r\times r}$ such that $R+nI$ is invertible for every integer $n \geq 0$.

The series (1.7) converges absolutely for $|z| < 1$, and also for $|z| = 1$ if $\alpha(P) + \alpha(Q) < \beta(R)$, where $\alpha(P) = \max\{\Re(z) : z \in \sigma(P)\}$ and $\beta(P) = \min\{\Re(z) : z \in \sigma(P)\} = -\alpha(-P)$.
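
As an illustration of how the series (1.7) can be evaluated in practice, here is a hedged sketch (ours, not from refs [12,13]) that simply truncates the series; the truncation length `terms` is an arbitrary illustrative choice, and the code assumes the convergence conditions above hold.

```python
import numpy as np

def gauss_hypergeometric_matrix(P, Q, R, z, terms=60):
    """Truncated series (1.7): sum_k (P)_k (Q)_k [(R)_k]^{-1} z^k / k!."""
    r = P.shape[0]
    I = np.eye(r)
    Pk, Qk, Rk = I.copy(), I.copy(), I.copy()   # (P)_0 = (Q)_0 = (R)_0 = I
    total = np.zeros((r, r))
    factorial = 1.0
    for k in range(terms):
        total += Pk @ Qk @ np.linalg.inv(Rk) * (z ** k / factorial)
        Pk, Qk, Rk = Pk @ (P + k * I), Qk @ (Q + k * I), Rk @ (R + k * I)
        factorial *= k + 1
    return total

# Scalar (1x1) sanity check against 2F1(1,1;2;z) = -log(1-z)/z.
z = 0.25
val = gauss_hypergeometric_matrix(np.array([[1.0]]), np.array([[1.0]]),
                                  np.array([[2.0]]), z)
assert np.allclose(val[0, 0], -np.log1p(-z) / z)
```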

The confluent hypergeometric matrix function (Kummer's matrix function) is defined as [14,15]:

(1.8) ${}_1F_1(Q,R;z) = \Phi(Q,R;z) = \sum_{k=0}^{\infty} (Q)_k\,[(R)_k]^{-1}\,\frac{z^k}{k!},$

where $Q$, $R$ are matrices in $\mathbb{C}^{r\times r}$ such that $R+nI$ is invertible for every integer $n \geq 0$.

In 2013, Çekim [16], inspired by these matrix generalizations, generalized the Gauss hypergeometric and confluent hypergeometric matrix functions by using the confluent hypergeometric matrix function.

The generalized Gauss hypergeometric matrix function is defined as [16]:

(1.9) $F_r^{(X,Y)}(P,Q,R;z) = \sum_{k=0}^{\infty} (P)_k\, B_r^{(X,Y)}(Q+kI, R-Q)\,[B(Q,R-Q)]^{-1}\,\frac{z^k}{k!}.$

The generalized confluent hypergeometric matrix function is defined as [16]:

(1.10) ${}_1F_1^{(X,Y,r)}(Q,R;z) = \sum_{k=0}^{\infty} B_r^{(X,Y)}(Q+kI, R-Q)\,[B(Q,R-Q)]^{-1}\,\frac{z^k}{k!},$

where $X$, $Y$, $P$, $Q$, and $R$ are matrices in $\mathbb{C}^{r\times r}$ such that $X$, $Y$, $Q$, $R-Q$, and $R$ are positive stable and $QR = RQ$, $r$ is a complex number with $\Re(r) > 0$, and $B_r^{(X,Y)}(P,Q)$ is the extended beta matrix function defined in ref. [16].

In these extensions, Çekim used the confluent hypergeometric matrix function as a regularizer, introducing it as a kernel in the definitions of the Gauss hypergeometric and confluent hypergeometric matrix functions, and studied several properties of the resulting extended matrix functions.

In 2018, Abdalla and Bakhet [17] extended the Gauss hypergeometric and confluent hypergeometric matrix functions by using the exponential matrix function.

The extended Gauss hypergeometric matrix function is defined as [17]:

(1.11) $F^{(X)}(P,Q,R;z) = \sum_{k=0}^{\infty} (P)_k\, B^{(X)}(Q+kI, R-Q)\,[B(Q,R-Q)]^{-1}\,\frac{z^k}{k!}.$

The extended confluent hypergeometric matrix function is defined as follows [17]:

(1.12) $\Phi^{(X)}(Q,R;z) = \sum_{k=0}^{\infty} B^{(X)}(Q+kI, R-Q)\,[B(Q,R-Q)]^{-1}\,\frac{z^k}{k!},$

where $P$, $Q$, and $R$ are matrices in $\mathbb{C}^{r\times r}$ such that $X$, $Q$, $R-Q$, and $R$ are positive stable and $QR = RQ$, and $B^{(X)}(P,Q)$ is the extended beta matrix function defined in ref. [20].

In these extensions, Abdalla and Bakhet used the exponential matrix function as a regularizer, introducing it as a kernel in the definitions of the Gauss hypergeometric and confluent hypergeometric matrix functions, and investigated many properties of the resulting extended matrix functions.

Subsequently, Verma and Dwivedi [18] defined new extensions of the Gauss hypergeometric and confluent hypergeometric matrix functions by using the confluent hypergeometric matrix function.

The new extended Gauss hypergeometric matrix function is defined as follows [18]:

(1.13) ${}_2F_1^{(X,Y,W)}(P,Q,R;z) = \sum_{k=0}^{\infty} (P)_k\, B_W^{(X,Y)}(Q+kI, R-Q)\,[B(Q,R-Q)]^{-1}\,\frac{z^k}{k!}.$

The new confluent hypergeometric matrix function is defined as follows [18]:

(1.14) ${}_1F_1^{(X,Y,W)}(Q,R;z) = \sum_{k=0}^{\infty} B_W^{(X,Y)}(Q+kI, R-Q)\,[B(Q,R-Q)]^{-1}\,\frac{z^k}{k!},$

where $W$, $X$, $Y$, $P$, $Q$, and $R$ are matrices in $\mathbb{C}^{r\times r}$ such that $X$, $Y$, $Q$, $R-Q$, and $R$ are positive stable and $QR = RQ$, and $B_W^{(X,Y)}(P,Q)$ is the extended beta matrix function defined in ref. [18].

In these extensions, Verma and Dwivedi used the confluent hypergeometric matrix function as a regularizer, introducing it as a kernel in the definitions of the Gauss hypergeometric and confluent hypergeometric matrix functions, and studied some properties of the resulting extended matrix functions.

Very recently, Goyal et al. [19] introduced an extension of the beta matrix function using the Wiman matrix function and studied various properties and relations of that function.

Let $P$, $Q$, and $X$ be positive stable matrices in $\mathbb{C}^{r\times r}$. Then the extended beta matrix function is defined as [19]:

(1.15) $B_{(\alpha_1,\alpha_2)}^{(X)}(P,Q) = \int_{0}^{1} t^{P-I}\,(1-t)^{Q-I}\, E_{(\alpha_1,\alpha_2)}\!\left(\frac{-X}{t(1-t)}\right)\mathrm{d}t,$

where $\Re(\alpha_1), \Re(\alpha_2) > 0$ and $E_{(\alpha_1,\alpha_2)}(B)$ is the two-parameter Mittag–Leffler matrix function defined in ref. [21].

The two-parameter Mittag–Leffler matrix function is defined as [21]:

(1.16) $E_{(r_1,r_2)}(B) = \sum_{n=0}^{\infty} \frac{B^n}{\Gamma(r_1 n + r_2)},$

where $B$ is a positive stable matrix in $\mathbb{C}^{r\times r}$ and $\Re(r_1), \Re(r_2) > 0$.
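
The following is a minimal numerical sketch of (1.16) (our illustration, not from ref. [21]) obtained by simply truncating the series. For matrix arguments of large norm this naive truncation loses accuracy, and the dedicated algorithm of ref. [21] should be used instead; the helper name `mittag_leffler_matrix` is ours.

```python
import numpy as np
from scipy.special import gamma
from scipy.linalg import expm

def mittag_leffler_matrix(B, r1, r2, terms=80):
    """Truncated series (1.16): E_{(r1,r2)}(B) = sum_n B^n / Gamma(r1*n + r2)."""
    out = np.zeros_like(B, dtype=float)
    Bn = np.eye(B.shape[0])                    # B^0
    for n in range(terms):
        out += Bn / gamma(r1 * n + r2)
        Bn = Bn @ B
    return out

# Sanity check: for r1 = r2 = 1 the series is sum_n B^n / n! = exp(B).
B = np.array([[0.5, 0.2],
              [0.0, 0.3]])
assert np.allclose(mittag_leffler_matrix(B, 1.0, 1.0), expm(B))
```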

In refs [16,18], Çekim and Verma and Dwivedi used the confluent hypergeometric matrix function to regularize the Gauss hypergeometric and confluent hypergeometric matrix functions, while Abdalla and Bakhet [17] used the exponential matrix function for the same purpose. These studies highlighted important aspects of special matrix functions and inspired many researchers to work in this field. Motivated by this recent work [16,17,18] on generalizations of matrix functions via different special matrix functions, we introduce here new extensions of the Gauss hypergeometric and confluent hypergeometric matrix functions based on a Mittag–Leffler-type matrix function.

2 Main results

In this section, we introduce new extensions of the Gauss hypergeometric and confluent hypergeometric matrix functions by using the two-parameter Mittag–Leffler matrix function as a kernel in the integral representations of these functions, and we also discuss some important basic properties of these extended matrix functions.

Definition 2.1

Let $X$, $P$, $Q$, $R$, and $R-Q$ be matrices in $\mathbb{C}^{r\times r}$ such that $R+kI$ is invertible for every integer $k \geq 0$. Then the new generalization of the Gauss hypergeometric matrix function is defined as follows:

(2.1) $F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z) = \sum_{k=0}^{\infty} (P)_k\, B_{(\alpha_1,\alpha_2)}^{(X)}(Q+kI, R-Q)\,[B(Q,R-Q)]^{-1}\,\frac{z^k}{k!},$

where $\Re(\alpha_1), \Re(\alpha_2) > 0$ and $E_{(\alpha_1,\alpha_2)}(B)$ is the two-parameter Mittag–Leffler matrix function defined in (1.16).

Definition 2.2

Let $X$, $Q$, $R$, and $R-Q$ be matrices in $\mathbb{C}^{r\times r}$ such that $R+kI$ is invertible for every integer $k \geq 0$. Then the new generalization of the confluent hypergeometric matrix function is defined as follows:

(2.2) $\Phi_{(\alpha_1,\alpha_2)}^{(X)}(Q,R;z) = \sum_{k=0}^{\infty} B_{(\alpha_1,\alpha_2)}^{(X)}(Q+kI, R-Q)\,[B(Q,R-Q)]^{-1}\,\frac{z^k}{k!},$

where $\Re(\alpha_1), \Re(\alpha_2) > 0$ and $E_{(\alpha_1,\alpha_2)}(B)$ is the two-parameter Mittag–Leffler matrix function defined in (1.16).
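
To make Definitions 2.1 and 2.2 concrete, here is a small sketch (ours, with hedged assumptions) that assembles the truncated series (2.1) and (2.2) once the extended beta matrices $B_{(\alpha_1,\alpha_2)}^{(X)}(Q+kI, R-Q)$ and $[B(Q,R-Q)]^{-1}$ are available; how those matrices are produced (e.g., by quadrature of (1.15)) is deliberately left to the caller. The function names are our own.

```python
import math
import numpy as np

def extended_gauss_series(P, ext_beta_list, beta_inv, z):
    """Truncated series (2.1).

    ext_beta_list[k] is assumed to hold B^{(X)}_{(a1,a2)}(Q + kI, R - Q);
    beta_inv is [B(Q, R - Q)]^{-1}.  Both must be supplied by the caller.
    """
    r = P.shape[0]
    I = np.eye(r)
    Pk = I.copy()                                # (P)_0 = I
    total = np.zeros((r, r))
    for k, ext_beta_k in enumerate(ext_beta_list):
        total += Pk @ ext_beta_k @ beta_inv * (z ** k / math.factorial(k))
        Pk = Pk @ (P + k * I)
    return total

def extended_confluent_series(ext_beta_list, beta_inv, z):
    """Truncated series (2.2); same inputs as above, without the (P)_k factor."""
    total = np.zeros_like(beta_inv)
    for k, ext_beta_k in enumerate(ext_beta_list):
        total += ext_beta_k @ beta_inv * (z ** k / math.factorial(k))
    return total
```

A concrete way to produce the extended beta matrices by Gauss–Legendre quadrature of (1.15), in the special case $\alpha_1=\alpha_2=1$, is shown in the sketch after the proof of Theorem 2.3.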

Remark

(i) If we set $\alpha_1 = \alpha_2 = 1$ in Eqs. (2.1) and (2.2), then we recover the extended Gauss hypergeometric matrix function (1.11) and the extended confluent hypergeometric matrix function (1.12), respectively:

(2.3) $F_{(1,1)}^{(X)}(P,Q,R;z) = F^{(X)}(P,Q,R;z)$

and

(2.4) $\Phi_{(1,1)}^{(X)}(Q,R;z) = \Phi^{(X)}(Q,R;z).$

(ii) If we take $\alpha_1 = \alpha_2 = 1$ and $X = O$ in Eqs. (2.1) and (2.2), then we recover the Gauss hypergeometric matrix function (1.7) and the confluent hypergeometric matrix function (1.8), respectively:

(2.5) $F_{(1,1)}^{(O)}(P,Q,R;z) = F(P,Q,R;z)$

and

(2.6) $\Phi_{(1,1)}^{(O)}(Q,R;z) = \Phi(Q,R;z).$

Theorem 2.3

For positive stable matrices $P$, $Q$, $R$, and $X$ in $\mathbb{C}^{r\times r}$, the new generalization of the Gauss hypergeometric matrix function has the following integral representation:

(2.7) $F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z) = \int_{0}^{1} (1-zt)^{-P}\, t^{Q-I}\,(1-t)^{R-Q-I}\, E_{(\alpha_1,\alpha_2)}\!\left(\frac{-X}{t(1-t)}\right)\mathrm{d}t\;[B(Q,R-Q)]^{-1},$

where $\Re(\alpha_1), \Re(\alpha_2) > 0$, $|z| < 1$, and $E_{(\alpha_1,\alpha_2)}(B)$ is the two-parameter Mittag–Leffler matrix function defined in ref. [21].

Proof

Using the known relation (1.15):

(2.8) $B_{(\alpha_1,\alpha_2)}^{(X)}(P,Q) = \int_{0}^{1} t^{P-I}\,(1-t)^{Q-I}\, E_{(\alpha_1,\alpha_2)}\!\left(\frac{-X}{t(1-t)}\right)\mathrm{d}t,$

in (2.1), we have:

(2.9) $F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z) = \sum_{k=0}^{\infty} (P)_k \int_{0}^{1} t^{Q+kI-I}\,(1-t)^{R-Q-I}\, E_{(\alpha_1,\alpha_2)}\!\left(\frac{-X}{t(1-t)}\right)\mathrm{d}t\,[B(Q,R-Q)]^{-1}\,\frac{z^k}{k!} = \sum_{k=0}^{\infty} (P)_k \int_{0}^{1} t^{Q-I}\,(1-t)^{R-Q-I}\, E_{(\alpha_1,\alpha_2)}\!\left(\frac{-X}{t(1-t)}\right) t^k\,\mathrm{d}t\,[B(Q,R-Q)]^{-1}\,\frac{z^k}{k!}.$

On changing the order of integration and summation, we obtain:

(2.10) $F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z) = \int_{0}^{1} t^{Q-I}\,(1-t)^{R-Q-I}\, E_{(\alpha_1,\alpha_2)}\!\left(\frac{-X}{t(1-t)}\right) \sum_{k=0}^{\infty} (P)_k\,\frac{(zt)^k}{k!}\,\mathrm{d}t\,[B(Q,R-Q)]^{-1}.$

Since

(2.11) $(1-zt)^{-P} = \sum_{k=0}^{\infty} \frac{(P)_k}{k!}\,(zt)^k, \quad |zt| < 1,$

the last expression becomes the desired result:

(2.12) $F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z) = \int_{0}^{1} (1-zt)^{-P}\, t^{Q-I}\,(1-t)^{R-Q-I}\, E_{(\alpha_1,\alpha_2)}\!\left(\frac{-X}{t(1-t)}\right)\mathrm{d}t\,[B(Q,R-Q)]^{-1}.$

Hence, the proof of Theorem 2.3 is completed.□
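
As an informal cross-check of Theorem 2.3 (ours, not part of the paper), the sketch below compares the series (2.1) with the integral representation (2.7) in the special case $\alpha_1=\alpha_2=1$, where the Mittag–Leffler kernel reduces to the matrix exponential and can be evaluated stably with `scipy.linalg.expm`. The sign convention $E_{(\alpha_1,\alpha_2)}(-X/(t(1-t)))$ of (1.15), the diagonal choice of $Q$, $R$, and all numerical settings (nodes, truncation, test matrices) are our own assumptions.

```python
import math
import numpy as np
from scipy.linalg import expm

def mpow(t, A):
    """Scalar-to-matrix power t^A = exp(A log t), for scalar t > 0."""
    return expm(A * np.log(t))

# Gauss-Legendre nodes/weights mapped from (-1, 1) to (0, 1).
x, w = np.polynomial.legendre.leggauss(80)
T, W = 0.5 * (x + 1.0), 0.5 * w

I2 = np.eye(2)
P = np.array([[0.8, 0.1], [0.0, 1.2]])
Q = np.diag([2.0, 2.5])                      # Q, R diagonal, hence commuting
R = np.diag([5.0, 5.5])
X = np.array([[1.0, 0.2], [0.0, 1.5]])       # positive stable
z = 0.3

# Integrand of (1.15) with alpha1 = alpha2 = 1 at the quadrature nodes:
# t^{Q-I} (1-t)^{R-Q-I} exp(-X / (t(1-t))).
kernel = [mpow(t, Q - I2) @ mpow(1.0 - t, R - Q - I2)
          @ expm(-X / (t * (1.0 - t))) for t in T]

# [B(Q, R-Q)]^{-1}, with B(Q, R-Q) evaluated by the same quadrature.
beta_inv = np.linalg.inv(sum(wi * mpow(t, Q - I2) @ mpow(1.0 - t, R - Q - I2)
                             for t, wi in zip(T, W)))

# Series side, Eq. (2.1), truncated at 40 terms.
series, Pk = np.zeros((2, 2)), I2.copy()
for k in range(40):
    ext_beta_k = sum(wi * (t ** k) * ker for t, wi, ker in zip(T, W, kernel))
    series += Pk @ ext_beta_k @ beta_inv * (z ** k / math.factorial(k))
    Pk = Pk @ (P + k * I2)

# Integral side, Eq. (2.7): one integral with the binomial factor (1-zt)^{-P}.
integral = sum(wi * mpow(1.0 - z * t, -P) @ ker
               for t, wi, ker in zip(T, W, kernel)) @ beta_inv

assert np.allclose(series, integral, atol=1e-8)
```

Because both sides are evaluated on the same quadrature nodes, the agreement is limited only by the series truncation and floating-point rounding.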

By applying particular substitutions to the integration variable $t$, we obtain different forms of the integral representation (2.7).

Corollary 2.4

The new generalization of the Gauss hypergeometric matrix function has the following integral form:

(2.13) $F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z) = 2\int_{0}^{\pi/2} (1 - z\sin^2\theta)^{-P}\, \sin^{2Q-I}\theta\,\cos^{2(R-Q)-I}\theta\, E_{(\alpha_1,\alpha_2)}\!\left(\frac{-X}{\sin^2\theta\cos^2\theta}\right)\mathrm{d}\theta\;[B(Q,R-Q)]^{-1}.$

Proof

If we apply the substitution $t = \sin^2\theta$ in (2.7), we get the desired result.□

Corollary 2.5

The new generalization of the Gauss hypergeometric matrix function has the following integral form:

(2.14) $F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z) = 2\int_{0}^{\pi/2} (1 - z\cos^2\theta)^{-P}\, \cos^{2Q-I}\theta\,\sin^{2(R-Q)-I}\theta\, E_{(\alpha_1,\alpha_2)}\!\left(\frac{-X}{\cos^2\theta\sin^2\theta}\right)\mathrm{d}\theta\;[B(Q,R-Q)]^{-1}.$

Proof

If we apply the substitution $t = \cos^2\theta$ in (2.7), we get the desired result.□

Corollary 2.6

The new generalization of the Gauss hypergeometric matrix function has the following integral form:

(2.15) $F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z) = 2\int_{0}^{\infty} (\cosh^2\theta - z\sinh^2\theta)^{-P}\, \sinh^{2Q-I}\theta\,\cosh^{2P-2R+I}\theta\, E_{(\alpha_1,\alpha_2)}\!\left(-X\cosh^2\theta\coth^2\theta\right)\mathrm{d}\theta\;[B(Q,R-Q)]^{-1}.$

Proof

If we apply the substitution $t = \tanh^2\theta$ in (2.7), we get the desired result.□

Corollary 2.7

The new generalization of the Gauss hypergeometric matrix function has the following integral form:

(2.16) $F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z) = \int_{0}^{\infty} (1 + u(1-z))^{-P}\, u^{Q-I}\,(1+u)^{P-R}\, E_{(\alpha_1,\alpha_2)}\!\left(-2X - X\!\left(u + \frac{1}{u}\right)\right)\mathrm{d}u\;[B(Q,R-Q)]^{-1}.$

Proof

If we apply the substitution $t = \frac{u}{1+u}$ in (2.7), we get the desired result.□

Corollary 2.8

The new generalization of the Gauss hypergeometric matrix function has the following integral form:

(2.17) $F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z) = \int_{0}^{1} (1 - z(1-u))^{-P}\, u^{R-Q-I}\,(1-u)^{Q-I}\, E_{(\alpha_1,\alpha_2)}\!\left(\frac{-X}{u(1-u)}\right)\mathrm{d}u\;[B(Q,R-Q)]^{-1}.$

Proof

If we apply the substitution $t = 1-u$ in (2.7), we get the desired result.□

Theorem 2.9

For positive stable matrices $Q$, $R$, and $X$ in $\mathbb{C}^{r\times r}$, the new generalization of the confluent hypergeometric matrix function has the following integral representation:

(2.18) $\Phi_{(\alpha_1,\alpha_2)}^{(X)}(Q,R;z) = \int_{0}^{1} e^{zt}\, t^{Q-I}\,(1-t)^{R-Q-I}\, E_{(\alpha_1,\alpha_2)}\!\left(\frac{-X}{t(1-t)}\right)\mathrm{d}t\;[B(Q,R-Q)]^{-1},$

where $\Re(\alpha_1), \Re(\alpha_2) > 0$ and $E_{(\alpha_1,\alpha_2)}(B)$ is the two-parameter Mittag–Leffler matrix function defined in ref. [21].

The proof of Theorem 2.9 is similar to that of Theorem 2.3 and is hence omitted.

Corollary 2.10

The new generalization of the confluent hypergeometric matrix function has the following integral form:

(2.19) $\Phi_{(\alpha_1,\alpha_2)}^{(X)}(Q,R;z) = 2\int_{0}^{\pi/2} e^{z\sin^2\theta}\, \sin^{2Q-I}\theta\,\cos^{2(R-Q)-I}\theta\, E_{(\alpha_1,\alpha_2)}\!\left(\frac{-X}{\sin^2\theta\cos^2\theta}\right)\mathrm{d}\theta\;[B(Q,R-Q)]^{-1}.$

Corollary 2.11

The new generalization of the confluent hypergeometric matrix function has the following integral form:

(2.20) $\Phi_{(\alpha_1,\alpha_2)}^{(X)}(Q,R;z) = 2\int_{0}^{\pi/2} e^{z\cos^2\theta}\, \cos^{2Q-I}\theta\,\sin^{2(R-Q)-I}\theta\, E_{(\alpha_1,\alpha_2)}\!\left(\frac{-X}{\cos^2\theta\sin^2\theta}\right)\mathrm{d}\theta\;[B(Q,R-Q)]^{-1}.$

Corollary 2.12

The new generalization of the confluent hypergeometric matrix function has the following integral form:

(2.21) $\Phi_{(\alpha_1,\alpha_2)}^{(X)}(Q,R;z) = 2\int_{0}^{\infty} e^{z\tanh^2\theta}\, \sinh^{2Q-I}\theta\,\cosh^{-2R+I}\theta\, E_{(\alpha_1,\alpha_2)}\!\left(-X\cosh^2\theta\coth^2\theta\right)\mathrm{d}\theta\;[B(Q,R-Q)]^{-1}.$

Corollary 2.13

The new generalization of the confluent hypergeometric matrix function has the following integral form:

(2.22) $\Phi_{(\alpha_1,\alpha_2)}^{(X)}(Q,R;z) = \int_{0}^{\infty} e^{\frac{uz}{1+u}}\, u^{Q-I}\,(1+u)^{-R}\, E_{(\alpha_1,\alpha_2)}\!\left(-2X - X\!\left(u + \frac{1}{u}\right)\right)\mathrm{d}u\;[B(Q,R-Q)]^{-1}.$

Corollary 2.14

The new generalization of the confluent hypergeometric matrix function has the following integral form:

(2.23) $\Phi_{(\alpha_1,\alpha_2)}^{(X)}(Q,R;z) = \int_{0}^{1} e^{z(1-u)}\, (1-u)^{Q-I}\,u^{R-Q-I}\, E_{(\alpha_1,\alpha_2)}\!\left(\frac{-X}{u(1-u)}\right)\mathrm{d}u\;[B(Q,R-Q)]^{-1}.$

The proofs of Corollaries 2.10–2.14 are similar to those of Corollaries 2.4–2.8, respectively, and are hence omitted.

Theorem 2.15

Let $X$, $P$, $Q$, $R$, and $R-Q$ be matrices in $\mathbb{C}^{r\times r}$ such that $R+kI$ is invertible for every integer $k \geq 0$. Then the new generalization of the Gauss hypergeometric matrix function satisfies the following differentiation formula:

(2.24) $\frac{\mathrm{d}^m}{\mathrm{d}z^m}\left\{F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z)\right\} = (P)_m\,(Q)_m\,[(R)_m]^{-1}\, F_{(\alpha_1,\alpha_2)}^{(X)}(P+mI, Q+mI, R+mI; z),$

where $\Re(\alpha_1), \Re(\alpha_2) > 0$ and $|z| < 1$.

Proof

Let $\frac{\mathrm{d}^m}{\mathrm{d}z^m}$ denote the $m$th derivative with respect to the variable $z$. Differentiating (2.1) term by term, we obtain

$\frac{\mathrm{d}^m}{\mathrm{d}z^m}\left\{F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z)\right\} = \frac{\mathrm{d}^m}{\mathrm{d}z^m}\sum_{n=0}^{\infty} (P)_n\, B_{(\alpha_1,\alpha_2)}^{(X)}(Q+nI,R-Q)\,[B(Q,R-Q)]^{-1}\,\frac{z^n}{n!},$

that is,

$\frac{\mathrm{d}^m}{\mathrm{d}z^m}\left\{F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z)\right\} = \sum_{n=m}^{\infty} (P)_n\, B_{(\alpha_1,\alpha_2)}^{(X)}(Q+nI, R-Q)\,[B(Q,R-Q)]^{-1}\,\frac{z^{n-m}}{n!}\,\frac{n!}{(n-m)!}.$

Replacing $n-m$ by $n$, we can write:

$\frac{\mathrm{d}^m}{\mathrm{d}z^m}\left\{F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z)\right\} = \sum_{n=0}^{\infty} (P)_{n+m}\, B_{(\alpha_1,\alpha_2)}^{(X)}(Q+nI+mI, R-Q)\,[B(Q,R-Q)]^{-1}\,\frac{z^{n}}{n!}.$

Using the Pochhammer symbol property $(P)_{m+k} = (P)_m\,(P+mI)_k$, and after some manipulation, we obtain:

$\frac{\mathrm{d}^m}{\mathrm{d}z^m}\left\{F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z)\right\} = (P)_m\,(Q)_m\,[(R)_m]^{-1}\sum_{n=0}^{\infty} (P+mI)_n\, B_{(\alpha_1,\alpha_2)}^{(X)}(Q+mI+nI, R-Q)\,[B(Q+mI,R-Q)]^{-1}\,\frac{z^n}{n!}.$

By exploiting (2.1), we get our statement:

$\frac{\mathrm{d}^m}{\mathrm{d}z^m}\left\{F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z)\right\} = (P)_m\,(Q)_m\,[(R)_m]^{-1}\, F_{(\alpha_1,\alpha_2)}^{(X)}(P+mI, Q+mI, R+mI; z).$□

Corollary 2.16

If we set $m = 1$ in (2.24), we get the following derivative formula:

(2.25) $\frac{\mathrm{d}}{\mathrm{d}z}\left\{F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z)\right\} = (P)_1\,(Q)_1\,[(R)_1]^{-1}\, F_{(\alpha_1,\alpha_2)}^{(X)}(P+I, Q+I, R+I; z).$
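
A quick way to gain confidence in (2.25) is to check its classical scalar reduction ($r=1$, $X=0$, $\alpha_1=\alpha_2=1$; cf. (2.5)), where $F_{(1,1)}^{(0)}$ collapses to the ordinary ${}_2F_1$ and the formula becomes $\frac{\mathrm{d}}{\mathrm{d}z}\,{}_2F_1(p,q;r;z) = \frac{pq}{r}\,{}_2F_1(p+1,q+1;r+1;z)$. The sketch below (ours, with arbitrary test parameters) verifies this with a central finite difference.

```python
import numpy as np
from scipy.special import hyp2f1

p, q, r, z, h = 0.8, 2.0, 5.0, 0.3, 1e-6

# Central finite difference of 2F1 in z versus the scalar reduction of (2.25).
numeric = (hyp2f1(p, q, r, z + h) - hyp2f1(p, q, r, z - h)) / (2.0 * h)
closed_form = p * q / r * hyp2f1(p + 1.0, q + 1.0, r + 1.0, z)
assert np.isclose(numeric, closed_form, rtol=1e-6)
```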

Theorem 2.17

Let $X$, $Q$, $R$, and $R-Q$ be matrices in $\mathbb{C}^{r\times r}$ such that $R+kI$ is invertible for every integer $k \geq 0$. Then the new generalization of the confluent hypergeometric matrix function satisfies the following differentiation formula:

(2.26) $\frac{\mathrm{d}^m}{\mathrm{d}z^m}\left\{\Phi_{(\alpha_1,\alpha_2)}^{(X)}(Q,R;z)\right\} = (Q)_m\,[(R)_m]^{-1}\, \Phi_{(\alpha_1,\alpha_2)}^{(X)}(Q+mI, R+mI; z),$

where $\Re(\alpha_1), \Re(\alpha_2) > 0$.

Corollary 2.18

If we set $m = 1$ in (2.26), we get the following derivative formula:

(2.27) $\frac{\mathrm{d}}{\mathrm{d}z}\left\{\Phi_{(\alpha_1,\alpha_2)}^{(X)}(Q,R;z)\right\} = (Q)_1\,[(R)_1]^{-1}\, \Phi_{(\alpha_1,\alpha_2)}^{(X)}(Q+I, R+I; z).$

Theorem 2.19

The Pfaff transformation for $F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z)$ is given by

(2.28) $F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z) = (1-z)^{-P}\, F_{(\alpha_1,\alpha_2)}^{(X)}\!\left(P, R-Q, R; \frac{z}{z-1}\right),$

where $\Re(\alpha_1), \Re(\alpha_2) > 0$, $\left|\frac{z}{z-1}\right| < 1$, and $X$, $P$, $Q$, $R$, $R-Q$ are matrices in $\mathbb{C}^{r\times r}$ such that $R+kI$ is invertible for every integer $k \geq 0$.

Proof

From the integral representation (2.7), we deduce:

(2.29) $F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z) = \int_{0}^{1} (1-zv)^{-P}\, v^{Q-I}\,(1-v)^{R-Q-I}\, E_{(\alpha_1,\alpha_2)}\!\left(\frac{-X}{v(1-v)}\right)\mathrm{d}v\;[B(Q,R-Q)]^{-1}.$

Let $v = 1-t$. After some algebraic manipulation, using the identity

$[1 - z(1-t)]^{-P} = (1-z)^{-P}\left(1 + \frac{z}{1-z}\,t\right)^{-P} = (1-z)^{-P}\left(1 - \frac{z}{z-1}\,t\right)^{-P},$

Thus, we get our desired result (2.28):

$F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z) = (1-z)^{-P}\, F_{(\alpha_1,\alpha_2)}^{(X)}\!\left(P, R-Q, R; \frac{z}{z-1}\right).$□
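
The Pfaff transformation (2.28) reduces, for $r=1$, $X=0$, $\alpha_1=\alpha_2=1$ (cf. (2.5)), to the classical identity ${}_2F_1(p,q;r;z) = (1-z)^{-p}\,{}_2F_1(p,r-q;r;z/(z-1))$, which the short sketch below (ours, with arbitrary test parameters) verifies numerically.

```python
import numpy as np
from scipy.special import hyp2f1

p, q, r, z = 0.8, 2.0, 5.0, 0.4

lhs = hyp2f1(p, q, r, z)
rhs = (1.0 - z) ** (-p) * hyp2f1(p, r - q, r, z / (z - 1.0))
assert np.isclose(lhs, rhs, rtol=1e-10)
```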

Corollary 2.20

For $F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z)$, the following transformation holds true:

(2.30) $F_{(\alpha_1,\alpha_2)}^{(X)}\!\left(P, Q, R; 1 - \frac{1}{z}\right) = z^{P}\, F_{(\alpha_1,\alpha_2)}^{(X)}(P, R-Q, R; 1-z),$

where $|1-z| < 1$.

Proof

If we replace $z$ by $1 - \frac{1}{z}$ in (2.28), we get the desired result.□

Corollary 2.21

For $F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z)$, the following transformation holds true:

(2.31) $F_{(\alpha_1,\alpha_2)}^{(X)}\!\left(P, Q, R; \frac{z}{1+z}\right) = (1+z)^{P}\, F_{(\alpha_1,\alpha_2)}^{(X)}(P, R-Q, R; -z),$

where $|z| < 1$.

Proof

If we replace $z$ by $\frac{z}{1+z}$ in (2.28), we get the desired result.□

Theorem 2.22

The Kummer transformation for $\Phi_{(\alpha_1,\alpha_2)}^{(X)}(Q,R;z)$ is given by:

(2.32) $\Phi_{(\alpha_1,\alpha_2)}^{(X)}(Q,R;z) = e^{z}\, \Phi_{(\alpha_1,\alpha_2)}^{(X)}(R-Q, R; -z),$

where $\Re(\alpha_1), \Re(\alpha_2) > 0$ and $X$, $Q$, $R$, $R-Q$ are matrices in $\mathbb{C}^{r\times r}$ such that $R+kI$ is invertible for every integer $k \geq 0$.

Proof

From the integral representation (2.18), we deduce:

(2.33) $\Phi_{(\alpha_1,\alpha_2)}^{(X)}(Q,R;z) = \int_{0}^{1} e^{zt}\, t^{Q-I}\,(1-t)^{R-Q-I}\, E_{(\alpha_1,\alpha_2)}\!\left(\frac{-X}{t(1-t)}\right)\mathrm{d}t\;[B(Q,R-Q)]^{-1}.$

On substituting $t = 1-u$ and rearranging the terms, we get the desired result:

(2.34) $\Phi_{(\alpha_1,\alpha_2)}^{(X)}(Q,R;z) = e^{z}\,\Phi_{(\alpha_1,\alpha_2)}^{(X)}(R-Q, R; -z).$□
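
As a numerical illustration of Theorem 2.22 (our sketch, not part of the paper), the code below evaluates both sides of (2.32) through the integral representation (2.18) in the special case $\alpha_1=\alpha_2=1$ (exponential kernel), with diagonal $Q$, $R$ so that the commutativity used in the proof holds; the test matrices, node counts, and tolerances are illustrative choices of ours.

```python
import numpy as np
from scipy.linalg import expm

def mpow(t, A):
    """Scalar-to-matrix power t^A = exp(A log t), for scalar t > 0."""
    return expm(A * np.log(t))

x, w = np.polynomial.legendre.leggauss(80)
T, W = 0.5 * (x + 1.0), 0.5 * w              # nodes/weights on (0, 1)

I2 = np.eye(2)
Q = np.diag([2.0, 2.5])                      # diagonal, so Q(R-Q) = (R-Q)Q
R = np.diag([5.0, 5.5])
X = np.array([[1.0, 0.2], [0.0, 1.5]])       # positive stable
z = 0.7

def phi(Qm, Rm, zz):
    """Right-hand side of (2.18) with E_{(1,1)} = expm, by quadrature."""
    num = sum(wi * np.exp(zz * t)
              * mpow(t, Qm - I2) @ mpow(1.0 - t, Rm - Qm - I2)
              @ expm(-X / (t * (1.0 - t)))
              for t, wi in zip(T, W))
    beta = sum(wi * mpow(t, Qm - I2) @ mpow(1.0 - t, Rm - Qm - I2)
               for t, wi in zip(T, W))
    return num @ np.linalg.inv(beta)

lhs = phi(Q, R, z)                           # Phi(Q, R; z)
rhs = np.exp(z) * phi(R - Q, R, -z)          # e^z Phi(R-Q, R; -z)
assert np.allclose(lhs, rhs, atol=1e-8)
```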

3 Integral transforms

In this section, we introduce the beta matrix transform and the Laplace transform of $F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;z)$.

Theorem 3.1

The following beta matrix transform formula holds true:

(3.1) $B\left\{F_{(\alpha_1,\alpha_2)}^{(X)}(U+V, R, S; xz) : U, V\right\} = B(U,V)\sum_{n=0}^{\infty} (U)_n\, B_{(\alpha_1,\alpha_2)}^{(X)}(R+nI, S-R)\,[B(R,S-R)]^{-1}\,\frac{x^n}{n!} = B(U,V)\, F_{(\alpha_1,\alpha_2)}^{(X)}(U, R, S; x),$

where $X$, $U$, $V$, $R$, $S$, and $U+V$ are commuting positive stable matrices in $\mathbb{C}^{r\times r}$, $|x| < 1$, and the beta matrix transform of $f(z)$, namely $B\{f(z): U, V\} = \int_0^1 z^{U-I}(1-z)^{V-I} f(z)\,\mathrm{d}z$, is defined in ref. [22].

Proof

Applying the beta matrix transform to $F_{(\alpha_1,\alpha_2)}^{(X)}(U+V, R, S; xz)$ and using (2.1), we have

(3.2) $\int_{0}^{1} z^{U-I}(1-z)^{V-I}\, F_{(\alpha_1,\alpha_2)}^{(X)}(U+V, R, S; xz)\,\mathrm{d}z = \int_{0}^{1} z^{U-I}(1-z)^{V-I} \sum_{n=0}^{\infty} (U+V)_n\, B_{(\alpha_1,\alpha_2)}^{(X)}(R+nI, S-R)\,[B(R,S-R)]^{-1}\,\frac{(xz)^n}{n!}\,\mathrm{d}z.$

On changing the order of integration and summation,

(3.3) $\int_{0}^{1} z^{U-I}(1-z)^{V-I}\, F_{(\alpha_1,\alpha_2)}^{(X)}(U+V, R, S; xz)\,\mathrm{d}z = \sum_{n=0}^{\infty} (U+V)_n\, B_{(\alpha_1,\alpha_2)}^{(X)}(R+nI, S-R)\,[B(R,S-R)]^{-1}\,\frac{x^n}{n!}\int_{0}^{1} z^{U+nI-I}(1-z)^{V-I}\,\mathrm{d}z.$

Using (1.5), (2.1), the identity $(U+V)_n\, B(U+nI, V) = (U)_n\, B(U,V)$ for commuting matrices, and some rearrangement, we get the desired result:

(3.4) $B\left\{F_{(\alpha_1,\alpha_2)}^{(X)}(U+V, R, S; xz) : U, V\right\} = B(U,V)\sum_{n=0}^{\infty} (U)_n\, B_{(\alpha_1,\alpha_2)}^{(X)}(R+nI, S-R)\,[B(R,S-R)]^{-1}\,\frac{x^n}{n!} = B(U,V)\, F_{(\alpha_1,\alpha_2)}^{(X)}(U, R, S; x).$□
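
The key rearrangement between (3.3) and (3.4) is the identity $(U+V)_n\,B(U+nI,V) = (U)_n\,B(U,V)$ for commuting matrices; a scalar ($r=1$) sanity check of it (ours, with arbitrary test values) using `scipy.special.poch` and `scipy.special.beta` is given below.

```python
import numpy as np
from scipy.special import beta, poch

u, v, n = 1.7, 2.4, 5
lhs = poch(u + v, n) * beta(u + n, v)    # (U+V)_n B(U+nI, V) in the scalar case
rhs = poch(u, n) * beta(u, v)            # (U)_n   B(U, V)
assert np.isclose(lhs, rhs, rtol=1e-12)
```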

Theorem 3.2

The following Laplace transform formula holds true:

(3.5) $L\left\{z^{M-I}\, F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;xz)\right\} = \sum_{n=0}^{\infty} (P)_n\, B_{(\alpha_1,\alpha_2)}^{(X)}(Q+nI, R-Q)\,[B(Q,R-Q)]^{-1}\,\Gamma(M+nI)\, s^{-(M+nI)}\,\frac{x^n}{n!},$

where $\Re(s) > 0$, $M \in \mathbb{C}^{r\times r}$, $P$, $Q$, $R$, $R-Q$ are positive stable matrices in $\mathbb{C}^{r\times r}$, and the Laplace transform is defined in ref. [23].

Proof

Applying the Laplace transform to $z^{M-I}\, F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;xz)$ and using (2.1), we have

(3.6) $L\left\{z^{M-I}\, F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;xz)\right\} = \int_{0}^{\infty} z^{M-I}\, e^{-sz} \sum_{n=0}^{\infty} (P)_n\, B_{(\alpha_1,\alpha_2)}^{(X)}(Q+nI, R-Q)\,[B(Q,R-Q)]^{-1}\,\frac{(xz)^n}{n!}\,\mathrm{d}z.$

On changing the order of integration and summation,

(3.7) $L\left\{z^{M-I}\, F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;xz)\right\} = \sum_{n=0}^{\infty} (P)_n\, B_{(\alpha_1,\alpha_2)}^{(X)}(Q+nI, R-Q)\,[B(Q,R-Q)]^{-1}\,\frac{x^n}{n!}\int_{0}^{\infty} z^{M+nI-I}\, e^{-sz}\,\mathrm{d}z.$

Using the definition of the Laplace transform [23], evaluating the inner integral as $\int_{0}^{\infty} z^{M+nI-I}\, e^{-sz}\,\mathrm{d}z = \Gamma(M+nI)\, s^{-(M+nI)}$, and rearranging, we get the desired result:

(3.8) $L\left\{z^{M-I}\, F_{(\alpha_1,\alpha_2)}^{(X)}(P,Q,R;xz)\right\} = \sum_{n=0}^{\infty} (P)_n\, B_{(\alpha_1,\alpha_2)}^{(X)}(Q+nI, R-Q)\,[B(Q,R-Q)]^{-1}\,\Gamma(M+nI)\, s^{-(M+nI)}\,\frac{x^n}{n!}.$□
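
The inner integral used in the last step is the matrix analogue of the classical gamma integral $\int_0^{\infty} z^{\mu-1}e^{-sz}\,\mathrm{d}z = \Gamma(\mu)\,s^{-\mu}$ for $\Re(s) > 0$; a scalar sanity check (ours, with arbitrary test values) with `scipy.integrate.quad` is given below.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

mu, s = 2.3, 1.7
numeric, _ = quad(lambda z: z ** (mu - 1.0) * np.exp(-s * z), 0.0, np.inf)
assert np.isclose(numeric, gamma(mu) * s ** (-mu), rtol=1e-8)
```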

4 An extended Jacobi matrix orthogonal polynomial

In this section, we introduce the extended Jacobi matrix orthogonal polynomial by using the extended Gauss hypergeometric matrix function (2.1). The Jacobi matrix orthogonal polynomial plays a very important role in scattering and approximation theory [24].

Definition

For any positive integer $n$, the $n$th extended Jacobi matrix orthogonal polynomial is given as follows:

(4.1) $J_{\alpha_1,\alpha_2,n}^{(P,Q;X)}(z) = \frac{(P+I)_n}{n!}\, F_{(\alpha_1,\alpha_2)}^{(X)}\!\left(-nI,\, P+Q+(n+1)I;\, P+I;\, \frac{1-z}{2}\right),$

where $P$, $Q$, and $X$ are positive stable matrices in $\mathbb{C}^{r\times r}$ whose eigenvalues $z$ all satisfy $\Re(z) > -1$, and $F_{(\alpha_1,\alpha_2)}^{(X)}$ is given in (2.1).

Remark

If we set $\alpha_1 = \alpha_2 = 1$ in (4.1), we obtain the known result [25, p. 5, Eq. (32)].
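
For a concrete reduction, with $\alpha_1=\alpha_2=1$, $X=O$, and $1\times 1$ matrices $P=(a)$, $Q=(b)$, Definition (4.1) collapses via (2.5) to the classical Jacobi polynomial $P_n^{(a,b)}(z) = \frac{(a+1)_n}{n!}\,{}_2F_1\!\left(-n, a+b+n+1; a+1; \frac{1-z}{2}\right)$; the sketch below (ours, with arbitrary test values) confirms this against `scipy.special.eval_jacobi`.

```python
import numpy as np
from math import factorial
from scipy.special import eval_jacobi, hyp2f1, poch

a, b, n, z = 0.5, 1.5, 4, 0.3

reduced = poch(a + 1.0, n) / factorial(n) * hyp2f1(-n, a + b + n + 1.0,
                                                   a + 1.0, (1.0 - z) / 2.0)
assert np.isclose(reduced, eval_jacobi(n, a, b, z), rtol=1e-10)
```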

5 Conclusion

We conclude our analysis by remarking that the results presented in this article are new and hold considerable potential for the extension of other special matrix functions. First, we generalized the Gauss hypergeometric and confluent hypergeometric matrix functions; then we studied several basic properties, such as integral representations, differentiation formulas, and transformations, of these extended hypergeometric functions. We also derived two integral transforms of these extended matrix functions, the beta matrix transform and the Laplace transform. Finally, we defined the extended Jacobi matrix orthogonal polynomial, which appears in scattering theory and inverse scattering theory.

The results presented in this article find an interesting application in the evaluation of certain infinite integrals whose specialized forms arise frequently in a number of applied problems. We will also use our main results to obtain further extensions of the Appell and Lauricella matrix functions, as well as their applications in scattering theory and inverse scattering theory.

Acknowledgments

The authors would like to thank the anonymous referees for their valuable comments and suggestions. Praveen Agarwal is thankful to the NBHM (DAE) (project 02011/12/2020 NBHM (R.P)/RD II/7867) for providing necessary support and facility. Also, the article and its translation were prepared within the framework of the agreement between the Ministry of Science and Higher Education of the Russian Federation and the Peoples' Friendship University of Russia No. 075-15-2021-603: Development of the new methodology and intellectual base for the new-generation research of Indian philosophy in correlation with the main World Philosophical Traditions.

  1. Funding information: This article has been supported by the RUDN University Strategic Academic Leadership Program.

  2. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Conflict of interest: The authors state no conflict of interest.

References

[1] Case KM. Orthogonal polynomials from the viewpoint of scattering theory. J Math Phys. 1974;15(12):2166–74. doi: 10.1063/1.1666597.

[2] Geronimo JS. Scattering theory and matrix orthogonal polynomials on the real line. Circuits Syst Signal Process. 1982;1:471–95. doi: 10.1007/978-1-4899-6790-9_15.

[3] Case KM, Kac M. A discrete version of the inverse scattering problem. J Math Phys. 1973;14(5):594–603. doi: 10.1063/1.1666364.

[4] Ata E, Kıymaz O. A study on certain properties of generalized special functions defined by Fox–Wright function. Appl Math Nonlinear Sci. 2020;5(1):147–62. doi: 10.2478/amns.2020.1.00014.

[5] Şahin R, Yağcı O, Yağbasan MB, Kıymaz O, Çetinkaya A. Further generalizations of gamma, beta and related functions. J Inequalities Special Funct. 2018;9(4):1–7.

[6] Çetinkaya A, Kıymaz O, Agarwal P, Agarwal R. A comparative study on generating function relations for generalized hypergeometric functions via generalized fractional operators. Adv Differ Equ. 2018;1:1–11. doi: 10.1186/s13662-018-1612-0.

[7] Bas E, Acay B. The direct spectral problem via local derivative including truncated Mittag–Leffler function. Appl Math Comput. 2020;367:124787. doi: 10.1016/j.amc.2019.124787.

[8] Yilmazer R, Bas E. Fractional solutions of a confluent hypergeometric equation. J Chungcheong Math Soc. 2012;25(2):149. doi: 10.14403/jcms.2012.25.2.149.

[9] Hammouch Z, Mekkaoui T. Control of a new chaotic fractional-order system using Mittag–Leffler stability. Nonlinear Stud. 2015;22(4):565–77.

[10] Veeresha P, Prakasha DG, Hammouch Z. An efficient approach for the model of thrombin receptor activation mechanism with Mittag–Leffler function. In: The International Congress of the Moroccan Society of Applied Mathematics. Cham: Springer; 2019. p. 44–60. doi: 10.1007/978-3-030-62299-2_4.

[11] Uçar S, Özdemir N, Hammouch Z. A fractional mixing propagation model of computer viruses and countermeasures involving Mittag–Leffler type kernel. In: International Conference on Computational Mathematics and Engineering Sciences. Cham: Springer; 2019. p. 186–99. doi: 10.1007/978-3-030-39112-6_13.

[12] Jódar L, Cortés JC. Some properties of gamma and beta matrix functions. Appl Math Lett. 1998;11(1):89–93. doi: 10.1016/S0893-9659(97)00139-0.

[13] Jódar L, Cortés JC. On the hypergeometric matrix function. J Comput Appl Math. 1998;99:205–17. doi: 10.1016/S0377-0427(98)00158-7.

[14] Metwally MS. On p-Kummer's matrix function of complex variable under differential operators and their properties. Southeast Asian Bull Math. 2011;35(2):261–76.

[15] Wehowar G, Hausenblas E. The second Kummer function with matrix parameters and its asymptotic behaviour. Abstr Appl Anal. 2018;2018:7534651. doi: 10.1155/2018/7534651.

[16] Çekim B. Generalized Euler's beta matrix and related functions. AIP Conf Proc. 2013;1558(1):1132–5. doi: 10.1063/1.4825707.

[17] Abdalla M, Bakhet A. Extended Gauss hypergeometric matrix functions. Iran J Sci Technol Trans A Sci. 2018;42(3):1465–70. doi: 10.1007/s40995-017-0183-3.

[18] Verma A, Dwivedi R. On the matrix version of new extended Gauss, Appell and Lauricella hypergeometric functions. arXiv preprint arXiv:2108.11310; 2021.

[19] Goyal R, Agarwal P, Oros GI, Jain S. Extended beta and gamma matrix functions via two-parameter Mittag–Leffler matrix function. Mathematics. 2022;10:892. doi: 10.3390/math10060892.

[20] Abdalla M, Bakhet A. Extension of beta matrix function. Asian J Math Comput Res. 2016;9:253–64.

[21] Garrappa R, Popolizio M. Computing the matrix Mittag–Leffler function with applications to fractional calculus. J Sci Comput. 2018;77(1):129–53. doi: 10.1007/s10915-018-0699-5.

[22] Bakhet A. On some topics of special functions of complex matrices. M.Sc. thesis. Assiut, Egypt: Al-Azhar University; 2015.

[23] Debnath L, Bhatta D. Integral transforms and their applications. New York: Chapman and Hall/CRC; 2016. doi: 10.1201/9781420010916.

[24] Abdalla M. Special matrix functions: characteristics, achievements and future directions. Linear Multilinear Algebra. 2020;68(1):1–28. doi: 10.1080/03081087.2018.1497585.

[25] He F, Bakhet A, Abdalla M, Hidan M. On the extended hypergeometric matrix functions and their applications for the derivatives of the extended Jacobi matrix polynomial. Math Probl Eng. 2020;2020:4268361. doi: 10.1155/2020/4268361.

Received: 2022-03-07
Revised: 2022-06-08
Accepted: 2022-06-09
Published Online: 2022-08-04

© 2022 Shilpi Jain et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.