In this subsection we concentrate on a particularly important and fundamental special case of the class of differential equations discussed in Subsection 2.1, namely we shall look at the solutions to homogeneous linear equations with constant coefficients, i.e. to differential equations of the form
$$\begin{array}{}\begin{pmatrix}{D}_{\ast}^{{\alpha}_{1}}&&\\ &\ddots &\\ &&{D}_{\ast}^{{\alpha}_{d}}\end{pmatrix}x(t)=Ax(t),\end{array}$$(2.5)

which is the special case of (1.3) where *g*(*t*) = 0 for all *t*.

Our basic result in this section, Theorem 2.6, provides some information about the structure of the solutions to the system (2.5) in the case of an arbitrary matrix *A* ∈ ℂ^{d×d} and an arbitrary vector (*α*_{1}, …, *α*_{d}) ∈ (0, 1]^{d}.

In order to motivate our results, we start with the case *d* = 2. In this case, the system (2.5) has the form
$$\begin{array}{}{D}_{\ast}^{{\alpha}_{1}}{x}_{1}(t)={a}_{11}{x}_{1}(t)+{a}_{12}{x}_{2}(t),\end{array}$$(2.6a)
$$\begin{array}{}{D}_{\ast}^{{\alpha}_{2}}{x}_{2}(t)={a}_{21}{x}_{1}(t)+{a}_{22}{x}_{2}(t).\end{array}$$(2.6b)

First of all, Corollary 2.4 asserts that, for any initial condition (*x*_{1}(0), *x*_{2}(0))^{⊤} = $({x}_{1}^{0},{x}_{2}^{0}{)}^{\mathrm{\top}}$
∈ ℂ^{2}, this system has a unique continuous solution *x* = (*x*_{1}, *x*_{2})^{⊤} on [0, ∞). Moreover, for equations of this structure, the fractional version of the variation-of-constants method
[7, Theorem 7.2 and Remark 7.1] provides the relations
$$\begin{array}{}{\displaystyle {x}_{1}(t)={x}_{1}^{0}{E}_{{\alpha}_{1}}({a}_{11}{t}^{{\alpha}_{1}})+{a}_{12}{\int}_{0}^{t}(t-s{)}^{{\alpha}_{1}-1}{E}_{{\alpha}_{1},{\alpha}_{1}}\left({a}_{11}(t-s{)}^{{\alpha}_{1}}\right){x}_{2}(s)\phantom{\rule{thinmathspace}{0ex}}ds,}\end{array}$$(2.7a)
$$\begin{array}{}{\displaystyle {x}_{2}(t)={x}_{2}^{0}{E}_{{\alpha}_{2}}({a}_{22}{t}^{{\alpha}_{2}})+{a}_{21}{\int}_{0}^{t}(t-s{)}^{{\alpha}_{2}-1}{E}_{{\alpha}_{2},{\alpha}_{2}}\left({a}_{22}(t-s{)}^{{\alpha}_{2}}\right){x}_{1}(s)\phantom{\rule{thinmathspace}{0ex}}ds,}\end{array}$$(2.7b)

for all *t* ≥ 0. This representation indicates that we should seek the solution components in the class of generalized power series of the form
$$\begin{array}{}{\displaystyle {x}_{1}(t)={x}_{1}^{0}+\sum _{k=1}^{\mathrm{\infty}}\sum _{\ell =0}^{\mathrm{\infty}}{b}_{1k\ell}{t}^{k{\alpha}_{1}+\ell {\alpha}_{2}},}\end{array}$$(2.8a)
$$\begin{array}{}{\displaystyle {x}_{2}(t)={x}_{2}^{0}+\sum _{k=0}^{\mathrm{\infty}}\sum _{\ell =1}^{\mathrm{\infty}}{b}_{2k\ell}{t}^{k{\alpha}_{1}+\ell {\alpha}_{2}}.}\end{array}$$(2.8b)

Assuming a suitable convergence behavior of these series, we may differentiate termwise and obtain
$$\begin{array}{}{D}_{\ast}^{{\alpha}_{1}}{x}_{1}(t)\phantom{\rule{negativethinmathspace}{0ex}}\phantom{\rule{negativethinmathspace}{0ex}}\phantom{\rule{negativethinmathspace}{0ex}}\phantom{\rule{negativethinmathspace}{0ex}}& {\displaystyle =\sum _{k=1}^{\mathrm{\infty}}\sum _{\ell =0}^{\mathrm{\infty}}{b}_{1k\ell}\frac{\mathrm{\Gamma}(k{\alpha}_{1}+\ell {\alpha}_{2}+1)}{\mathrm{\Gamma}((k-1){\alpha}_{1}+\ell {\alpha}_{2}+1)}{t}^{(k-1){\alpha}_{1}+\ell {\alpha}_{2}}}\\ & {\displaystyle =\sum _{k=0}^{\mathrm{\infty}}\sum _{\ell =0}^{\mathrm{\infty}}{b}_{1,k+1,\ell}\frac{\mathrm{\Gamma}((k+1){\alpha}_{1}+\ell {\alpha}_{2}+1)}{\mathrm{\Gamma}(k{\alpha}_{1}+\ell {\alpha}_{2}+1)}{t}^{k{\alpha}_{1}+\ell {\alpha}_{2}},}\\ {D}_{\ast}^{{\alpha}_{2}}{x}_{2}(t)\phantom{\rule{negativethinmathspace}{0ex}}\phantom{\rule{negativethinmathspace}{0ex}}\phantom{\rule{negativethinmathspace}{0ex}}\phantom{\rule{negativethinmathspace}{0ex}}& {\displaystyle =\sum _{k=0}^{\mathrm{\infty}}\sum _{\ell =1}^{\mathrm{\infty}}{b}_{2k\ell}\frac{\mathrm{\Gamma}(k{\alpha}_{1}+\ell {\alpha}_{2}+1)}{\mathrm{\Gamma}(k{\alpha}_{1}+(\ell -1){\alpha}_{2}+1)}{t}^{k{\alpha}_{1}+(\ell -1){\alpha}_{2}}}\\ & {\displaystyle =\sum _{k=0}^{\mathrm{\infty}}\sum _{\ell =0}^{\mathrm{\infty}}{b}_{2,k,\ell +1}\frac{\mathrm{\Gamma}(k{\alpha}_{1}+(\ell +1){\alpha}_{2}+1)}{\mathrm{\Gamma}(k{\alpha}_{1}+\ell {\alpha}_{2}+1)}{t}^{k{\alpha}_{1}+\ell {\alpha}_{2}}.}\end{array}$$

Plugging these representations into the differential equation system (2.6), we find
$$\begin{array}{}{\displaystyle \phantom{\rule{1em}{0ex}}{a}_{11}{x}_{1}^{0}+{a}_{11}\sum _{k=1}^{\mathrm{\infty}}\sum _{\ell =0}^{\mathrm{\infty}}{b}_{1k\ell}{t}^{k{\alpha}_{1}+\ell {\alpha}_{2}}+{a}_{12}{x}_{2}^{0}+{a}_{12}\sum _{k=0}^{\mathrm{\infty}}\sum _{\ell =1}^{\mathrm{\infty}}{b}_{2k\ell}{t}^{k{\alpha}_{1}+\ell {\alpha}_{2}}}\\ {\displaystyle =\sum _{k=0}^{\mathrm{\infty}}\sum _{\ell =0}^{\mathrm{\infty}}{b}_{1,k+1,\ell}\frac{\mathrm{\Gamma}((k+1){\alpha}_{1}+\ell {\alpha}_{2}+1)}{\mathrm{\Gamma}(k{\alpha}_{1}+\ell {\alpha}_{2}+1)}{t}^{k{\alpha}_{1}+\ell {\alpha}_{2}},}\\ {\displaystyle \phantom{\rule{1em}{0ex}}{a}_{21}{x}_{1}^{0}+{a}_{21}\sum _{k=1}^{\mathrm{\infty}}\sum _{\ell =0}^{\mathrm{\infty}}{b}_{1k\ell}{t}^{k{\alpha}_{1}+\ell {\alpha}_{2}}+{a}_{22}{x}_{2}^{0}+{a}_{22}\sum _{k=0}^{\mathrm{\infty}}\sum _{\ell =1}^{\mathrm{\infty}}{b}_{2k\ell}{t}^{k{\alpha}_{1}+\ell {\alpha}_{2}}}\\ {\displaystyle =\sum _{k=0}^{\mathrm{\infty}}\sum _{\ell =0}^{\mathrm{\infty}}{b}_{2,k,\ell +1}\frac{\mathrm{\Gamma}(k{\alpha}_{1}+(\ell +1){\alpha}_{2}+1)}{\mathrm{\Gamma}(k{\alpha}_{1}+\ell {\alpha}_{2}+1)}{t}^{k{\alpha}_{1}+\ell {\alpha}_{2}}.}\end{array}$$

A comparison of the coefficients of *t*^{kα_{1}+ℓα_{2}} then yields the equations
$$\begin{array}{}{\displaystyle {b}_{110}=\frac{1}{\mathrm{\Gamma}({\alpha}_{1}+1)}({a}_{11}{x}_{1}^{0}+{a}_{12}{x}_{2}^{0}),}\end{array}$$(2.9a)
$$\begin{array}{}{\displaystyle {b}_{201}=\frac{1}{\mathrm{\Gamma}({\alpha}_{2}+1)}({a}_{21}{x}_{1}^{0}+{a}_{22}{x}_{2}^{0}),}\end{array}$$(2.9b)
$$\begin{array}{}{\displaystyle {b}_{1,k+1,0}=\frac{\mathrm{\Gamma}(k{\alpha}_{1}+1)}{\mathrm{\Gamma}((k+1){\alpha}_{1}+1)}{a}_{11}{b}_{1k0}\phantom{\rule{2em}{0ex}}(k=1,2,\dots ),}\end{array}$$(2.9c)
$$\begin{array}{}{\displaystyle {b}_{1,1,\ell}=\frac{\mathrm{\Gamma}(\ell {\alpha}_{2}+1)}{\mathrm{\Gamma}({\alpha}_{1}+\ell {\alpha}_{2}+1)}{a}_{12}{b}_{20\ell}\phantom{\rule{2em}{0ex}}(\ell =1,2,\dots ),}\end{array}$$(2.9d)
$$\begin{array}{}{\displaystyle {b}_{1,k+1,\ell}=\frac{\mathrm{\Gamma}(k{\alpha}_{1}+\ell {\alpha}_{2}+1)}{\mathrm{\Gamma}((k+1){\alpha}_{1}+\ell {\alpha}_{2}+1)}({a}_{11}{b}_{1k\ell}+{a}_{12}{b}_{2k\ell})\phantom{\rule{1em}{0ex}}(k,\ell =1,2,\dots ),}\end{array}$$(2.9e)
$$\begin{array}{}{\displaystyle {b}_{2,0,\ell +1}=\frac{\mathrm{\Gamma}(\ell {\alpha}_{2}+1)}{\mathrm{\Gamma}((\ell +1){\alpha}_{2}+1)}{a}_{22}{b}_{20\ell}\phantom{\rule{2em}{0ex}}(\ell =1,2,\dots ),}\end{array}$$(2.9f)
$$\begin{array}{}{\displaystyle {b}_{2,k,1}=\frac{\mathrm{\Gamma}(k{\alpha}_{1}+1)}{\mathrm{\Gamma}(k{\alpha}_{1}+{\alpha}_{2}+1)}{a}_{21}{b}_{1k0}\phantom{\rule{2em}{0ex}}(k=1,2,\dots ),}\end{array}$$(2.9g)
$$\begin{array}{}{\displaystyle {b}_{2,k,\ell +1}=\frac{\mathrm{\Gamma}(k{\alpha}_{1}+\ell {\alpha}_{2}+1)}{\mathrm{\Gamma}(k{\alpha}_{1}+(\ell +1){\alpha}_{2}+1)}({a}_{21}{b}_{1k\ell}+{a}_{22}{b}_{2k\ell})\phantom{\rule{1em}{0ex}}(k,\ell =1,2,\dots ).}\end{array}$$(2.9h)

Formally introducing the quantities
$$\begin{array}{llll}{b}_{10\ell}=0&\text{for }\ell =1,2,\dots &\text{and}&{b}_{2k0}=0\phantom{\rule{1em}{0ex}}\text{for }k=1,2,\dots ,\\ {b}_{100}={x}_{1}^{0}&&\text{and}&{b}_{200}={x}_{2}^{0},\end{array}$$(2.10a)

we see that the system (2.9) can be simplified to
$$\begin{array}{}{\displaystyle {b}_{1,k+1,\ell}=\frac{\mathrm{\Gamma}(k{\alpha}_{1}+\ell {\alpha}_{2}+1)}{\mathrm{\Gamma}((k+1){\alpha}_{1}+\ell {\alpha}_{2}+1)}({a}_{11}{b}_{1k\ell}+{a}_{12}{b}_{2k\ell})\phantom{\rule{2em}{0ex}}(k,\ell =0,1,2,\dots ),}\end{array}$$(2.10b)
$$\begin{array}{}{\displaystyle {b}_{2,k,\ell +1}=\frac{\mathrm{\Gamma}(k{\alpha}_{1}+\ell {\alpha}_{2}+1)}{\mathrm{\Gamma}(k{\alpha}_{1}+(\ell +1){\alpha}_{2}+1)}({a}_{21}{b}_{1k\ell}+{a}_{22}{b}_{2k\ell})\phantom{\rule{2em}{0ex}}(k,\ell =0,1,2,\dots ).}\end{array}$$(2.10c)

A brief inspection of these formulas reveals that, given the initial values ${x}_{1}^{0}$ and ${x}_{2}^{0}$,
they can indeed be used to compute all coefficients that appear in the representation (2.8) in a recursive manner. Specifically, the coefficients *b*_{1kℓ} and *b*_{2kℓ} for *k* + *ℓ* = *μ* can be computed via eqs. (2.10b) and (2.10c), respectively, and this computation only requires the knowledge of *b*_{1kℓ} and *b*_{2kℓ} with *k* + *ℓ* = *μ* − 1. Thus one can first compute all *b*_{1kℓ} and *b*_{2kℓ} with *k* + *ℓ* = 1, then with *k* + *ℓ* = 2, etc.
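This level-by-level computation translates directly into an algorithm. The following sketch (in Python; the names `coeffs2` and `x_truncated` are ours, and real scalars are used for simplicity although the theory allows complex data) computes all coefficients with *k* + *ℓ* ≤ *K* via the recursion (2.10) and evaluates the corresponding partial sums of (2.8):

```python
import math

def coeffs2(alpha1, alpha2, A, x0, K):
    """Coefficients b_{1,k,l} and b_{2,k,l} of (2.8) for k + l <= K,
    computed level by level via the recursion (2.10)."""
    (a11, a12), (a21, a22) = A
    b1 = {(0, 0): x0[0]}               # b_{100} = x_1^0
    b2 = {(0, 0): x0[1]}               # b_{200} = x_2^0
    for n in range(1, K + 1):          # formal zero coefficients from (2.10a)
        b1[(0, n)] = 0.0
        b2[(n, 0)] = 0.0
    for mu in range(K):                # all (k, l) with k + l = mu are known
        for k in range(mu + 1):
            l = mu - k
            w = k * alpha1 + l * alpha2
            # (2.10b): advance the first index of b1
            b1[(k + 1, l)] = (math.gamma(w + 1) / math.gamma(w + alpha1 + 1)
                              * (a11 * b1[(k, l)] + a12 * b2[(k, l)]))
            # (2.10c): advance the second index of b2
            b2[(k, l + 1)] = (math.gamma(w + 1) / math.gamma(w + alpha2 + 1)
                              * (a21 * b1[(k, l)] + a22 * b2[(k, l)]))
    return b1, b2

def x_truncated(alpha1, alpha2, A, x0, t, K=40):
    """Partial sums of the series (2.8) at time t > 0, truncated at k + l = K."""
    b1, b2 = coeffs2(alpha1, alpha2, A, x0, K)
    x1 = sum(c * t ** (k * alpha1 + l * alpha2) for (k, l), c in b1.items())
    x2 = sum(c * t ** (k * alpha1 + l * alpha2) for (k, l), c in b2.items())
    return x1, x2
```

For *α*_{1} = *α*_{2} = 1 the recursion reduces to the classical Taylor recursion for *x*′ = *Ax*, which provides a convenient sanity check of such an implementation.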

A closer look at the recurrence relations (2.10) allows us to prove that the series from (2.8) converge for all *t* ≥ 0. To this end we first state a preliminary result.

#### Lemma 2.5

*Let the values b*_{1kℓ} *and b*_{2kℓ} *be defined as in* (2.10) *with arbitrary* ${x}_{1}^{0},{x}_{2}^{0}$
∈ ℂ. *Then*, *for j* ∈ {1, 2} *the series*
$$\begin{array}{}{\displaystyle {s}_{j}(z):=\sum _{k=0}^{\mathrm{\infty}}\sum _{\ell =0}^{\mathrm{\infty}}|{b}_{jk\ell}|{z}^{k+\ell}}\end{array}$$

*is convergent for all z* ∈ ℂ.

Indeed, it is immediately clear that the desired convergence property is a consequence of this lemma, since the series
$$\begin{array}{}{\displaystyle \sum _{k=0}^{\mathrm{\infty}}\sum _{\ell =0}^{\mathrm{\infty}}|{b}_{jk\ell}|\cdot |t{|}^{k{\alpha}_{1}+\ell {\alpha}_{2}}}\end{array}$$

is, on the one hand, a majorant for *x*_{j}(*t*) and, on the other hand, convergent for all *t* > 0 in view of the estimate
$$\begin{array}{}{\displaystyle \sum _{k=0}^{\mathrm{\infty}}\sum _{\ell =0}^{\mathrm{\infty}}|{b}_{jk\ell}|\cdot |t{|}^{k{\alpha}_{1}+\ell {\alpha}_{2}}\le \left\{\begin{array}{ll}{\displaystyle \sum _{k=0}^{\mathrm{\infty}}\sum _{\ell =0}^{\mathrm{\infty}}|{b}_{jk\ell}|\cdot |t{|}^{(k+\ell )\max \{{\alpha}_{1},{\alpha}_{2}\}}={s}_{j}({t}^{\max \{{\alpha}_{1},{\alpha}_{2}\}})}&\text{for }t\ge 1,\\ {\displaystyle \sum _{k=0}^{\mathrm{\infty}}\sum _{\ell =0}^{\mathrm{\infty}}|{b}_{jk\ell}|\cdot |t{|}^{(k+\ell )\min \{{\alpha}_{1},{\alpha}_{2}\}}={s}_{j}({t}^{\min \{{\alpha}_{1},{\alpha}_{2}\}})}&\text{for }t<1.\end{array}\right.}\end{array}$$

#### Proof of Lemma 2.5

Since the series in question does not have any negative summands, we may rearrange the terms according to powers of *z*; this yields
$$\begin{array}{}{\displaystyle {s}_{j}(z)=\sum _{k=0}^{\mathrm{\infty}}\sum _{\mu =0}^{k}|{b}_{j,\mu ,k-\mu}|{z}^{k}.}\end{array}$$

It is therefore evident that, in order to investigate the convergence radius of this series, we need to estimate expressions of the form
$$\begin{array}{}{\displaystyle {\beta}_{jk}:=\sum _{\mu =0}^{k}|{b}_{j,\mu ,k-\mu}|.}\end{array}$$

In fact, we shall demonstrate that for sufficiently large *k*
$$\begin{array}{}{\displaystyle 0\le {\beta}_{1k}+{\beta}_{2k}\le \frac{{c}_{1}{c}_{2}^{k}}{\mathrm{\Gamma}(k{\alpha}^{\ast}+1)},}\end{array}$$(2.11)

where *c*_{1} and *c*_{2} are certain positive constants and
$$\begin{array}{}{\alpha}^{\ast}:=min\{{\alpha}_{1},{\alpha}_{2}\}.\end{array}$$

Estimate (2.11) tells us that the classical power series for the Mittag-Leffler function *E*_{α*}, which is well known to converge on the entire complex plane, evaluated at *c*_{2} |*z*| is (up to the constant factor *c*_{1}) a majorant for the series *s*_{1} and *s*_{2} that we are interested in; hence the series expansions for *s*_{1}(*z*) and *s*_{2}(*z*) also converge for all *z*, as required.
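For concreteness, the series defining the Mittag-Leffler function that serves as the majorant here is *E*_{α}(*z*) = Σ_{k≥0} *z*^{k}/Γ(*αk* + 1). A minimal partial-sum evaluation (the function name `mittag_leffler` and the truncation level are our choices, adequate for orders *α* ≤ 2 and moderate |*z*|) might look as follows:

```python
import math

def mittag_leffler(alpha, z, terms=80):
    """Partial sum of E_alpha(z) = sum_{k >= 0} z**k / Gamma(alpha*k + 1).

    The full series converges on the whole complex plane for alpha > 0;
    80 terms keep Gamma's argument below its float overflow threshold
    for alpha <= 2 while leaving a negligible tail for moderate |z|."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))
```

As sanity checks, *E*_{1}(*z*) = e^{z} and *E*_{2}(*z*) = cosh(√*z*), so the partial sums can be compared against elementary functions.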

Thus, it only remains to prove (2.11). The left inequality is clear by definition. To prove the right inequality, we employ the relations (2.10a), (2.10b) and (2.10c) and see, using the notation *ā* := max_{i,j ∈ {1, 2}} |*a*_{ij}|, that we have for *k* ≥ 2 the following chain of inequalities:
$$\begin{array}{}{\displaystyle \sum _{\mu =0}^{k}\left(|{b}_{1,\mu ,k-\mu}|+|{b}_{2,\mu ,k-\mu}|\right)}\\ {\displaystyle \le \sum _{\mu =1}^{k}|{b}_{1,\mu ,k-\mu}|+\sum _{\mu =0}^{k-1}|{b}_{2,\mu ,k-\mu}|}\\ {\displaystyle \le \overline{a}\sum _{\mu =1}^{k}\frac{\mathrm{\Gamma}((\mu -1){\alpha}_{1}+(k-\mu ){\alpha}_{2}+1)}{\mathrm{\Gamma}(\mu {\alpha}_{1}+(k-\mu ){\alpha}_{2}+1)}(|{b}_{1,\mu -1,k-\mu}|+|{b}_{2,\mu -1,k-\mu}|)}\\ {\displaystyle \phantom{\le}+\overline{a}\sum _{\mu =0}^{k-1}\frac{\mathrm{\Gamma}(\mu {\alpha}_{1}+(k-\mu -1){\alpha}_{2}+1)}{\mathrm{\Gamma}(\mu {\alpha}_{1}+(k-\mu ){\alpha}_{2}+1)}(|{b}_{1,\mu ,k-\mu -1}|+|{b}_{2,\mu ,k-\mu -1}|)}\\ ={\displaystyle \overline{a}\sum _{\mu =1}^{k}\frac{\mathrm{\Gamma}((\mu -1){\alpha}_{1}+(k-\mu ){\alpha}_{2}+1)}{\mathrm{\Gamma}(\mu {\alpha}_{1}+(k-\mu ){\alpha}_{2}+1)}(|{b}_{1,\mu -1,k-\mu}|+|{b}_{2,\mu -1,k-\mu}|)}\\ {\displaystyle \phantom{\le}+\overline{a}\sum _{\mu =1}^{k}\frac{\mathrm{\Gamma}((\mu -1){\alpha}_{1}+(k-\mu ){\alpha}_{2}+1)}{\mathrm{\Gamma}((\mu -1){\alpha}_{1}+(k-\mu +1){\alpha}_{2}+1)}(|{b}_{1,\mu -1,k-\mu}|+|{b}_{2,\mu -1,k-\mu}|)}\\ {\displaystyle =\overline{a}\sum _{\mu =1}^{k}{w}_{k,\mu}({\alpha}_{1},{\alpha}_{2})(|{b}_{1,\mu -1,k-\mu}|+|{b}_{2,\mu -1,k-\mu}|)}\end{array}$$

with
$$\begin{array}{}{\displaystyle {w}_{k,\mu}({\alpha}_{1},{\alpha}_{2})=\frac{\mathrm{\Gamma}((\mu -1){\alpha}_{1}+(k-\mu ){\alpha}_{2}+1)}{\mathrm{\Gamma}(\mu {\alpha}_{1}+(k-\mu ){\alpha}_{2}+1)}+\frac{\mathrm{\Gamma}((\mu -1){\alpha}_{1}+(k-\mu ){\alpha}_{2}+1)}{\mathrm{\Gamma}((\mu -1){\alpha}_{1}+(k-\mu +1){\alpha}_{2}+1)}.}\end{array}$$

Both fractions on the right-hand side have the same numerator, but the arguments of their denominators differ by *α*_{2} − *α*_{1}; the well-known monotonicity of the Gamma function thus allows us to conclude that, for sufficiently large *k*, we have
$$\begin{array}{}{\displaystyle {w}_{k,\mu}({\alpha}_{1},{\alpha}_{2})\le 2\frac{\mathrm{\Gamma}((\mu -1){\alpha}_{1}+(k-\mu ){\alpha}_{2}+1)}{\mathrm{\Gamma}((\mu -1){\alpha}_{1}+(k-\mu ){\alpha}_{2}+{\alpha}^{\ast}+1)}}\\ {\displaystyle \phantom{\rule{2em}{0ex}}\phantom{\rule{2em}{0ex}}\phantom{\rule{1em}{0ex}}\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}=2\frac{\mathrm{\Gamma}(u+\mu ({\alpha}_{1}-{\alpha}_{2}))}{\mathrm{\Gamma}(u+\mu ({\alpha}_{1}-{\alpha}_{2})+{\alpha}^{\ast})}}\end{array}$$(2.12)

with *u* := −*α*_{1} + *k α*_{2} + 1. For *γ* > 0 and *z* → ∞, Stirling’s formula yields the asymptotic relation Γ(*z*) / Γ(*z* + *γ*) = *z*^{−γ} (1 + *o*(1)), and this quotient is monotonically decreasing in *z*. Hence, for sufficiently large *k*, the quotient on the right-hand side of (2.12) is monotonically decreasing with respect to *μ* if *α*_{1} ≥ *α*_{2} and monotonically increasing with respect to *μ* if *α*_{1} < *α*_{2}. Therefore, the maximum of this expression over all admissible values of *μ* is attained at *μ* = 1 if *α*_{1} ≥ *α*_{2} and at *μ* = *k* if *α*_{1} < *α*_{2}. These observations may be summarized in the form
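The behavior of the Gamma-function quotient invoked in this step is easy to check numerically. The sketch below (the helper name `gamma_quotient` is ours; log-Gamma is used to avoid floating-point overflow for large arguments) compares Γ(*z*)/Γ(*z* + *γ*) with its Stirling approximation *z*^{−γ}:

```python
import math

def gamma_quotient(z, gamma):
    """Gamma(z) / Gamma(z + gamma), computed via log-Gamma so that large z
    does not overflow; by Stirling's formula this behaves like
    z**(-gamma) * (1 + o(1)) as z -> infinity."""
    return math.exp(math.lgamma(z) - math.lgamma(z + gamma))

# For gamma > 0 the quotient decreases monotonically in z, and the relative
# deviation from the asymptotic value z**(-gamma) shrinks as z grows.
```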
$$\begin{array}{}{\displaystyle {w}_{k,\mu}({\alpha}_{1},{\alpha}_{2})\le 2\frac{\mathrm{\Gamma}(\frac{k-1}{2}({\alpha}_{1}+{\alpha}_{2}-|{\alpha}_{1}-{\alpha}_{2}|)+1)}{\mathrm{\Gamma}(\frac{k-1}{2}({\alpha}_{1}+{\alpha}_{2}-|{\alpha}_{1}-{\alpha}_{2}|)+{\alpha}^{\ast}+1)}\phantom{\rule{negativethinmathspace}{0ex}}=\phantom{\rule{negativethinmathspace}{0ex}}2\frac{\mathrm{\Gamma}((k\phantom{\rule{negativethinmathspace}{0ex}}-\phantom{\rule{negativethinmathspace}{0ex}}1){\alpha}^{\ast}+1)}{\mathrm{\Gamma}(k{\alpha}^{\ast}+1)},}\end{array}$$

and this implies
$$\begin{array}{}{\displaystyle {\beta}_{1k}+{\beta}_{2k}\le 2\overline{a}\frac{\mathrm{\Gamma}((k-1){\alpha}^{\ast}+1)}{\mathrm{\Gamma}(k{\alpha}^{\ast}+1)}\sum _{\mu =1}^{k}(|{b}_{1,\mu -1,k-\mu}|+|{b}_{2,\mu -1,k-\mu}|)}\\ {\displaystyle \phantom{\rule{2em}{0ex}}\phantom{\rule{2em}{0ex}}\phantom{\rule{thinmathspace}{0ex}}=2\overline{a}\frac{\mathrm{\Gamma}((k-1){\alpha}^{\ast}+1)}{\mathrm{\Gamma}(k{\alpha}^{\ast}+1)}\sum _{\mu =0}^{k-1}(|{b}_{1,\mu ,k-1-\mu}|+|{b}_{2,\mu ,k-1-\mu}|)}\\ {\displaystyle \phantom{\rule{2em}{0ex}}\phantom{\rule{2em}{0ex}}\phantom{\rule{thinmathspace}{0ex}}=2\overline{a}\frac{\mathrm{\Gamma}((k-1){\alpha}^{\ast}+1)}{\mathrm{\Gamma}(k{\alpha}^{\ast}+1)}({\beta}_{1,k-1}+{\beta}_{2,k-1}),}\end{array}$$

if *k* is large enough. Thus, for a sufficiently large fixed constant *N* and arbitrary *k*, we deduce by induction the estimate
$$\begin{array}{}{\displaystyle {\beta}_{1,N+k}+{\beta}_{2,N+k}\le \frac{(2\overline{a}{)}^{k}\mathrm{\Gamma}(N{\alpha}^{\ast}+1)}{\mathrm{\Gamma}((N+k){\alpha}^{\ast}+1)}({\beta}_{1,N}+{\beta}_{2,N}),}\end{array}$$

which shows (2.11) and completes the proof of Lemma 2.5. □

The same ideas and methods can be applied if the dimension of the fractional differential equation system is greater than 2. This leads to the following result:

#### Theorem 2.6

*Let* *α* = (*α*_{1}, …, *α*_{d}) ∈ (0, 1]^{d} *and A* ∈ ℂ^{d × d}. *Then*, *for each* *x*_{0} ∈ ℂ^{d}, *the initial value problem*
$$\begin{array}{}{D}_{\ast}^{\alpha}x(t)=Ax(t),\phantom{\rule{2em}{0ex}}x(0)={x}_{0},\end{array}$$(2.13)

*has a uniquely determined solution in* *C*([0,∞); ℂ^{d}). *The components of this solution can be expressed in the form*
$$\begin{array}{}{\displaystyle {x}_{j}(t)=\sum _{k=0}^{\mathrm{\infty}}\phantom{\rule{thinmathspace}{0ex}}\sum _{{\ell}_{1},{\ell}_{2},\dots ,{\ell}_{j-1},{\ell}_{j+1},\dots ,{\ell}_{d}=1}^{\mathrm{\infty}}{b}_{k,{\ell}_{1},{\ell}_{2},\dots ,{\ell}_{j-1},{\ell}_{j+1},\dots ,{\ell}_{d}}\,{t}^{k{\alpha}_{j}+\sum _{\mu =1,\mu \ne j}^{d}{\ell}_{\mu}{\alpha}_{\mu}},}\end{array}$$(2.14)

*and the series in eq*. (2.14) *converges for all t* ≥ 0.
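Although Theorem 2.6 is stated abstractly, the two-dimensional recursion (2.10) extends naturally to multi-indices *m* ∈ ℕ_{0}^{d}: writing ⟨*m*, *α*⟩ = Σ_{i} *m*_{i} *α*_{i}, a comparison of coefficients as above gives *b*_{j, m+e_j} = Γ(⟨*m*, *α*⟩ + 1)/Γ(⟨*m*, *α*⟩ + *α*_{j} + 1) Σ_{i} *a*_{ji} *b*_{i,m}. The following sketch (the function name `solve_series`, the truncation level *K*, and the use of real data are our choices) evaluates the resulting truncated series:

```python
import math

def solve_series(alpha, A, x0, t, K=25):
    """Truncated generalized power-series solution of D^alpha x = A x, x(0) = x0.

    b[m][j] holds the coefficient of t^{<m, alpha>} in component x_j; the
    multi-indices m with |m| <= K are filled level by level, generalizing
    the two-dimensional recursion (2.10)."""
    d = len(alpha)
    b = {(0,) * d: list(x0)}                      # level 0: b_{j,0} = x0_j
    for mu in range(K):
        level = [m for m in b if sum(m) == mu]    # all stored indices of level mu
        for m in level:
            w = sum(mi * ai for mi, ai in zip(m, alpha))
            rhs = [sum(A[j][i] * b[m][i] for i in range(d)) for j in range(d)]
            for j in range(d):
                m2 = m[:j] + (m[j] + 1,) + m[j + 1:]   # m + e_j
                if m2 not in b:
                    b[m2] = [0.0] * d             # formal zeros, as in (2.10a)
                b[m2][j] = (math.gamma(w + 1) / math.gamma(w + alpha[j] + 1)
                            * rhs[j])
    # evaluate the truncated generalized power series at time t > 0
    x = [0.0] * d
    for m, coeff in b.items():
        tw = t ** sum(mi * ai for mi, ai in zip(m, alpha))
        for j in range(d):
            x[j] += coeff[j] * tw
    return x
```

When all orders equal 1, summing the coefficients over each level |*m*| = *n* reproduces the Taylor recursion for the classical system *x*′ = *Ax*, which makes the sketch easy to validate against elementary solutions.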
