# Open Mathematics

### formerly Central European Journal of Mathematics

Editor-in-Chief: Gianazza, Ugo / Vespri, Vincenzo


# Performance and stochastic stability of the adaptive fading extended Kalman filter with the matrix forgetting factor

Cenker Biçer / Levent Özbek / Hasan Erbay (corresponding author)

Computer Engineering Department, Engineering Faculty, Kırıkkale University, Yahşihan, 71450 Kırıkkale, Turkey
Published Online: 2016-11-17 | DOI: https://doi.org/10.1515/math-2016-0083

## Abstract

In this paper, the stability of the adaptive fading extended Kalman filter with the matrix forgetting factor is analysed when the filter is applied to the state estimation problem for non-linear discrete-time stochastic systems with noise terms. The analysis is conducted in a manner similar to the stability analysis of the standard extended Kalman filter within a stochastic framework. The theoretical results show that, under certain conditions on the initial estimation error and the noise terms, the estimation error remains bounded and the state estimation is stable.

The importance of the theoretical results and the contribution of the adaptation method to the estimation performance are demonstrated in comparison with the standard extended Kalman filter in the simulation section.

MSC 2010: 15A51; 37Hxx

## 1 Introduction

The Kalman filter (KF) and the standard extended Kalman filter (EKF) are the two most popular methods used for state estimation in linear and non-linear systems, respectively. They have maintained their popularity from their discovery to the present day since they can easily be applied to estimation problems in many diverse areas, including the natural and physical sciences, the military and economics. The KF yields the optimum state estimate when the system dynamics are fully known and the system noise processes are Gaussian white noise [1–5]. On the other hand, both the KF and the EKF might give biased estimates and diverge when the initial estimates are not sufficiently good, when the arbitrary noise matrices have not been chosen appropriately, or when changes occur in the system dynamics [6, 7]. To overcome these problems, several adaptive filtering techniques [8–18] have been proposed. Among them is the adaptive fading extended Kalman filter with the matrix forgetting factor (AFEKF) [8]. The AFEKF is based on scaling the error covariance of the prediction with a diagonal matrix forgetting factor; the calculation of the diagonal entries is described in [8, 19]. The AFEKF compensates for the effects of poor initial information or changes in the system parameters.

Like the EKF, any adaptive EKF can be used for state estimation in non-linear systems. However, it is crucial to decide which filter to use, because the filter estimates are desired to be close to the true values during the filtering process; in other words, the estimation error should be as small as possible and the estimates should be stable. To address this issue, the stability and convergence of the discrete-time EKF have been studied in [7, 8, 20–24].

Hence, the stability analysis of the AFEKF, as well as the determination of its stability conditions, is very important. The convergence and stability properties of the AFEKF without noise terms can be found in [8], where it is shown that the AFEKF is exponentially stable for deterministic non-linear systems, that is, the estimation error is bounded.

In this study, we extend the results of [8] by removing the restriction to noise-free systems. Using the direct method of Lyapunov, we prove that under certain conditions the AFEKF is still an exponential observer, i.e., the dynamics of the estimation error is exponentially stable. This is an important result, as real-life systems are usually not noise free.

Throughout the manuscript, ‖ ⋅ ‖ denotes the Euclidean norm of a real vector or the spectral norm of a real matrix.

The rest of the manuscript is structured as follows. In Section 2 we review the state estimation problem for non-linear stochastic discrete-time systems and present some auxiliary results from stochastic stability theory. In Section 3, the AFEKF is introduced and the boundedness of its error is proved. A numerical simulation is given in Section 4, and conclusions are drawn in Section 5.

## 2 Review: state estimation and stochastic boundedness

This section overviews some definitions and fundamental results from stochastic stability theory. Recall that a non-linear discrete-time stochastic system is given by the equations
$$x_{n+1} = f(x_n, u_n) + G_n w_n, \tag{1}$$
$$y_n = h(x_n) + D_n v_n, \tag{2}$$

where n ∈ ℕ₀ is the discrete time index, $x_n \in \mathbb{R}^q$ is the state vector, $u_n \in \mathbb{R}^q$ is the input vector and $y_n \in \mathbb{R}^m$ is the output vector. Moreover, $v_n \in \mathbb{R}^k$ and $w_n \in \mathbb{R}^l$ are uncorrelated zero-mean white noise processes with identity covariance, and $D_n \in \mathbb{R}^{m \times k}$, $G_n \in \mathbb{R}^{q \times l}$ are time-varying matrices. The functions f and h are assumed to be of class C¹, i.e., continuously differentiable.

The state estimator for the system is
$$\hat{x}_{n+1} = f(\hat{x}_n, u_n) + K_n\left(y_n - h(\hat{x}_n)\right), \tag{3}$$

where $K_n \in \mathbb{R}^{q \times m}$ is a time-varying matrix called the observer gain and $\hat{x}_n$ denotes the estimated state.

We define
$$A_n = \frac{\partial f}{\partial x}(\hat{x}_n, u_n), \tag{4}$$
$$C_n = \frac{\partial h}{\partial x}(\hat{x}_n), \tag{5}$$
and the estimation error vector
$$\zeta_n = x_n - \hat{x}_n. \tag{6}$$

Subtracting (3) from (1) and taking equations (2), (4)-(5) into account, we get
$$\zeta_{n+1} = (A_n - K_n C_n)\zeta_n + r_n + s_n, \tag{7}$$

where
$$r_n = \varphi_n(x_n, \hat{x}_n, u_n) - K_n \chi_n(x_n, \hat{x}_n), \tag{8}$$
$$s_n = G_n w_n - K_n D_n v_n, \tag{9}$$
and $\varphi_n$, $\chi_n$ denote the remainders of the first-order Taylor expansions of f and h about $\hat{x}_n$, that is, $f(x_n,u_n) - f(\hat{x}_n,u_n) = A_n\zeta_n + \varphi_n(x_n,\hat{x}_n,u_n)$ and $h(x_n) - h(\hat{x}_n) = C_n\zeta_n + \chi_n(x_n,\hat{x}_n)$.

To analyze the error dynamics given in equation (7), we recall the following lemma on the boundedness of stochastic processes.

**Lemma 2.1.** Let $V_n(\zeta_n)$ be a stochastic process and let $\underline{v}, \overline{v}, \mu > 0$ and $0 < \alpha < 1$ be real numbers such that the inequalities
$$\underline{v}\,\|\zeta_n\|^2 \le V_n(\zeta_n) \le \overline{v}\,\|\zeta_n\|^2 \tag{10}$$
and
$$E\left\{V_{n+1}(\zeta_{n+1}) \mid \zeta_n\right\} - V_n(\zeta_n) \le \mu - \alpha V_n(\zeta_n) \tag{11}$$
hold for every solution of equation (7). Then the stochastic process is exponentially bounded in mean square, that is,
$$E\|\zeta_n\|^2 \le \frac{\overline{v}}{\underline{v}}\,E\|\zeta_0\|^2 (1-\alpha)^n + \frac{\mu}{\underline{v}} \sum_{i=1}^{n-1} (1-\alpha)^i \tag{12}$$
for every n ∈ ℕ₀. Moreover, the stochastic process is bounded with probability one.

*Proof.* See [25].
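To make the bound (12) concrete, the following sketch evaluates its right-hand side for a hypothetical set of constants (`v_lo`, `v_hi`, `mu` and `alpha` are illustrative assumptions, not values derived in the paper); it exhibits the geometrically decaying transient plus the bounded noise-driven term:

```python
# Evaluate the right-hand side of (12) for illustrative constants.
# v_lo, v_hi, mu and alpha are hypothetical choices.
def ms_bound(n, e0_sq, v_lo=0.5, v_hi=2.0, mu=0.1, alpha=0.2):
    """Upper bound on E||zeta_n||^2 from Lemma 2.1."""
    transient = (v_hi / v_lo) * e0_sq * (1.0 - alpha) ** n
    steady = (mu / v_lo) * sum((1.0 - alpha) ** i for i in range(1, n))
    return transient + steady

# The transient decays geometrically; the noise-driven term approaches
# (mu / v_lo) * (1 - alpha) / alpha = 0.8 for these constants.
bounds = [ms_bound(n, e0_sq=4.0) for n in (1, 10, 100)]
```

The bound shrinks monotonically here towards the steady-state level set by μ, mirroring the two terms of (12).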

## 3 Error bounds for the AFEKF

**Definition 3.1.** A discrete-time adaptive fading extended Kalman filter with the matrix forgetting factor is given by the coupled difference equations
$$\hat{x}_{n+1} = f(\hat{x}_n, u_n) + K_n\left(y_n - h(\hat{x}_n)\right) \tag{13}$$

and the Riccati difference equation
$$P_{n+1} = A_n \Lambda_n P_n \Lambda_n^T A_n^T + \Lambda_n Q_n \Lambda_n^T - K_n\left(C_n \Lambda_n P_n \Lambda_n^T C_n^T + R_n\right)K_n^T, \tag{14}$$
where $K_n$ is the Kalman gain given by
$$K_n = A_n \Lambda_n P_n \Lambda_n^T C_n^T \left(C_n \Lambda_n P_n \Lambda_n^T C_n^T + R_n\right)^{-1}. \tag{15}$$

Moreover, $\Lambda_n = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_q)$ is a time-varying q × q diagonal matrix forgetting factor with $\lambda_i \ge 1$, $i = 1, 2, \dots, q$ (see [8, 19] for the computation of $\Lambda_n$). Furthermore, $Q_n$ and $R_n$ are positive definite, symmetric matrices of dimensions q × q and m × m, respectively; they are the covariance matrices of the corrupting noise terms in (1)-(2).
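For illustration, one propagation step of the coupled recursions (13)-(15) can be sketched as follows; the helper name `afekf_step` and the scalar toy system in the usage lines are our own assumptions, not the system studied in the paper:

```python
import numpy as np

def afekf_step(x_hat, P, y, f, h, A, C, Lam, Q, R):
    """One AFEKF step: gain (15), state update (13), Riccati step (14)."""
    LPL = Lam @ P @ Lam.T                       # scaled covariance
    S = C @ LPL @ C.T + R                       # innovation covariance
    K = A @ LPL @ C.T @ np.linalg.inv(S)        # gain K_n, eq. (15)
    x_next = f(x_hat) + K @ (y - h(x_hat))      # state update, eq. (13)
    P_next = A @ LPL @ A.T + Lam @ Q @ Lam.T - K @ S @ K.T  # eq. (14)
    return x_next, P_next, K

# Scalar toy example: f(x) = 0.9 x, h(x) = x, Lambda = 1.2 (>= 1).
f = lambda x: 0.9 * x
h = lambda x: x
A = np.array([[0.9]]); C = np.array([[1.0]])
Lam = np.array([[1.2]]); Q = np.array([[0.01]]); R = np.array([[0.04]])
x_hat, P = np.array([0.0]), np.array([[1.0]])
x_hat, P, K = afekf_step(x_hat, P, np.array([1.0]), f, h, A, C, Lam, Q, R)
```

With $\Lambda_n = I$ the recursion reduces to the standard EKF; diagonal entries $\lambda_i > 1$ inflate the predicted covariance, which keeps the gain larger and lets the filter track changes in the system dynamics.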

**Theorem 3.2.** Consider a non-linear stochastic system given by (1)-(2) and an adaptive fading extended Kalman filter as stated in Definition 3.1. Let the following assumptions hold.

1. There are real numbers $\overline{a}, \overline{c}, \underline{p}, \overline{p}, \underline{q}, \underline{r} > 0$ and $\underline{\lambda}, \overline{\lambda} \ge 1$ such that the following bounds hold for every n ∈ ℕ₀:
$$\|A_n\| \le \overline{a}, \tag{16a}$$
$$\|C_n\| \le \overline{c}, \tag{16b}$$
$$\underline{p}\, I \le P_n \le \overline{p}\, I, \tag{16c}$$
$$\underline{q}\, I \le Q_n, \tag{16d}$$
$$\underline{r}\, I \le R_n, \tag{16e}$$
$$\underline{\lambda}\, I \le \Lambda_n \le \overline{\lambda}\, I, \tag{16f}$$
where $\underline{q}$ and $\underline{r}$ are the smallest eigenvalues of the matrices $Q_n$ and $R_n$, respectively. Moreover, $\underline{\lambda}$ and $\overline{\lambda}$ are the smallest and the largest diagonal entries of $\Lambda_n$, respectively.

2. $A_n$ is a nonsingular matrix for every n ∈ ℕ₀.

3. There are positive real numbers $\epsilon_\varphi, \epsilon_\chi, \kappa_\varphi, \kappa_\chi > 0$ such that the non-linear functions $\varphi_n$, $\chi_n$ in (8) are bounded via
$$\|\varphi_n(x, \hat{x}, u)\| \le \kappa_\varphi \|x - \hat{x}\|^2 \quad \text{for } \|x - \hat{x}\| \le \epsilon_\varphi, \tag{17}$$
$$\|\chi_n(x, \hat{x})\| \le \kappa_\chi \|x - \hat{x}\|^2 \quad \text{for } \|x - \hat{x}\| \le \epsilon_\chi. \tag{18}$$

Then the estimation error $\zeta_n$ given by (6) is exponentially bounded in mean square and bounded with probability one, provided that the initial estimation error satisfies
$$\|\zeta_0\| \le \epsilon \tag{19}$$

and the covariance matrices of the noise terms are bounded via
$$G_n \Lambda_n \Lambda_n^T G_n^T \le \delta I, \tag{20}$$
$$D_n D_n^T \le \delta I \tag{21}$$
for some δ, ε > 0.

To prove Theorem 3.2 we need the following auxiliary results.

**Lemma 3.3.** Under the conditions of Theorem 3.2 there is a real number 0 < α < 1 such that
$$1 - \alpha = \frac{1}{\underline{\lambda}^2}\left(1 + \frac{\underline{\lambda}^2 \underline{q}}{\overline{\lambda}^2\, \overline{p}\left(\overline{a} + \overline{a}\,\overline{\lambda}^2\,\overline{p}\,\overline{c}^2\,\frac{1}{\underline{r}}\right)^2}\right)^{-1}$$
and
$$\Pi_n = \left(\Lambda_n P_n \Lambda_n^T\right)^{-1}$$
satisfies the inequality
$$(A_n - K_n C_n)^T\, \Pi_{n+1}\, (A_n - K_n C_n) \le (1 - \alpha)\,\Pi_n \tag{22}$$
for n ≥ 0, with the Kalman gain $K_n$ given in (15).

*Proof.* The proof mimics that of Lemma 3.1 in [25]. Substituting (15) in (14) and rearranging the resulting equation yields
$$P_{n+1} = (A_n - K_n C_n)\Lambda_n P_n \Lambda_n^T (A_n - K_n C_n)^T + \Lambda_n Q_n \Lambda_n^T + K_n C_n \Lambda_n P_n \Lambda_n^T (A_n - K_n C_n)^T. \tag{23}$$

Multiplying the factor $(A_n - K_n C_n)\Lambda_n P_n \Lambda_n^T$ by $A_n^{-1}$ from the left and using equation (15) yields
$$A_n^{-1}(A_n - K_n C_n)\Lambda_n P_n \Lambda_n^T = \Lambda_n P_n \Lambda_n^T - \Lambda_n P_n \Lambda_n^T C_n^T\left(C_n \Lambda_n P_n \Lambda_n^T C_n^T + R_n\right)^{-1} C_n \Lambda_n P_n \Lambda_n^T. \tag{24}$$

Note that the right-hand side of equation (24) is a symmetric matrix. Thus, applying the matrix inversion lemma in [26] we obtain
$$A_n^{-1}(A_n - K_n C_n)\Lambda_n P_n \Lambda_n^T = \left(\left(\Lambda_n P_n \Lambda_n^T\right)^{-1} + C_n^T R_n^{-1} C_n\right)^{-1} > 0. \tag{25}$$

Furthermore,
$$A_n^{-1} K_n C_n = \Lambda_n P_n \Lambda_n^T C_n^T \left(C_n \Lambda_n P_n \Lambda_n^T C_n^T + R_n\right)^{-1} C_n \ge 0. \tag{26}$$

From equations (25) and (26), together with $\Lambda_n P_n \Lambda_n^T = \left(\Lambda_n P_n \Lambda_n^T\right)^T$, we obtain
$$K_n C_n \Lambda_n P_n \Lambda_n^T (A_n - K_n C_n)^T = A_n \left(A_n^{-1} K_n C_n\right)\left(A_n^{-1}(A_n - K_n C_n)\Lambda_n P_n \Lambda_n^T\right)^T A_n^T \ge 0. \tag{27}$$

From equations (23) and (27) we have
$$P_{n+1} \ge (A_n - K_n C_n)\Lambda_n P_n \Lambda_n^T (A_n - K_n C_n)^T + \Lambda_n Q_n \Lambda_n^T. \tag{28}$$

The inequality (25) implies that $(A_n - K_n C_n)^{-1}$ exists, so we obtain
$$P_{n+1} \ge (A_n - K_n C_n)\left[\Lambda_n P_n \Lambda_n^T + (A_n - K_n C_n)^{-1}\Lambda_n Q_n \Lambda_n^T (A_n - K_n C_n)^{-T}\right](A_n - K_n C_n)^T. \tag{29}$$

From (15) and (16a)-(16f) we have
$$\|K_n\| \le \|A_n\|\,\|\Lambda_n\|\,\|P_n\|\,\|\Lambda_n^T\|\,\|C_n^T\|\,\left\|\left(C_n \Lambda_n P_n \Lambda_n^T C_n^T + R_n\right)^{-1}\right\| \le \overline{a}\,\overline{\lambda}^2\,\overline{p}\,\overline{c}\,\frac{1}{\underline{r}}. \tag{30}$$

Substituting the inequalities (16a)-(16f) into (29) we obtain
$$P_{n+1} \ge (A_n - K_n C_n)\left[\Lambda_n P_n \Lambda_n^T + \frac{\underline{\lambda}^2\, \underline{q}}{\left(\overline{a} + \overline{a}\,\overline{\lambda}^2\,\overline{p}\,\overline{c}^2\,\frac{1}{\underline{r}}\right)^2}\, I\right](A_n - K_n C_n)^T. \tag{31}$$

Multiplying both sides of (31) from the left and right with $\Lambda_{n+1}$ and $\Lambda_{n+1}^T$, respectively, and using the inequality (16f) gives
$$\Lambda_{n+1} P_{n+1} \Lambda_{n+1}^T \ge \underline{\lambda}^2\, (A_n - K_n C_n)\left[\Lambda_n P_n \Lambda_n^T + \frac{\underline{\lambda}^2\, \underline{q}}{\left(\overline{a} + \overline{a}\,\overline{\lambda}^2\,\overline{p}\,\overline{c}^2\,\frac{1}{\underline{r}}\right)^2}\, I\right](A_n - K_n C_n)^T. \tag{32}$$

Taking the inverse of both sides of (32), multiplying from the left and right with $(A_n - K_n C_n)^T$ and $(A_n - K_n C_n)$, respectively, and using $\Lambda_n P_n \Lambda_n^T \le \overline{\lambda}^2\,\overline{p}\, I$, we have
$$(A_n - K_n C_n)^T\, \Pi_{n+1}\, (A_n - K_n C_n) \le \frac{1}{\underline{\lambda}^2}\left(1 + \frac{\underline{\lambda}^2\, \underline{q}}{\overline{\lambda}^2\, \overline{p}\left(\overline{a} + \overline{a}\,\overline{\lambda}^2\,\overline{p}\,\overline{c}^2\,\frac{1}{\underline{r}}\right)^2}\right)^{-1} \Pi_n. \tag{33}$$

Then the result follows with
$$1 - \alpha = \frac{1}{\underline{\lambda}^2}\left(1 + \frac{\underline{\lambda}^2\, \underline{q}}{\overline{\lambda}^2\, \overline{p}\left(\overline{a} + \overline{a}\,\overline{\lambda}^2\,\overline{p}\,\overline{c}^2\,\frac{1}{\underline{r}}\right)^2}\right)^{-1}. \tag{34}$$

**Lemma 3.4.** Let the conditions of Theorem 3.2 be fulfilled, let $\Pi_n = \left(\Lambda_n P_n \Lambda_n^T\right)^{-1}$, and let $K_n$ and $r_n$ be given by (15) and (8). Then there are positive real numbers $\epsilon'$, $\kappa_{nonl}$ such that
$$r_n^T\, \Pi_{n+1}\left(2(A_n - K_n C_n)(x_n - \hat{x}_n) + r_n\right) \le \kappa_{nonl}\,\|x_n - \hat{x}_n\|^3 \tag{35}$$
holds for $\|x_n - \hat{x}_n\| \le \epsilon'$.

*Proof.* From (15), (16a)-(16f) and $C_n \Lambda_n P_n \Lambda_n^T C_n^T > 0$, we have
$$\|K_n\| \le \overline{a}\,\overline{\lambda}^2\,\overline{p}\,\overline{c}\,\frac{1}{\underline{r}} \tag{36}$$
and using this in (8) gives
$$\|r_n\| \le \|\varphi_n(x_n, \hat{x}_n, u_n)\| + \overline{a}\,\overline{\lambda}^2\,\overline{p}\,\overline{c}\,\frac{1}{\underline{r}}\,\|\chi_n(x_n, \hat{x}_n)\|. \tag{37}$$

By choosing $\epsilon' = \min(\epsilon_\varphi, \epsilon_\chi)$ and using (17), (18) we obtain, for $\|x_n - \hat{x}_n\| \le \epsilon'$,
$$\|r_n\| \le \kappa_\varphi\, \|x_n - \hat{x}_n\|^2 + \overline{a}\,\overline{\lambda}^2\,\overline{p}\,\overline{c}\,\frac{1}{\underline{r}}\,\kappa_\chi\, \|x_n - \hat{x}_n\|^2. \tag{38}$$
Hence
$$\|r_n\| \le \kappa'\, \|x_n - \hat{x}_n\|^2, \tag{39}$$
where
$$\kappa' = \kappa_\varphi + \left(\overline{a}\,\overline{\lambda}^2\,\overline{p}\,\overline{c}\,\frac{1}{\underline{r}}\right)\kappa_\chi. \tag{40}$$

Then, for $\|x_n - \hat{x}_n\| \le \epsilon'$, from (38), taking $\Pi_{n+1} = \left(\Lambda_{n+1} P_{n+1} \Lambda_{n+1}^T\right)^{-1}$ and using (16a)-(16f), we obtain
$$r_n^T\, \Pi_{n+1}\left(2(A_n - K_n C_n)(x_n - \hat{x}_n) + r_n\right) \le \kappa'\,\|x_n - \hat{x}_n\|^2\,\frac{1}{\underline{\lambda}^2\, \underline{p}}\left(2\left(\overline{a} + \overline{a}\,\overline{\lambda}^2\,\overline{p}\,\overline{c}^2\,\frac{1}{\underline{r}}\right)\|x_n - \hat{x}_n\| + \kappa'\epsilon'\,\|x_n - \hat{x}_n\|\right). \tag{41}$$
Rearranging (41) gives
$$r_n^T\, \Pi_{n+1}\left(2(A_n - K_n C_n)(x_n - \hat{x}_n) + r_n\right) \le \kappa'\,\frac{1}{\underline{\lambda}^2\, \underline{p}}\left(2\left(\overline{a} + \overline{a}\,\overline{\lambda}^2\,\overline{p}\,\overline{c}^2\,\frac{1}{\underline{r}}\right) + \kappa'\epsilon'\right)\|x_n - \hat{x}_n\|^3 = \kappa_{nonl}\,\|x_n - \hat{x}_n\|^3, \tag{42}$$
where
$$\kappa_{nonl} = \kappa'\,\frac{1}{\underline{\lambda}^2\, \underline{p}}\left(2\left(\overline{a} + \overline{a}\,\overline{\lambda}^2\,\overline{p}\,\overline{c}^2\,\frac{1}{\underline{r}}\right) + \kappa'\epsilon'\right). \tag{43}$$

**Lemma 3.5.** Let the conditions of Theorem 3.2 be fulfilled, let $\Pi_{n+1} = \left(\Lambda_{n+1} P_{n+1} \Lambda_{n+1}^T\right)^{-1}$, and let $K_n$ and $s_n$ be given by (15) and (9). Then there is a positive real number $\kappa_{noise}$ independent of δ such that
$$E\left\{s_n^T\, \Pi_{n+1}\, s_n\right\} \le \kappa_{noise}\,\delta \tag{44}$$
holds.

*Proof.* Using equation (9) and expanding the product we obtain
$$s_n^T\, \Pi_{n+1}\, s_n = (G_n w_n - K_n D_n v_n)^T\, \Pi_{n+1}\, (G_n w_n - K_n D_n v_n) \tag{45}$$
$$= (G_n w_n)^T \Pi_{n+1} (G_n w_n) - (G_n w_n)^T \Pi_{n+1} (K_n D_n v_n) - (K_n D_n v_n)^T \Pi_{n+1} (G_n w_n) + (K_n D_n v_n)^T \Pi_{n+1} (K_n D_n v_n). \tag{46}$$
Since the vectors $w_n$ and $v_n$ are uncorrelated and zero mean, the cross terms vanish in expectation, and we are left with
$$s_n^T\, \Pi_{n+1}\, s_n = (G_n w_n)^T\, \Pi_{n+1}\, (G_n w_n) + (K_n D_n v_n)^T\, \Pi_{n+1}\, (K_n D_n v_n). \tag{47}$$

By the group of inequalities (16) and the inequality $C_n \Lambda_n P_n \Lambda_n^T C_n^T > 0$ we have
$$\|K_n\| \le \overline{a}\,\overline{\lambda}^2\,\overline{p}\,\overline{c}\,\frac{1}{\underline{r}}. \tag{48}$$

This inequality yields
$$s_n^T\, \Pi_{n+1}\, s_n \le \frac{1}{\underline{\lambda}^2\, \underline{p}}\, w_n^T G_n^T G_n w_n + \frac{\overline{a}^2\,\overline{p}^2\,\overline{c}^2\,\overline{\lambda}^2}{\underline{p}\,\underline{r}^2}\, v_n^T D_n^T D_n v_n. \tag{49}$$
Taking the trace of the above inequality we get
$$s_n^T\, \Pi_{n+1}\, s_n \le \frac{1}{\underline{\lambda}^2\, \underline{p}}\, \mathrm{tr}\left(w_n^T G_n^T G_n w_n\right) + \frac{\overline{a}^2\,\overline{p}^2\,\overline{c}^2\,\overline{\lambda}^2}{\underline{p}\,\underline{r}^2}\, \mathrm{tr}\left(v_n^T D_n^T D_n v_n\right). \tag{50}$$
Since $\mathrm{tr}(\Gamma\Delta) = \mathrm{tr}(\Delta\Gamma)$, using (50) we obtain
$$s_n^T\, \Pi_{n+1}\, s_n \le \frac{1}{\underline{\lambda}^2\, \underline{p}}\, \mathrm{tr}\left(G_n w_n w_n^T G_n^T\right) + \frac{\overline{a}^2\,\overline{p}^2\,\overline{c}^2\,\overline{\lambda}^2}{\underline{p}\,\underline{r}^2}\, \mathrm{tr}\left(D_n v_n v_n^T D_n^T\right), \tag{51}$$

where $D_n$ and $G_n$ are deterministic matrices. Recall that $w_n$ and $v_n$ are vector-valued white noise processes, thus
$$E\left\{v_n v_n^T\right\} = I \tag{52}$$
and
$$E\left\{w_n w_n^T\right\} = I \tag{53}$$
hold. Thus, taking expectations in (51) and noting that $\mathrm{tr}\left(G_n G_n^T\right) \le \mathrm{tr}\left(G_n \Lambda_n \Lambda_n^T G_n^T\right)$ since $\Lambda_n \Lambda_n^T \ge I$, we have
$$E\left\{s_n^T\, \Pi_{n+1}\, s_n\right\} \le \frac{1}{\underline{\lambda}^2\, \underline{p}}\, \mathrm{tr}\left(G_n \Lambda_n \Lambda_n^T G_n^T\right) + \frac{\overline{a}^2\,\overline{p}^2\,\overline{c}^2\,\overline{\lambda}^2}{\underline{p}\,\underline{r}^2}\, \mathrm{tr}\left(D_n D_n^T\right). \tag{54}$$

From the inequalities (20) and (21) we have
$$\mathrm{tr}\left(G_n \Lambda_n \Lambda_n^T G_n^T\right) \le \delta\, \mathrm{tr}(I) = q\delta \tag{55}$$
and
$$\mathrm{tr}\left(D_n D_n^T\right) \le \delta\, \mathrm{tr}(I) = m\delta, \tag{56}$$
where q and m are the numbers of rows of $G_n$ and $D_n$, respectively. Defining
$$\kappa_{noise} = \frac{q}{\underline{\lambda}^2\, \underline{p}} + \frac{\overline{a}^2\,\overline{c}^2\,\overline{p}^2\,\overline{\lambda}^2\, m}{\underline{p}\,\underline{r}^2} \tag{57}$$

yields
$$E\left\{s_n^T\, \Pi_{n+1}\, s_n\right\} \le \kappa_{noise}\,\delta. \tag{58}$$
This completes the proof.

We are now ready to prove the main result stated in Theorem 3.2.

*Proof of Theorem 3.2.* Consider the function of the estimation error
$$V_n(\zeta_n) = \zeta_n^T\, \Pi_n\, \zeta_n \tag{59}$$

with $\Pi_n = \left(\Lambda_n P_n \Lambda_n^T\right)^{-1}$, which is well defined since $P_n$ is positive definite. From the inequalities (16c)-(16f) we have
$$\frac{1}{\overline{p}\,\overline{\lambda}^2}\,\|\zeta_n\|^2 \le V_n(\zeta_n) \le \frac{1}{\underline{p}\,\underline{\lambda}^2}\,\|\zeta_n\|^2, \tag{60}$$

which is of the form (10) with $\underline{v} = \frac{1}{\overline{p}\,\overline{\lambda}^2}$ and $\overline{v} = \frac{1}{\underline{p}\,\underline{\lambda}^2}$. We need an upper bound on $E\{V_{n+1}(\zeta_{n+1}) \mid \zeta_n\}$ as stated in (11) to meet the requirements of Lemma 2.1. From (7) we obtain
$$V_{n+1}(\zeta_{n+1}) = \left[(A_n - K_n C_n)\zeta_n + r_n + s_n\right]^T \Pi_{n+1} \left[(A_n - K_n C_n)\zeta_n + r_n + s_n\right]. \tag{61}$$

Using Lemma 3.3 we obtain
$$V_{n+1}(\zeta_{n+1}) \le (1 - \alpha)V_n(\zeta_n) + r_n^T \Pi_{n+1}\left(2(A_n - K_n C_n)\zeta_n + r_n\right) + 2\, s_n^T \Pi_{n+1}\left((A_n - K_n C_n)\zeta_n + r_n\right) + s_n^T \Pi_{n+1} s_n. \tag{62}$$

Taking the conditional expectation $E\{V_{n+1}(\zeta_{n+1}) \mid \zeta_n\}$ and using the white noise property, the term $E\left\{s_n^T \Pi_{n+1}\left((A_n - K_n C_n)\zeta_n + r_n\right) \mid \zeta_n\right\}$ vanishes, since $s_n$ is zero mean and none of $\Pi_{n+1}$, $A_n$, $K_n$, $C_n$, $r_n$, $\zeta_n$ depends on $v_n$ or $w_n$. The remaining terms are estimated by Lemma 3.4 and Lemma 3.5 as
$$E\left\{V_{n+1}(\zeta_{n+1}) \mid \zeta_n\right\} - V_n(\zeta_n) \le -\alpha V_n(\zeta_n) + \kappa_{nonl}\,\|\zeta_n\|^3 + \kappa_{noise}\,\delta \tag{63}$$

for $\|\zeta_n\| \le \epsilon'$. We define
$$\epsilon = \min\left(\epsilon',\; \frac{\alpha}{2\,\overline{p}\,\overline{\lambda}^2\,\kappa_{nonl}}\right). \tag{64}$$

Then, from (59) and (60), under the condition $\|\zeta_n\| \le \epsilon$ we obtain
$$\kappa_{nonl}\,\|\zeta_n\|\,\|\zeta_n\|^2 \le \frac{\alpha}{2\,\overline{p}\,\overline{\lambda}^2}\,\|\zeta_n\|^2 \le \frac{\alpha}{2}\, V_n(\zeta_n). \tag{65}$$

Substituting into (63) yields
$$E\left\{V_{n+1}(\zeta_{n+1}) \mid \zeta_n\right\} - V_n(\zeta_n) \le -\alpha V_n(\zeta_n) + \underbrace{\kappa_{nonl}\,\|\zeta_n\|^3}_{\le \frac{\alpha}{2} V_n(\zeta_n)} + \kappa_{noise}\,\delta \le -\frac{\alpha}{2}\, V_n(\zeta_n) + \kappa_{noise}\,\delta \tag{66}$$

for $\|\zeta_n\| \le \epsilon$. Therefore we are able to apply Lemma 2.1 with $\|\zeta_0\| \le \epsilon$, $\underline{v} = \frac{1}{\overline{p}\,\overline{\lambda}^2}$, $\overline{v} = \frac{1}{\underline{p}\,\underline{\lambda}^2}$ and $\mu = \kappa_{noise}\,\delta$. However, for some $\tilde{\epsilon} \le \epsilon$ we still have to guarantee, for $\tilde{\epsilon} \le \|\zeta_n\| \le \epsilon$, the inequality
$$E\left\{V_{n+1}(\zeta_{n+1}) \mid \zeta_n\right\} - V_n(\zeta_n) \le -\frac{\alpha}{2}\, V_n(\zeta_n) + \kappa_{noise}\,\delta \le 0. \tag{67}$$

Choosing, with the aid of (64),
$$\delta = \frac{\alpha\,\tilde{\epsilon}^2}{2\,\overline{p}\,\overline{\lambda}^2\,\kappa_{noise}} \tag{68}$$
with some $\tilde{\epsilon} \le \epsilon$, we have for $\|\zeta_n\| \ge \tilde{\epsilon}$
$$\kappa_{noise}\,\delta \le \frac{\alpha}{2\,\overline{p}\,\overline{\lambda}^2}\,\|\zeta_n\|^2 \le \frac{\alpha}{2}\, V_n(\zeta_n), \tag{69}$$

which shows that (67) holds. As a result, we conclude that the estimation error remains bounded if the initial error and the noise terms are bounded as stated in (19)-(21).

## 4 Simulation study

In the previous section it was shown that the estimation error of the discrete-time AFEKF is bounded under two conditions: (1) a sufficiently small initial estimation error and (2) sufficiently small noise terms. Here, we run simulations to illustrate numerically the significance of these assumptions and the behaviour predicted by the theory. For this purpose we consider the Lotka-Volterra (prey-predator) model, which describes the population growth of two interacting species. The model consists of a pair of non-linear differential equations
$$\frac{dx_1(t)}{dt} = a\, x_1(t) - b\, x_1(t)\, x_2(t), \tag{70}$$
$$\frac{dx_2(t)}{dt} = -m\, x_2(t) + r\, x_1(t)\, x_2(t), \tag{71}$$

where x₁(t) is the number of the first species (prey) at time t, x₂(t) is the number of the second species (predator) at time t, a is the reproduction rate of the prey, m is the death rate of the predators, and the parameters b and r describe the interaction of the two species.

The state-space form of the differential system, perturbed by Gaussian white noise, is
$$x_{t+1} = \begin{bmatrix} x_{1,t+1} \\ x_{2,t+1} \end{bmatrix} = \begin{bmatrix} 1 + (a - b\, x_{2,t})\Delta t & 0 \\ 0 & 1 + (-m + r\, x_{1,t})\Delta t \end{bmatrix} \begin{bmatrix} x_{1,t} \\ x_{2,t} \end{bmatrix} + G_t w_t,$$
$$y_t = \begin{bmatrix} 0 & 1 \end{bmatrix} x_t + D_t v_t, \tag{72}$$

where $y_t$ is the measurement at time t and Δt is the integration time step. Also, $w_t$ and $v_t$ are uncorrelated system and measurement noise terms with zero mean and covariance matrices Q and R, respectively [27].
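As an illustration of the discretized model (72), the following sketch simulates noisy prey-predator trajectories; the parameter values, noise gains and initial populations are illustrative assumptions, not the values used in the paper (those are listed in Tables 1 and 2):

```python
import numpy as np

# Simulate the discretized stochastic Lotka-Volterra system (72).
# All numerical values below are illustrative assumptions.
rng = np.random.default_rng(0)
a, b, m, r, dt = 1.0, 0.02, 0.5, 0.01, 0.01
G = 0.05 * np.eye(2)             # system-noise gain G_t
D = np.array([[0.1]])            # measurement-noise gain D_t

x = np.array([60.0, 30.0])       # initial prey/predator populations
xs, ys = [x.copy()], []
for _ in range(1000):
    F = np.array([[1 + (a - b * x[1]) * dt, 0.0],
                  [0.0, 1 + (-m + r * x[0]) * dt]])
    x = F @ x + G @ rng.standard_normal(2)      # state equation
    y = x[1] + D[0, 0] * rng.standard_normal()  # only x_2 is observed
    xs.append(x.copy())
    ys.append(y)
states = np.array(xs)
```

Trajectories generated this way can then be fed to the EKF and the AFEKF for the kind of comparison reported below.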

We compare the EKF and the AFEKF with the initial estimates and the noise terms given in Table 1 over 250 replicated samples. The exact values of the parameters used in the simulations are given in Table 2.

Table 1

The initial state estimates and noise terms used in the simulation

Table 2

True values of the unknown parameters in Simulation

The simulation results are displayed in Figures 1-5. Figure 1 shows the estimation error during the simulation process. It is evident that if the conditions (19)-(21) are satisfied, then the estimation error remains bounded for both the EKF and the AFEKF. In Figure 2, the sums of the squared estimation errors are shown. The estimation error of the AFEKF at time t is smaller than that of the EKF; thus, the AFEKF converges to the true values faster than the EKF. On the other hand, when the conditions (19)-(21) are violated, the state estimates of the EKF diverge from the true states, as seen in Figure 3, and the estimation error of the EKF grows without bound. However, under the same conditions, the AFEKF's state estimates, obtained using the forgetting factors shown in Figure 4, converge to the true state values and the estimation errors remain bounded. Finally, Figure 5 demonstrates the improvement in the sum of the squared estimation errors when the stability conditions are violated.

Figure 1

Estimation error for State 1 and State 2 (Stability conditions are met)

Figure 2

Sum of the squared estimation errors (Stability conditions are met)

Figure 3

State 1 and State 2 estimations (Stability conditions are violated)

Figure 4

Forgetting factors

Figure 5

Sum of the squared estimation errors (Stability conditions are violated)

## 5 Conclusion

In this study, we have analysed the error behaviour of the AFEKF when it is applied to general estimation problems for non-linear stochastic discrete-time systems. The results show that the estimation error remains bounded in the mean square sense under certain conditions: a small initial estimation error, small disturbing noise terms, and a positive definite, bounded solution of the Riccati difference equation. We have presented numerical simulations to illustrate the importance of the stability conditions and to evaluate the performance of the AFEKF compared to the standard EKF. The simulations show that a small initial estimation error results in a bounded estimation error for both the EKF and the AFEKF. However, when the initial estimation error is not small enough, the estimation error of the EKF is much larger than that of the AFEKF, which shows that the forgetting factors prevent the filter estimates from diverging.

## References

• [1] Anderson, B. D. O. and Moore, J. B. Optimal Filtering. Prentice Hall, USA, (1979).

• [2] Chen, G. Approximate Kalman Filtering. World Scientific, New York, USA.

• [3] Grewal, S. and Andrews, A. P. Kalman Filtering: Theory and Practice Using Matlab. John Wiley & Sons Inc., USA, (2008).

• [4] Jazwinski, A. H. Stochastic Processes and Filtering Theory. Academic Press Inc., New York, USA, (1970).

• [5] Özbek, L. Kesikli Zaman Durum-Uzay Modelleri ve İndirgemeli Tahmin ve Yakınsama Problemleri. PhD thesis, Ankara Üniversitesi Fen Bilimleri Enstitüsü, Ankara, TÜRKİYE, (1998).

• [6] Mehra, R. K. Approaches to adaptive filtering. IEEE Trans. Auto. Control AC-17, 693–698 (1972).

• [7] Özbek, L., Babacan, E. K., and Efe, M. Stochastic stability of the discrete-time constrained extended Kalman filter. Turkish J. Elec. Eng. & Comp. Sci. 18-2, 211–223 (2010).

• [8] Biçer, C., Babacan, E. K., and Özbek, L. Stability of the adaptive fading extended Kalman filter with the matrix forgetting factor. Turkish Journal of Electrical Engineering & Computer Sciences 20-5, 819–833 (2012).

• [9] Geng, Y. and Wang, J. J. Adaptive estimation of multiple fading factors in Kalman filter for navigation applications. GPS Solutions 12, 273–279 (2008).

• [10] Gustafsson, F. Estimation of Discrete Parameters in Linear Systems. PhD thesis, Department of Electrical Engineering, Linköping University, Sweden, (1992).

• [11] Hashlamon, I. and Erbatur, K. An improved real-time adaptive Kalman filter with recursive noise covariance. Turkish Journal of Electrical Engineering & Computer Sciences 24, 524–540 (2016).

• [12] Jwo, D. and Weng, T. An adaptive sensor fusion method with applications in integrated navigation. The Journal of Navigation 61, 705–721 (2008).

• [13] Kim, K. H., Lee, J. G., and Park, C. G. Adaptive two-stage Kalman filter in the presence of unknown random bias. Int. J. Adapt. Control Signal Process. 20, 305–319 (2006).

• [14] Mohamed, A. H. and Schwarz, K. P. Adaptive Kalman filtering for INS/GPS. Journal of Geodesy 73, 193–203 (1999).

• [15] Özbek, L. and Aliev, F. A. Comments on adaptive fading Kalman filter with an application. Automatica 34-12, 1163–1164 (1998).

• [16] Özbek, L. and Efe, M. Communications in Statistics, Simulation and Computation 3, 145–158 (2004).

• [17] Xia, Q., Rao, M., Ying, Y., and Shen, X. Adaptive fading Kalman filter with an application. Automatica 30-8, 1333–1338 (1994).

• [18] Yang, J. N., Lin, S., Huang, H., and Zhou, L. An adaptive extended Kalman filter for structural damage identification. Struct. Control and Health Monit. 13, 849–867 (2006).

• [19] Biçer, C. Uyarlı Kalman Filtresinin Başarım ve Kararlılık Analizi. PhD thesis, Ankara Üniversitesi Fen Bilimleri Enstitüsü, Ankara, TÜRKİYE, (2011).

• [20] Babacan, E. K., Özbek, L., and Efe, M. Stability of the extended Kalman filter when the states are constrained. IEEE Transactions on Automatic Control 53-11, 2707–2711 (2008).

• [21] Boutayeb, M., Rafaralahy, H., and Darouach, M. Convergence analysis of the extended Kalman filter used as an observer for nonlinear deterministic discrete-time systems. IEEE Transactions on Automatic Control 42-4, 581–586 (1997).

• [22] Kim, K. H., Lee, J. G., Park, C. G., and Jee, G. I. The stability analysis of the adaptive fading extended Kalman filter. In: 16th IEEE International Conference on Control, Singapore, October (2007).

• [23] Kim, K. H., Jee, G. I., Park, C. G., and Lee, J. G. The stability analysis of the adaptive fading extended Kalman filter using the innovation covariance. International Journal of Control, Automation and Systems 7-1, 49–56 (2009).

• [24] Reif, K. and Unbehauen, R. The extended Kalman filter as an exponential observer for nonlinear systems. IEEE Trans. Signal Processing 47-8, 2324–2328 (1999).

• [25] Reif, K., Günther, S., Yaz, E., and Unbehauen, R. Stochastic stability of the discrete-time extended Kalman filter. IEEE Transactions on Automatic Control 44-4, 714–728 (1999).

• [26] Lewis, F. L. Optimal Estimation. John Wiley & Sons Inc., New York, USA, (1986).

• [27] Öztürk, F. and Özbek, L. Matematiksel Modelleme ve Simülasyon. Gazi Kitabevi, Ankara, TÜRKİYE, (2004).

## About the article

Published Online: 2016-11-17

Published in Print: 2016-01-01

Citation Information: Open Mathematics, Volume 14, Issue 1, Pages 934–945, ISSN (Online) 2391-5455.