The hidden Markov model consists of two processes: one is a Markov process, the other is a general stochastic process (which may be discrete or continuous); the model is therefore doubly stochastic [11–13]. A typical hidden Markov model (HMM) can be described by the following five parameters:

Number of states *N*. The number of states the system may be in during the Markov process. Let *q*_{t} denote the state at time *t*. The state of the system cannot be observed directly; it can only be inferred from the observed values.

Possible number of observations *M*. *o*_{t} is the observed value at time *t* in the discrete case. Since the set of possible observations {*O*} is assumed finite, the observed value of the system at time *t* must be an element of {*O*}.

State transition matrix *A*. The transition probability from state *S*_{i} to state *S*_{j} is *a*_{ij} = *P*(*q*_{t+1} = *S*_{j} | *q*_{t} = *S*_{i}), 1 ≤ *i*, *j* ≤ *N*. Because the system has *N* states, there are *N* × *N* values *a*_{ij}, expressed as a matrix:
$$\begin{bmatrix} a_{11} & \cdots & a_{1N} \\ \vdots & \ddots & \vdots \\ a_{N1} & \cdots & a_{NN} \end{bmatrix}_{N \times N}$$

And
$$\sum_{j=1}^{N} a_{ij} = 1.$$

Confusion matrix *B*. The probability of observing *O*_{k} in state *S*_{j} is *b*_{jk} = *P*(*o*_{t} = *O*_{k} | *q*_{t} = *S*_{j}), where 1 ≤ *j* ≤ *N*, 1 ≤ *k* ≤ *M*.

In state *S*_{j}, there are *M* possible observations, so there are *N* × *M* values *b*_{jk}, expressed as a matrix:
$$\begin{bmatrix} b_{11} & \cdots & b_{1M} \\ \vdots & \ddots & \vdots \\ b_{N1} & \cdots & b_{NM} \end{bmatrix}_{N \times M}$$

And
$$\sum_{k=1}^{M} b_{jk} = 1.$$

Vector of the initial state probabilities *π*. The probability that the first state *q*_{1} takes each value in *S* = {*S*_{1}, *S*_{2}, …, *S*_{N}}: if *π*_{i} = *P*(*q*_{1} = *S*_{i}), then *π* = {*π*_{1}, *π*_{2}, …, *π*_{N}} is a 1 × *N* vector.
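The five parameters above can be sketched in code. The following is a minimal illustration with NumPy; all numeric values are hypothetical, chosen only to show the shapes and normalization constraints:

```python
import numpy as np

# Illustrative HMM parameters (N, M, A, B, pi); values are assumptions,
# not taken from the article.

N = 3  # number of hidden states S_1..S_N
M = 2  # number of possible observation symbols O_1..O_M

# State transition matrix A (N x N): a_ij = P(q_{t+1} = S_j | q_t = S_i)
A = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

# Confusion (emission) matrix B (N x M): b_jk = P(o_t = O_k | q_t = S_j)
B = np.array([[0.9, 0.1],
              [0.4, 0.6],
              [0.1, 0.9]])

# Initial state probability vector pi (1 x N)
pi = np.array([0.6, 0.3, 0.1])

# Each row of A and B, and pi itself, must sum to 1
assert np.allclose(A.sum(axis=1), 1.0)
assert np.allclose(B.sum(axis=1), 1.0)
assert np.isclose(pi.sum(), 1.0)
```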

There are three main problems in HMM applications; the first two are pattern recognition problems, and the third is parameter training [14]:

Evaluation problem. Calculate the probability of an observed sequence given a known HMM. This problem assumes a set of candidate HMMs describing different systems, and asks which model assigns the greatest probability to the current observation sequence.

Decoding problem. Find the hidden state sequence most likely to have produced the observed sequence under the specified model.

Learning problem. That is, the parameter training problem: estimate from the observation sequence the HMM, represented by the triple (*A*, *B*, *π*), that best describes the observed phenomena.
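The evaluation problem is conventionally solved with the forward algorithm. The sketch below shows this standard procedure for a discrete HMM; the matrices and the observation sequence are illustrative assumptions:

```python
import numpy as np

def forward(A, B, pi, obs):
    """Return P(observation sequence | model) via the forward variable
    alpha_t(i) = P(o_1..o_t, q_t = S_i)."""
    N = A.shape[0]
    T = len(obs)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]           # initialization
    for t in range(1, T):                  # induction
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha[-1].sum()                 # termination

# Illustrative two-state model (assumed values)
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.5, 0.5])

p = forward(A, B, pi, [0, 1, 0])   # probability of observing O_1, O_2, O_1
```

Running `forward` for each candidate model and picking the largest result answers the "which system most probably produced this sequence" question.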

In practice, the majority of signals and data are continuous. For example, as the vehicle is operating, the signals reflecting vehicle speed, roll, and lateral acceleration are continuous along the time axis. This continuity exists in a general stochastic process where the probability of occurrence of observations in each state is expressed by a probability density function. One of the most common methods is to use a linear combination of multiple Gaussian functions to estimate the probability density function of the observed value. Setting *O* as the observation, the probability density function can be expressed as
$$b_{j}(O) = \sum_{m=1}^{M} c_{jm}\, N(O, \mu_{jm}, COV_{jm}), \quad 1 \le j \le N.$$

where *μ*_{jm}, *COV*_{jm}, and *c*_{jm} represent the mean vector, covariance matrix, and weight coefficient of the *m*-th Gaussian component in state *S*_{j}. *c*_{jm} satisfies the constraint condition:
$$c_{jm} \ge 0, \quad \sum_{m=1}^{M} c_{jm} = 1.$$
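The mixture density *b*_{j}(*O*) can be evaluated directly from its definition. The sketch below implements it for one state; the component weights, means, and covariances are hypothetical values for a 2-D observation (e.g. speed and lateral acceleration):

```python
import numpy as np

def gaussian_pdf(o, mu, cov):
    """Multivariate normal density N(o; mu, cov)."""
    d = len(mu)
    diff = o - mu
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff) / norm

def b_j(o, c, mus, covs):
    """Mixture density b_j(O) = sum_m c_jm * N(O; mu_jm, COV_jm)."""
    return sum(c_m * gaussian_pdf(o, mu, cov)
               for c_m, mu, cov in zip(c, mus, covs))

# Two-component mixture over a 2-D observation (assumed parameters)
c    = np.array([0.6, 0.4])        # weights: c_jm >= 0, sum to 1
mus  = [np.array([0.0, 0.0]), np.array([3.0, 1.0])]
covs = [np.eye(2), 0.5 * np.eye(2)]

density = b_j(np.array([0.5, 0.2]), c, mus, covs)
```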

Since the representation of *b*_{j}(*O*) has changed, the parameter updates in the model training process also change. *b*_{j}(*O*) is determined by three parameters, so updating *b*_{j}(*O*) actually means updating *μ*_{jm}, *COV*_{jm}, and *c*_{jm}. The introduced parameter *γ*_{t}(*j*, *m*) represents the probability that the *m*-th Gaussian component of state *S*_{j} generates the observation *o*_{t} at time *t*, namely:
$$\gamma_{t}(j,m) = \frac{\alpha_{t}(j)\,\beta_{t}(j)}{\sum_{j=1}^{N} \alpha_{t}(j)\,\beta_{t}(j)} \times \frac{c_{jm}\, N(O, \mu_{jm}, COV_{jm})}{\sum_{m=1}^{M} c_{jm}\, N(O, \mu_{jm}, COV_{jm})}.$$

The core concept of the Markov prediction method is the state transition. The vehicle state can be *S*_{1}, *S*_{2}, …, *S*_{N}, and at each time it can only be in one state. Each state can transition to *N* states (including itself), and state transitions are stochastic. The state transition probability *a*_{ij} = *P*(*S*_{j} | *S*_{i}) describes how likely each transition is; this is the origin of the state transition matrix *A* in the hidden Markov model.
$$A = \begin{bmatrix} a_{11} & \cdots & a_{1N} \\ \vdots & \ddots & \vdots \\ a_{N1} & \cdots & a_{NN} \end{bmatrix}_{N \times N}$$

The state transition probability matrix has the following characteristics:

0 ≤ *a*_{ij} ≤ 1 (*i*, *j* = 1, 2, ⋯, *N*),

$$\sum_{j=1}^{N} a_{ij} = 1 \quad (i = 1, 2, \cdots, N).$$

The state transition probability matrix completely describes the change process of the object under study. The matrix *A* above is a one-step transition probability matrix. For a multi-step transition probability matrix, the following definition applies:

If the system is in state *S*_{i} at time *t*_{0} and, after *n* transfer steps, is in state *S*_{j} at time *t*_{n}, then the probability of such a transition is called the *n*-step transition probability, recorded as
$$a_{ij}^{(n)} = P(S_{j} \mid S_{i}).$$

There are:
$$A^{(n)} = \begin{bmatrix} a_{11}^{(n)} & \cdots & a_{1N}^{(n)} \\ \vdots & \ddots & \vdots \\ a_{N1}^{(n)} & \cdots & a_{NN}^{(n)} \end{bmatrix}_{N \times N}$$

Let *A*^{(n)} be the *n*-step transition probability matrix; then:

*A*^{(n)} = *A*^{(n−1)}*A*;

*A*^{(n)} = *A*^{n}.
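The relation *A*^{(n)} = *A*^{n} can be checked numerically; the two-state matrix below is an illustrative assumption:

```python
import numpy as np

# Illustrative one-step transition matrix (assumed values)
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Three-step transition probabilities: A^(3) = A^3
A3 = np.linalg.matrix_power(A, 3)

# A^(n) is still row-stochastic, and A^(n) = A^(n-1) @ A
assert np.allclose(A3.sum(axis=1), 1.0)
assert np.allclose(A3, np.linalg.matrix_power(A, 2) @ A)
```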

For a Markov process, if the initial probability distribution and the transition probability matrix are known, the state distribution after *n* steps can be predicted; that is, the state of the vehicle at any time within the effective range of the vehicle system.

For *n* ≥ 1, with *a*_{i}(*n*) = *P*(*q*_{n} = *S*_{i}), we have:
$$a_{i}(n) = \sum_{j=1}^{N} a_{j}(0)\, a_{ji}^{(n)}, \quad i = 1, 2, \cdots, N.$$
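This prediction step is a vector-matrix product: the row vector of initial probabilities multiplied by the *n*-step matrix. A sketch with assumed values:

```python
import numpy as np

# Predicting the state distribution after n steps:
# a(n) = a(0) @ A^n, i.e. a_i(n) = sum_j a_j(0) * a_ji^(n).
# A and a0 are illustrative assumptions.

A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
a0 = np.array([1.0, 0.0])                # start certainly in state S_1

n = 5
an = a0 @ np.linalg.matrix_power(A, n)   # row vector times A^n

assert np.isclose(an.sum(), 1.0)         # still a probability distribution
```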

Monitoring and pre-alarming the running state of a vehicle is a long-term process. Considering the influence of the state transition matrix, the definition of a stationary distribution is introduced: suppose there exists a non-zero vector *X* = (*x*_{1}, *x*_{2}, ⋯, *x*_{n}) such that *XA* = *X*.

*A* is a probability matrix and *X* is a fixed probability vector.

In particular, let *X* = (*x*_{1}, *x*_{2}, ⋯, *x*_{n}) be a state probability vector and *A* be a state transition probability matrix.

If *XA* = *X*, then
$$\sum_{i=1}^{N} x_{i} a_{ij} = x_{j}, \quad j = 1, 2, \cdots, N.$$

Then *X* is a stationary distribution of the Markov chain. If the state probability vector *X* at some time of the stochastic process is a stationary distribution, the process is said to be in an equilibrium state. Once the process is in equilibrium, the state probability distribution remains unchanged after any number of transition steps; that is, once the process is in equilibrium it remains in equilibrium.
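Since *XA* = *X* says *X* is a left eigenvector of *A* with eigenvalue 1, one common way to find the stationary distribution is an eigendecomposition of *A*ᵀ. A sketch with an assumed transition matrix:

```python
import numpy as np

# Stationary distribution X with X A = X: the left eigenvector of A
# for eigenvalue 1, normalized to sum to 1. A is an assumed example.

A = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Left eigenvectors of A are right eigenvectors of A transpose
vals, vecs = np.linalg.eig(A.T)
i = np.argmin(np.abs(vals - 1.0))        # eigenvalue closest to 1
X = np.real(vecs[:, i])
X = X / X.sum()                          # normalize to a probability vector

assert np.allclose(X @ A, X)             # X A = X: equilibrium holds
```

For this particular matrix, solving 0.3 x₁ = 0.4 x₂ with x₁ + x₂ = 1 gives X = (4/7, 3/7), which the eigenvector computation reproduces.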
