Wavelet-Based Elman Neural Network with the Modified Differential Evolution Algorithm for Forecasting Foreign Exchange Rates

It is challenging to forecast foreign exchange rates due to the non-linear characteristics of the data. This paper applies a wavelet-based Elman neural network with a modified differential evolution algorithm to forecast foreign exchange rates. The Elman neural network has dynamic characteristics because of the context layer in its structure, which makes it well suited to time series problems. The main factors affecting the accuracy of the Elman neural network include the transfer function of the hidden layer and the parameters of the network. We replaced the sigmoid function in the hidden layer of the Elman neural network with a wavelet function, and found a "disruption problem" caused by the non-linear behavior of the wavelet function: rather than improving the performance of the Elman neural network, it made it worse. We then applied a modified differential evolution algorithm to train the parameters of the network. To improve the optimizing performance of differential evolution, the crossover probability and crossover factor were modified with adaptive strategies, and a local enhancement operator was added to the algorithm. In the experiments, the modified algorithm improved the performance of the Elman neural network and solved the "disruption problem" caused by the wavelet function. These results show that the performance of the Elman neural network improves when the wavelet function and the modified differential evolution algorithm are applied together.


Replacing the Sigmoid Function with the Wavelet Function
The WNN was proposed by Zhang [16] in 1992. WNN modifies the BP neural network by replacing the sigmoid function, which is usually used as the transfer function in the hidden layer, with a wavelet function. The wavelet function has better non-linear performance than the sigmoid function, and WNN accordingly outperforms the BP network. Inspired by WNN, we propose the wavelet-based Elman neural network (WENN), which replaces the sigmoid transfer function in the hidden layer of ENN with a wavelet function. We expected to take advantage of the wavelet function and improve the performance of ENN, as WNN does for the BP network.

Input Layer
In the input layer, the input vector of the pth sample is defined as $(x^p_1, x^p_2, \cdots, x^p_m)$, and the pureline function is the transfer function of the nodes. So, the output vector of the input layer equals the input vector:

$$\left(y^{(1)p}_1, y^{(1)p}_2, \cdots, y^{(1)p}_m\right) = \left(x^p_1, x^p_2, \cdots, x^p_m\right). \tag{1}$$

Hidden Layer
The input of each node in the hidden layer comes from two parts: the output of the input layer and the output of the context layer. The input of the kth node can be represented by

$$x^{(2)p}_k = \sum_{m=1}^{M} w_{mk}\, y^{(1)p}_m + \sum_{h=1}^{H} r_{hk}\, y^{(3)p}_h, \tag{2}$$

where M is the number of nodes in the input layer; K is the number of nodes in the hidden layer; H (H = K) is the number of nodes in the context layer; $w_{mk}$ is the connection weight from the input layer to the hidden layer; $r_{hk}$ is the connection weight from the context layer to the hidden layer; and $y^{(3)p}_h$ is the output of the hth node of the context layer, which is the previous value of the hidden layer.
In WENN, the transfer function of the hidden layer is a wavelet function. Among the many types of wavelet function, the Morlet function is chosen in this paper [17]:

$$\psi(x) = \cos(1.75x)\, e^{-x^2/2}. \tag{3}$$

The output of the kth node in the hidden layer is given by

$$y^{(2)p}_k = \psi\!\left(\frac{x^{(2)p}_k - b_k}{a_k}\right), \tag{4}$$

where $a_k$ is the dilation coefficient and $b_k$ is the translation coefficient of the kth hidden node.

Context Layer
In the context layer, the input and the output of the nodes are represented by

$$y^{(3)p}_h(t) = y^{(2)p}_h(t-1), \tag{5}$$

i.e., each context node stores the previous-step output of the corresponding hidden node.

Output Layer
In the output layer, the transfer function is the pureline function. So, the input and output of the nth node are represented by

$$y^{(4)p}_n = \sum_{k=1}^{K} v_{kn}\, y^{(2)p}_k, \tag{6}$$

where $v_{kn}$ is the connection weight from the hidden layer to the output layer.
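The layer-by-layer computation above can be sketched as a single forward pass. The following Python sketch is an illustrative reimplementation (not the authors' Matlab code); it assumes the standard Morlet form $\cos(1.75x)e^{-x^2/2}$ and H = K:

```python
import numpy as np

def morlet(x):
    # Morlet mother wavelet, the hidden-layer transfer function of WENN
    return np.cos(1.75 * x) * np.exp(-x**2 / 2.0)

def wenn_forward(x, context, W, R, V, a, b):
    """One forward pass of the wavelet-based Elman network (sketch).

    x       : (M,)  input vector (the pureline input layer passes it through)
    context : (K,)  previous hidden-layer output y^(3), fed back (H = K)
    W       : (M,K) input-to-hidden weights w_mk
    R       : (K,K) context-to-hidden weights r_hk
    V       : (K,N) hidden-to-output weights v_kn
    a, b    : (K,)  dilation and translation coefficients
    """
    net = x @ W + context @ R          # hidden-layer net input x^(2)
    hidden = morlet((net - b) / a)     # dilated/translated wavelet transfer
    output = hidden @ V                # pureline output layer
    return output, hidden              # hidden becomes next step's context
```

In use, the returned `hidden` vector is passed back as `context` at the next time step, which is what gives the network its dynamic character.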

Learning Algorithm of WENN
To create a WENN, the node numbers of the input layer M, hidden layer K, and output layer N should be defined. M and N are determined by the problem under study, while $K = \sqrt{M + N} + c$ [33,34], where c is an integer between 1 and 10.
Once the WENN is initialized, supervised learning is used to adjust the parameters of the system. The gradient descent with momentum (GDM) algorithm is commonly used for this purpose. To describe the parameter learning algorithm, the energy function is expressed as

$$E = \frac{1}{2} \sum_{n=1}^{N} \left(T^p_n - y^{(4)p}_n\right)^2, \tag{7}$$

where $T^p_n$ is the nth expected value of the pth sample. The main steps of GDM can be described as follows: Step 1 Calculate the energy function. In the GDM algorithm, the recursive application of the chain rule is used to achieve backpropagation. The error is calculated according to (1)∼(7).
Step 2 Adjust the learning rate. The learning rate η is adjusted as follows. If E(t) < E(t − 1), the training process is moving towards the optimum, so the learning rate should be increased: η(t) = αη(t − 1), α > 1. If E(t) > 1.04E(t − 1), the training process is deteriorating, so the learning rate should be decreased: η(t) = βη(t − 1), β < 1.
Step 3 Adjust the parameters of the network. Each parameter θ ∈ {$v_{kn}$, $w_{mk}$, $r_{hk}$, $a_k$, $b_k$} is updated by gradient descent with momentum:

$$\Delta\theta(t) = -\eta \frac{\partial E}{\partial \theta} + \gamma\, \Delta\theta(t-1), \qquad \theta(t+1) = \theta(t) + \Delta\theta(t),$$

where γ is the momentum coefficient. The gradients with respect to the output-layer weights $v_{kn}$, the hidden-layer weights $w_{mk}$, the context-layer weights $r_{hk}$, and the translation $b_k$ and dilation $a_k$ coefficients are obtained by the chain rule through (1)∼(7).

Step 4 Repeat Step 1 to Step 3 until the termination condition is met, then output the result.
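The learning-rate rule of Step 2 and the momentum update of Step 3 can be sketched in a few lines. This is an illustrative Python sketch, not the authors' Matlab code; α, β, η and the momentum coefficient γ are free parameters, not values taken from the paper:

```python
def adapt_lr(eta, E_t, E_prev, alpha=1.05, beta=0.7):
    """Step 2: grow the learning rate while the error keeps falling,
    shrink it after a clear rise (E(t) > 1.04 E(t-1)), else keep it."""
    if E_t < E_prev:
        return alpha * eta
    if E_t > 1.04 * E_prev:
        return beta * eta
    return eta

def gdm_step(param, grad, velocity, eta, gamma):
    """Step 3: gradient-descent-with-momentum update, applied alike to
    the weights w_mk, r_hk, v_kn and the wavelet coefficients a_k, b_k."""
    velocity = gamma * velocity - eta * grad   # mix old step with new gradient
    return param + velocity, velocity
```

The same `gdm_step` is applied to every trainable parameter, each keeping its own momentum term.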

The Modified Differential Evolution Algorithm
Differential evolution (DE) converges quickly in optimization problems. To improve its optimizing performance, the crossover probability and crossover factor are modified with adaptive strategies, and a local enhancement operator is added to the algorithm. With these improvements, the new differential evolution algorithm is called ADLEDE for short.

Differential Evolution Algorithm
The differential evolution algorithm, which inherits the idea of survival of the fittest, is a kind of evolutionary algorithm [35]. For each individual in the population, 3 points are randomly selected from the population. One point is taken as the basis, and the other 2 points are taken as the reference to generate a disturbance. New points are generated after crossover, and the better one is retained by natural selection to achieve population evolution. Suppose the problem to be optimized is min f(x); the main steps of the algorithm are as follows:

Step 1 Initialization. Set the population size N, the number of variables m, the crossover probability $P_c$, and the crossover factor $P_m$. At evolutionary generation t = 0, initialize the lower/upper bounds lb/ub of the vector and the initial population vectors $X_i(0)$, i = 1, ..., N.

Step 2 Evaluation. Compute the fitness value of each individual in the population.

Step 3 Mutation. For individual vector $X_i(t)$ in the population, three mutually distinct indices $r_1, r_2, r_3 \in \{1, 2, \cdots, N\}$ and an integer $j_r \in \{1, 2, \cdots, m\}$ are randomly chosen. The mutant vector is

$$V_i(t) = X_{r_1}(t) + P_m \left(X_{r_2}(t) - X_{r_3}(t)\right), \tag{19}$$

and the trial vector $U_i(t)$ takes its jth component from $V_i(t)$ if rand(0,1) ≤ $P_c$ or j = $j_r$, and from $X_i(t)$ otherwise. (20)

Step 4 Selection. $X_i(t+1) = U_i(t)$ if $f(U_i(t)) \le f(X_i(t))$; otherwise $X_i(t+1) = X_i(t)$. (21)

Step 5 Termination condition. If the individual vector $X_i(t+1)$ satisfies the termination condition, then $X_i(t+1)$ is the optimal solution; otherwise, return to Step 2.
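As a sketch of Steps 1∼5, a minimal DE/rand/1/bin loop in Python might look like this. It is illustrative only; the paper's Matlab implementation, bounds handling and termination test may differ:

```python
import numpy as np

def de_minimize(f, lb, ub, N=30, Pm=0.5, Pc=0.9, generations=100, seed=0):
    """Canonical DE/rand/1/bin sketch for min f(x).

    In the paper's notation, Pm is the cross factor (differential weight)
    and Pc the crossover probability.
    """
    rng = np.random.default_rng(seed)
    m = len(lb)
    pop = rng.uniform(lb, ub, size=(N, m))            # Step 1: init in [lb, ub]
    fit = np.array([f(x) for x in pop])               # Step 2: evaluate
    for _ in range(generations):
        for i in range(N):
            r1, r2, r3 = rng.choice([j for j in range(N) if j != i],
                                    3, replace=False)
            v = pop[r1] + Pm * (pop[r2] - pop[r3])    # Step 3: mutation (19)
            jr = rng.integers(m)                      # index copied for sure
            mask = rng.random(m) <= Pc
            mask[jr] = True
            u = np.clip(np.where(mask, v, pop[i]), lb, ub)  # crossover (20)
            fu = f(u)
            if fu <= fit[i]:                          # Step 4: selection (21)
                pop[i], fit[i] = u, fu
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```

On a smooth low-dimensional objective such as the sphere function, this loop converges to the global minimum within a few hundred generations.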

Self-Adaptive Strategies to P c and P m
The crossover probability $P_c$ and the crossover factor $P_m$ are constant in standard DE. When the optimization problem is complex, this is not efficient enough [36]. In the adaptive improvement, $P_c$ and $P_m$ are adapted according to the individual fitness values. When the population tends to fall into a local optimum, $P_c$ and $P_m$ are increased accordingly; when the population tends to diverge, they are reduced. Individuals whose fitness values are higher than the population average are assigned lower $P_c$ and $P_m$ values, so that their solutions are protected into the next generation; individuals whose fitness values are lower than the average are assigned higher $P_c$ and $P_m$ values, so that their solutions are eliminated. The individual crossover probability $P_c$ and crossover factor $P_m$ are adapted according to

$$P_c = \begin{cases} P_{c1} - (P_{c1} - P_{c2})\dfrac{f' - f_{avg}}{f_{max} - f_{avg}}, & f' \ge f_{avg} \\[2mm] P_{c1}, & f' < f_{avg} \end{cases} \tag{22}$$

$$P_m = \begin{cases} P_{m1} - (P_{m1} - P_{m2})\dfrac{f - f_{avg}}{f_{max} - f_{avg}}, & f \ge f_{avg} \\[2mm] P_{m1}, & f < f_{avg} \end{cases} \tag{23}$$

where $P_{c1}$ is the higher crossover probability (0.7∼0.9); $P_{c2}$ is the lower crossover probability (0.4∼0.6); $P_{m1}$ is the higher crossover factor (0.08∼0.1); $P_{m2}$ is the lower crossover factor (0.01∼0.05); $f_{max}$ is the maximum fitness value in the population; $f_{avg}$ is the average fitness value in the population; $f'$ is the higher fitness value of $X_{r_2}(t)$ and $X_{r_3}(t)$; and $f$ is the fitness value of $X_{r_1}(t)$. According to (22) and (23), $P_c$ and $P_m$ can be adaptively adjusted, which improves the optimization performance of the algorithm.
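A sketch of the adaptive rule, assuming the standard adaptive form in which fitter-than-average individuals slide linearly from the high rate down to the low rate (note that "fitness" here is a quantity to maximize; when minimizing an error, one can use e.g. its negative):

```python
def adaptive_rate(high, low, f_ind, f_avg, f_max):
    """Adaptive crossover probability / crossover factor.

    high, low : the higher and lower rate bounds (e.g. P_c1 and P_c2)
    f_ind     : the relevant individual fitness (f' for P_c, f for P_m)
    f_avg     : average fitness of the population
    f_max     : maximum (best) fitness of the population
    Below-average individuals keep the higher rate so their solutions
    get reshuffled; above-average ones are protected with a lower rate.
    """
    if f_ind < f_avg or f_max == f_avg:
        return high
    return high - (high - low) * (f_ind - f_avg) / (f_max - f_avg)
```

The guard on `f_max == f_avg` simply avoids division by zero when the population has converged to identical fitness.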

Local Enhancement Strategy
Because DE generates new intermediate individuals through random deviation perturbation, its local search ability is weak. When approximating the optimal solution, it still needs to iterate for several generations to reach the optimal value, which affects the convergence speed of the algorithm [37]. Therefore, the local enhancement operator LE (0 < LE < 1) is introduced into DE. After a new population is obtained, some individuals in it (excluding the current optimal individual) are reassigned with probability LE, so that these individuals are redistributed around the optimal individual of the current population. LE enhances the greediness of the individuals and speeds up the convergence of DE.
The reassignment follows

$$X_{i,t+1} = X_{best,t+1} + P_l \left(X_{r_3,t+1} - X_{r_4,t+1}\right),$$

where $X_{i,t+1}$ is the enhanced new individual; $X_{r_3,t+1}$ and $X_{r_4,t+1}$ are the original individuals; $X_{best,t+1}$ is the best individual of the current population; and $P_l$ is the perturbation factor. The indices $r_3$ and $r_4$ are mutually exclusive integers satisfying $r_3 \ne r_4 \ne i$. The essence of local enhancement is to make some individuals search near the optimal vector of the current population. While keeping the diversity of the population, the greediness of good individuals is increased to ensure that the algorithm finds the global optimal solution quickly. The local search ability of the algorithm is improved by the perturbation factor $P_l$, which accelerates convergence, especially when approximating the global optimal solution.
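A minimal sketch of the operator for a minimization problem (so the best individual is the one with the lowest objective value); the unconditional reassignment with probability LE matches the description above, and the best individual is never touched:

```python
import numpy as np

def local_enhance(pop, fit, f, LE, Pl, seed=0):
    """Local enhancement operator: with probability LE, move an individual
    (never the current best) around the best vector of the population:
        X_i <- X_best + Pl * (X_r3 - X_r4),  with r3 != r4 != i.
    Returns enhanced copies of the population and fitness arrays.
    """
    rng = np.random.default_rng(seed)
    pop, fit = pop.copy(), fit.copy()
    N = len(pop)
    best = int(np.argmin(fit))            # minimization: lowest f is best
    for i in range(N):
        if i == best or rng.random() >= LE:
            continue                       # keep the best individual intact
        r3, r4 = rng.choice([j for j in range(N) if j != i], 2, replace=False)
        pop[i] = pop[best] + Pl * (pop[r3] - pop[r4])
        fit[i] = f(pop[i])
    return pop, fit
```

Because the best individual is excluded from reassignment, the best fitness in the population can never get worse after the operator is applied.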
After the adaptive and local enhanced improvement of DE, the flow chart of ADLEDE is shown in Figure 2.

ADLEDE-WENN Forecasting Model
In ENN, a gradient descent algorithm is usually chosen as the parameter learning algorithm, such as gradient descent (GD), GDM, or Levenberg-Marquardt (LM). These algorithms share a critical defect: they easily fall into local optima. Therefore, it is essential to find a new parameter adjustment method for ENN. In this paper, we apply the ADLEDE algorithm to adjust the parameters of ENN. Based on the analysis above, the main steps of ADLEDE-WENN are described in Figure 2.

Comparative Models
To evaluate the effectiveness of the ADLEDE-WENN model in forecasting the closing price of foreign exchange rates, the comparative models ENN, WENN and ADLEDE-ENN are selected. ENN (described in Subsection 2.1) is the basic model, and the other models are modified from it. WENN replaces the sigmoid function in the hidden layer of ENN with a wavelet function, as introduced in Subsection 2.3. ADLEDE-WENN is analogous to ADLEDE-ENN, which was studied in Subsection 4.1; for the transfer function of the hidden layer, the sigmoid function is used in ADLEDE-ENN and the wavelet function in ADLEDE-WENN.
First, by comparing WENN with ENN, the effect of the wavelet function on the Elman neural network can be studied. Second, by comparing ADLEDE-ENN with ENN, the feasibility and effectiveness of the ADLEDE algorithm in adjusting the parameters of the Elman neural network can be evaluated. Third, by comparing ADLEDE-ENN with ADLEDE-WENN, the effect of the wavelet function when ADLEDE is applied to the Elman neural network can be examined. Finally, to evaluate the combined effect of the ADLEDE algorithm and the wavelet function on the Elman neural network, ADLEDE-WENN can be compared with ENN.

Performance Measure
In this part, we comparatively evaluate the prediction effects of the wavelet function and the ADLEDE algorithm applied to ENN. To further analyze the forecasting performance of ENN, WENN, ADLEDE-ENN and ADLEDE-WENN, we choose several measures of error and trend performance: mean square error (MSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and error limit proportion (ELP). These are all error-type measures of the deviation between predicted values and actual data, and they reflect the global prediction error. The corresponding definitions are given as follows:

Mean square error: $MSE = \dfrac{1}{N}\sum_{i=1}^{N}(\hat{y}_i - y_i)^2$

Mean absolute error: $MAE = \dfrac{1}{N}\sum_{i=1}^{N}|\hat{y}_i - y_i|$

Mean absolute percentage error: $MAPE = \dfrac{1}{N}\sum_{i=1}^{N}\left|\dfrac{\hat{y}_i - y_i}{y_i}\right| \times 100\%$

Error limit proportion: $ELP = \dfrac{1}{N}\sum_{i=1}^{N} \mathbf{1}\!\left(\left|\dfrac{\hat{y}_i - y_i}{y_i}\right| \le \alpha\%\right) \times 100\%$

where $\hat{y}_i$ is the predicted value and $y_i$ is the real value; N denotes the number of evaluated data points; and α% is the limit level. MSE, MAE and MAPE are negative indexes: the smaller their values, the less the forecasting results deviate from the actual values. ELP is a positive index: the higher its value, the more precise the forecast.
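The four measures can be computed in a few lines. In the sketch below, the ELP formula (share of points whose absolute percentage error stays within the α% limit) is our reading of the "error limit proportion"; the paper reports ELP-0.5%, i.e. α = 0.5:

```python
import numpy as np

def forecast_metrics(y_pred, y_true, alpha=0.5):
    """MSE, MAE, MAPE (%) and ELP (%) for a forecast series.

    alpha : the error limit in percent (ELP-0.5% corresponds to alpha=0.5).
    """
    err = y_pred - y_true
    ape = np.abs(err) / np.abs(y_true) * 100.0   # absolute percentage error
    return {
        "MSE": float(np.mean(err**2)),
        "MAE": float(np.mean(np.abs(err))),
        "MAPE": float(np.mean(ape)),
        "ELP": float(np.mean(ape <= alpha) * 100.0),
    }
```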

The Foreign Exchange Rate Data
In order to study the validity of the models, we selected 4 kinds of foreign exchange rate closing prices: EURUSD, USDCNH, GBPUSD and GBPCNY, with data from Wind. The closing prices of EURUSD, USDCNH and GBPUSD were from the International Foreign Exchange Market (IFEM), and the closing price of GBPCNY was from the China Foreign Exchange Trade System (CFETS). The information on the foreign exchange rates is shown in Table 1. For each foreign exchange rate, 240 days of closing prices were chosen. The data set was divided into training and testing sets: the training set consisted of the first 170 days, accounting for 71% of the total data, and the testing set consisted of the remaining 70 days, accounting for 29% of the total data.

Normalization Preprocessing
The observed foreign exchange rate closing price is non-normal data. Before using the ANNs to predict the price, the price data should be normalized. The minimum and maximum values of the data set are used to normalize the data:

$$x'_k = \frac{x_k - x_{min}}{x_{max} - x_{min}},$$

and after forecasting, anti-normalization is used to obtain the true value by the formula

$$x_k = x'_k \left(x_{max} - x_{min}\right) + x_{min},$$

where $x_k$ is the observed (anti-normalized) closing price; $x_{min}$ and $x_{max}$ are the minimum and maximum prices of the data; and $x'_k$ is the normalized (predicted) data. In this paper, MSE is also applied to measure the performance of the ANN models. We define MSE and MSE* for different uses: MSE marks results computed on the normalized data, and MSE* marks results computed on the anti-normalized data.
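The min-max normalization and its inverse can be sketched as:

```python
import numpy as np

def normalize(x, x_min, x_max):
    # Min-max scaling to [0, 1] before feeding the network
    return (x - x_min) / (x_max - x_min)

def denormalize(x_norm, x_min, x_max):
    # Anti-normalization: recover prices from network outputs
    return x_norm * (x_max - x_min) + x_min
```

The two functions are exact inverses of each other, so predicted values can be mapped back to the price scale before computing MSE*.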

Experimental Tools and Configuration of System
In this paper, Matlab was used to implement the 4 models. For the Elman neural network, the neural network toolbox of MATLAB R2016a was used in the experiments, with the main parameters described in Table 2. The parameters of the ADLEDE algorithm were set as in Table 3.

Results and Discussion
The foreign exchange rate data were described in Subsection 5.1, covering EURUSD, GBPCNY, GBPUSD and USDCNH. To study the performance of the models (ENN, WENN, ADLEDE-ENN and ADLEDE-WENN), all 4 models were run on each foreign exchange rate. ANNs have an inherent randomness: exactly the same neural network can never be obtained twice, because the trained parameters ($w_{mk}$, $r_{hk}$, $v_{kn}$, $a_k$, $b_k$, etc.) differ between runs. To reduce the impact of this randomness, each model was simulated 20 times, and the result is the average of the 20 runs. The results are shown in Table 4. Not all figures of the experiments are shown in this paper; some figures for EURUSD are shown in Figures 3∼6.

The Impact of Wavelet Function to ENN
When the wavelet (Morlet) function was used in ENN, the efficiency of the traditional learning method (GDM in this paper) decreased considerably. In the experiments with ENN and WENN, the training function was set to "traingdm", which refers to GDM. According to the results in Table 4, for the same foreign exchange rate all the negative index values (MSE*, MAE and MAPE) of WENN were higher, and the positive index (ELP-0.5%) was lower, than those of ENN. The MSE of ENN and WENN is shown in Figure 3(c) and Figure 4(c). In Figure 3(c), MSE decreased smoothly and reached its minimum value of 2.7284×10⁻³ at the maximum iteration 15,000. In Figure 4(c), however, MSE fluctuated and reached its minimum value of 5.6847×10⁻³ at iteration 8,669. We therefore conclude that the wavelet function in WENN disrupted the convergence of the traditional learning algorithm. This "disruption problem" was caused by the non-linear behavior of the wavelet function: in the WENN experiments, the wavelet function did not improve the performance of ENN but made the results worse.
To evaluate the effect of the wavelet function, the ADLEDE algorithm was applied to both ENN (ADLEDE-ENN) and WENN (ADLEDE-WENN). The performance of ADLEDE is described in Subsection 6.2. According to Table 4, for the same foreign exchange rate the negative index values (MSE*, MAE and MAPE) of ADLEDE-ENN were higher, and the positive index (ELP) was lower, than those of ADLEDE-WENN. Between the ADLEDE-ENN and ADLEDE-WENN experiments, the only difference was the transfer function of the hidden layer: the wavelet (Morlet) function for ADLEDE-WENN and the sigmoid function for ADLEDE-ENN. So we conclude that the differences in the experimental results were caused by the wavelet function. According to the testing results for EURUSD, the MSE, MAE and MAPE values of ADLEDE-WENN decreased by 12.64%, 6.36% and 6.34%, and its ELP-0.5% value increased by 1.85%.

The Performance of the ADLEDE Algorithm
Based on the analysis in Subsection 6.1, a new parameter training method was needed for WENN, one that could give full play to the non-linear performance of the wavelet function. The ADLEDE algorithm, which features fast convergence and global optimization capability, was applied to train the parameters of the neural network in the experiments.
The performance of the ADLEDE algorithm was studied by comparing ENN and ADLEDE-ENN. According to Table 4, for the same foreign exchange rate the negative index values (MSE*, MAE and MAPE) of ENN were higher, and the positive index (ELP-0.5%) was lower, than those of ADLEDE-ENN. The only difference between the two experiments was that in ADLEDE-ENN the ADLEDE algorithm was used to train the parameters of the neural network. According to the testing results for EURUSD, the MSE*, MAE and MAPE values of ADLEDE-ENN decreased by 15.92%, 7.80% and 7.82%, and its ELP-0.5% value increased by 1.92%.
The performance of the ADLEDE algorithm was also studied by comparing ADLEDE-ENN and ADLEDE-WENN. In the training process of ADLEDE, the fitness values (MSE) of ADLEDE-ENN and ADLEDE-WENN are shown in Figure 7. In the ADLEDE-ENN experiment, the ADLEDE algorithm terminated at 12 generations with the optimal result 2.523×10⁻³; in the ADLEDE-WENN experiment, it terminated at 11 generations with the optimal result 2.507×10⁻³. The parameters were first trained by the ADLEDE algorithm, then sent to the neural network and trained further by the traditional algorithm (GDM). According to Figure 5(c) and Figure 6(c), the MSE of ADLEDE-ENN was 2.5226×10⁻³, and the MSE* of ADLEDE-WENN was 2.5046×10⁻³. After 1,000 further iterations of the traditional learning algorithm, the MSE of both neural networks improved only marginally. We therefore conclude that applying the ADLEDE algorithm to train the parameters of the neural network is effective: after the ADLEDE training process, the network was already close to the global optimal solution, and the traditional Elman learning algorithm contributed little more to the forecasting result. In our experiments, ADLEDE not only solved the "disruption problem" caused by the wavelet function, but also exploited the non-linear characteristics of the wavelet function. By applying the ADLEDE algorithm to WENN, the forecasting performance of the neural network improved considerably. According to the testing results for EURUSD, GBPCNY, GBPUSD and USDCNH, all the indexes show that ADLEDE-WENN ≻ ADLEDE-ENN ≻ ENN (where "≻" means "better than"). ADLEDE-ENN ≻ ENN means that the ADLEDE algorithm is an effective method to train the parameters of the neural network; ADLEDE-WENN ≻ ADLEDE-ENN means that the wavelet (Morlet) function in the hidden layer improved the performance of the neural network.

The Structure of the Models and the Over-Fitting Problem
The structure of ENN, including the number of input layer nodes, hidden layer nodes and context layer nodes, has a significant impact on its performance. In the experiments, the same structure was applied to forecast the different foreign exchange rates, with the parameters shown in Table 2. According to the testing results of ELP-0.5%, the structure suited EURUSD (89.2308) ≻ GBPUSD (85.8462) ≻ USDCNH (84.1372) ≻ GBPCNY (81.4625). The experiments thus show that the same neural network structure performs differently across foreign exchange rates. In other words, for each foreign exchange rate we should select a structure that can reflect the law of its fluctuations to predict the closing price. In the ADLEDE-WENN experiment on GBPCNY (marked ADLEDE-WENN* in Table 4), when the number of hidden and context layer nodes was increased from 10 to 12, all the indexes (MSE*, MAE, MAPE and ELP-0.5%) improved considerably.
The over-fitting problem is common in neural networks: the network performs well in training but badly in testing. In the USDCNH experiments, over-fitting was severe for ENN and ADLEDE-ENN. According to the MSE*, MAE, MAPE and ELP-0.5% results, the performance of these networks in testing was very poor, which means the trained networks had failed. Comparing the D-values (the differences between testing and training metrics) of ADLEDE-ENN and ADLEDE-WENN: where the testing value was less than the training value, the absolute D-value decreased; where the testing value was greater than the training value, the absolute D-value increased. From these results we conclude that the over-fitting problem in ADLEDE-WENN was less serious than in ADLEDE-ENN, and that this improvement was also brought about by the wavelet function in the hidden layer of WENN.

Conclusion
In the present paper, the ADLEDE-WENN forecasting model is established to forecast the fluctuations of foreign exchange rates. Based on the experiments on EURUSD, GBPCNY, GBPUSD and USDCNH, the following can be concluded: Firstly, when the wavelet function is applied as the transfer function in the hidden layer of ENN, its non-linear character can improve the forecasting performance of ENN, but it decreases the efficiency of the traditional learning algorithm at the same time. A new parameter training algorithm is therefore needed in this circumstance.
Secondly, training the parameters of the neural network with the ADLEDE algorithm is feasible and effective. When the ADLEDE algorithm is applied to WENN, it solves the "disruption problem" and takes advantage of the non-linear character of the wavelet function, improving the performance of the neural network considerably.
Thirdly, the structure of the neural network has a significant impact on its performance; different structures are needed to forecast different foreign exchange rates.
Finally, the over-fitting problem is common in neural network applications, and applying the wavelet function in the neural network helps to weaken it.
Some problems remain to be studied. There are many other wavelet functions, and only the Morlet function was studied in this paper; the performance of other wavelet functions applied to the neural network deserves further study. Considering the impact of the structure on the neural network, how to find a suitable structure for each foreign exchange rate is also an open problem.