
An Adaptive Fuzzy Wavelet Network with Gradient Learning for Nonlinear Function Approximation

  • Yusuf Oysal and Sevcan Yilmaz

Abstract

In this article, a new adaptive fuzzy wavelet neural network (AFWNN) model is proposed for nonlinear function approximation problems. The AFWNN model is based on the traditional Takagi-Sugeno-Kang fuzzy system. Specifically, this model replaces the membership functions of fuzzy rules with wavelet basis functions, which are known to have time and frequency localization properties, i.e., they can approximate patterns in both the time and frequency domains. The structure of the AFWNN model is derived from that of the adaptive neuro-fuzzy inference system (ANFIS). However, the AFWNN improves on the ANFIS by replacing the Gaussian functions in the hidden layer with wavelet basis functions. The AFWNN model is trained using a gradient-based optimization algorithm and is then tested on three function approximation problems involving time series prediction. For certain types of nonlinear time series, for instance fractal processes, the AFWNN is found to be substantially more accurate than alternative methods.

1 Introduction

Fuzzy systems are an effective tool for dealing with certain types of nonlinear processes or time series. Forecasting time series that exhibit nonlinear variability is known to be difficult, with many of the existing modeling methods yielding only equivocal results. To develop a fuzzy system, human experts write IF-THEN rules based on existing knowledge. In the case of complex processes, however, it can be difficult for human experts to examine all the input-output data and generate the necessary rules. One recent methodological innovation, which appears highly promising for forecasting complex systems, has been to combine neural networks with fuzzy logic. The resulting “neuro-fuzzy systems” offer the advantage that they learn their rules directly from the input-output data [12, 13, 16]. Neuro-fuzzy systems have fast and accurate learning properties, and are also able to accommodate expert knowledge.

A further innovation has been to combine neuro-fuzzy systems with wavelets. Wavelets are known to have good modeling properties over a range of frequencies, and for this reason they have been used as activation functions in neuro-fuzzy systems, yielding fuzzy wavelet neural networks (FWNN) [1, 11, 27, 37, 38]. These models – neural networks incorporating both wavelets and fuzzy logic – have been used successfully in various applications, for instance function approximation in Ho et al. [11] and control of nonlinear systems in Zekri et al. [38]. In Srivastava et al. [27], the inputs first enter a discrete wavelet transform block; the output of this block is then fuzzified and forms the input to a single neural network, and the resulting model is used for system identification and control problems. The FWNN proposed by Abiyev and Kaynak [1] uses wavelets in developing fuzzy rules for system identification and control. The model introduced in this article builds on our previous study [37], in which three new FWNN models were presented for prediction and identification of nonlinear dynamical systems. Those FWNN models are obtained from the traditional Takagi-Sugeno-Kang fuzzy system by replacing the THEN part of fuzzy rules with wavelet basis functions, which can localize in both the time and frequency domains. The first and last models use summation and multiplication, respectively, of dilated and translated versions of single-dimensional wavelet basis functions, and in the second model, the THEN parts of the rules consist of a radial wavelet function. Gaussian-type activation functions are used in the IF part of the fuzzy rules. The FWNN models proposed in our previous study [37] have impressive generalization ability.

This study extends the earlier work to develop a more advanced type of model. The basis for the model is the adaptive neuro-fuzzy inference system (ANFIS) [13]. However, the configuration proposed here incorporates wavelets to create a new type of network model, which we call an adaptive FWNN (AFWNN). In a standard ANFIS, the activation functions in the hidden layer (the antecedent part of the rule base) are usually Gaussian functions, whereas the rule output (the consequent part) is a polynomial function of the input variables. In the proposed AFWNN model, the ANFIS neuron activation functions are replaced with wavelet basis functions. The consequent rule output functions are developed using weighted summations of dilated and translated versions of wavelet functions of the input variables as in the FWNN-S model introduced in our previous study [37]. The model parameters are estimated using a gradient-based training algorithm.

The rest of the article is organized as follows. The properties of wavelet basis functions and the structure of the proposed AFWNN are set out in Section 2. The training algorithm and parameter update rules for the AFWNN are introduced in Section 3. Section 4 compares the performance of the AFWNN to other methods commonly used in the time series literature, using three data sets. Section 5 concludes.

2 AFWNN Model Architecture

Reduced to its essentials, the AFWNN integrates wavelet functions into the adaptive structure of ANFIS networks. Figure 1 shows the ANFIS architecture. In prior work, we integrated wavelet activation functions into the second layer as input membership functions. We termed the resulting model an adaptive wavelet network (AWN) and obtained successful results in nonlinear function approximation problems [23]. In our second study [37], instead of changing the form of the membership functions of the ANFIS in the second layer, the polynomial functions of the input variables in the fifth layer were replaced with wavelet basis functions, which can localize in both the time and frequency domains. The main aim of this study is to combine these pieces and obtain an effective model for nonlinear function approximation. The differences between the ANFIS and the fuzzy wavelet models of our previous works are illustrated in Figure 1.

Figure 1: The Modifications Done on an ANFIS. FWNN [37] and AWN [23] (solid line) and AFWNN (dashed line).

As noted above, in an ANFIS, the activation functions of hidden neurons are Gaussian. A deficiency of Gaussian-based ANFIS networks is their limited ability to localize in the frequency domain. Therefore, it is very difficult to use Gaussian-based functions in some applications [26]. By comparison, the AFWNN should be superior, in that it has the ability to localize in both the time and frequency domains. The wavelet functions used here are of the form

$$\phi_i\!\left(\frac{x_i-\mu_i}{\sigma_i}\right)=\left(1-\left(\frac{x_i-\mu_i}{\sigma_i}\right)^2\right)\exp\!\left(-\frac{1}{2}\left(\frac{x_i-\mu_i}{\sigma_i}\right)^2\right). \tag{1}$$

Here, μi and σi are the translation (center) and dilation (standard deviation) parameters, respectively. If the dilation parameter is changed, the width of the wavelet function’s support region changes, but the number of cycles does not. When the dilation parameter decreases, however, the peak of the spectrum shifts to a higher frequency. By implication, the AFWNN has the capacity to model cyclical behavior over a wide range of frequencies.
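To make the dilation effect concrete, the short NumPy sketch below (ours, not from the authors’ implementation; `mexican_hat` is a name we introduce) evaluates eq. (1) for two dilation values:

```python
import numpy as np

def mexican_hat(x, mu=0.0, sigma=1.0):
    """Mexican hat wavelet of eq. (1) with translation mu and dilation sigma."""
    z = (x - mu) / sigma
    return (1.0 - z**2) * np.exp(-0.5 * z**2)

x = np.linspace(-5.0, 5.0, 1001)
# A smaller dilation narrows the support region and shifts the
# spectral peak toward higher frequencies, as described above.
narrow = mexican_hat(x, sigma=0.5)
wide = mexican_hat(x, sigma=2.0)
```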

The six-layer computational structure of the AFWNN is shown in Figure 1. This structure is explained layer by layer below:

  • Layer 1: This layer is the input layer. Neurons in this layer simply transmit the input signals x1, x2, …, xn to the second layer.

  • Layer 2: This layer is the fuzzification layer, and neurons in this layer represent the fuzzy sets used in the antecedent parts of the fuzzy rules:

    $$\text{IF } x_1 \text{ is } A_1^{i_1} \text{ and } x_2 \text{ is } A_2^{i_2} \text{ and } \cdots \text{ and } x_n \text{ is } A_n^{i_n} \text{ THEN } \Psi_l(\mathbf{x}), \tag{2}$$

    where x1, x2, …, xn are input variables and $A_j^{i_j}$ is the $i_j$th Mexican hat membership function (MF) for the jth input, given by

    $$A_j^{i_j}=\left(1-\left(\frac{x_j-\mu_{ij}}{\sigma_{ij}}\right)^2\right)\exp\!\left(-\frac{1}{2}\left(\frac{x_j-\mu_{ij}}{\sigma_{ij}}\right)^2\right), \quad j=1,2,\ldots,n, \quad i_j=1,2,\ldots,l_j. \tag{3}$$

    A fuzzification neuron receives an input and determines the degree to which this input belongs to the neuron’s fuzzy set. The outputs of this layer are the values of the wavelet membership functions for the input values. There are l1 membership functions in total for the first input, l2 for the second input, and so on.

  • Layer 3: This layer is the firing strength layer. Each node in this layer calculates a rule’s effect on the consequent part by applying the product T-norm operator to the AND-connected antecedent parts of that fuzzy rule. The output of the lth node in this layer is

    $$\eta_l=\prod_{j=1}^{n}A_j^{i_j}(x_j), \qquad (l=i_1 i_2 \cdots i_n,\; i_1=1,\ldots,l_1,\; i_2=1,\ldots,l_2,\; \ldots,\; i_n=1,\ldots,l_n). \tag{4}$$

    Each possible combination of membership functions of all the inputs represents one fuzzy rule. For example, the lth rule in Figure 1 combines the second membership functions of the first and second inputs with the last membership function of the last input variable. Thus, the total number of rules m is given by

    $$m=\prod_{i=1}^{n}l_i. \tag{5}$$
  • Layer 4: This layer is the normalization layer. Each neuron in this layer calculates the normalized activation strength of each rule by

    $$\bar{\eta}_l=\frac{\eta_l}{\sum_{i=1}^{m}\eta_i}, \qquad l=1,\ldots,m. \tag{6}$$

    The normalized activation strength is the ratio of the activation strength of a given combination to the sum of activation strengths of all combinations. It represents the contribution of a given combination to the final result.

  • Layer 5: This layer calculates the weighted consequent value of a given rule as follows:

    $$f_l=\bar{\eta}_l\,\Psi_l, \qquad l=1,\ldots,m. \tag{7}$$

    The AFWNN proposed here uses a summation of dilated and translated versions of wavelet functions in the consequent part of the fuzzy rules, given by

    $$\Psi_l=\sum_{i=1}^{n}w_{il}\left(1-\left(\frac{x_i-b_{il}}{c_{il}}\right)^2\right)\exp\!\left(-\frac{1}{2}\left(\frac{x_i-b_{il}}{c_{il}}\right)^2\right), \qquad l=1,\ldots,m. \tag{8}$$
  • Layer 6: This layer contains only a single summation node and it computes the overall output, which is given by

    $$y=\sum_{l=1}^{m}f_l. \tag{9}$$
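Putting the six layers together, the following is a minimal NumPy sketch of one forward pass. It assumes, for simplicity, the same number l of membership functions for every input (the paper allows a different lj per input), and all function and variable names are ours:

```python
import itertools
import numpy as np

def mexican_hat(z):
    """Mexican hat wavelet, eq. (1), applied elementwise."""
    return (1.0 - z**2) * np.exp(-0.5 * z**2)

def afwnn_forward(x, mu, sigma, w, b, c):
    """One AFWNN forward pass (layers 2-6) for a single input vector x.

    x         : shape (n,) input vector
    mu, sigma : shape (n, l) antecedent MF translations/dilations
    w, b, c   : shape (n, m) consequent weights/translations/dilations,
                with m = l**n rules.
    """
    n, l = mu.shape
    A = mexican_hat((x[:, None] - mu) / sigma)            # layer 2, eq. (3)
    rules = list(itertools.product(range(l), repeat=n))   # one index tuple per rule
    eta = np.array([np.prod([A[j, ij] for j, ij in enumerate(r)])
                    for r in rules])                      # layer 3, eq. (4)
    eta_bar = eta / eta.sum()                             # layer 4, eq. (6)
    psi = np.array([np.sum(w[:, k] * mexican_hat((x - b[:, k]) / c[:, k]))
                    for k in range(len(rules))])          # consequents, eq. (8)
    return np.sum(eta_bar * psi)                          # layers 5-6, eqs. (7), (9)
```

Enumerating every combination of membership functions makes the exponential rule growth of eq. (5) explicit.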

The AFWNN model introduced here has several advantages, notably the high resolution of the wavelets, the inference mechanism inherent in fuzzy systems, and the learning potential of neural networks. More specifically:

  • The wavelet scaling functions in the fuzzy rules have the multiresolution property, which allows them to capture fast transients in function approximation problems. The wavelets can capture both the global (low frequency) and local (high frequency) behavior of any function, and can do so quite easily.

  • As in the ANFIS, the accommodation of experts’ knowledge as rules is possible according to the problem under consideration.

  • The neural network representation of the AFWNN model presents many choices because of its flexible architecture. For example, as a T-norm operator for the AND connection of the rule’s antecedent parts, we have many alternatives such as the product operator used here in the third layer. Also, many other forms of membership functions and rules can be selected in the fuzzy system with a suitable inference mechanism, depending on the problem or data set. Finally, the AFWNN also provides us more choices over output wavelet scaling functions.

  • Depending on the initial conditions, the AFWNN model can converge rapidly, due both to hybrid learning and the ability to construct reasonably good input membership functions.

3 AFWNN Training

The AFWNN network parameters are generally obtained through a learning process. The unknown parameters of the AFWNN model are the translation parameters (μ) and dilation parameters (σ) of the wavelet membership functions in the antecedent part of the rules; the translation (b) and dilation (c) parameters of the wavelet functions; and the weight (w) and bias (p) parameters in the consequent part of the rules. Training involves minimizing a cost function, or performance index (PI), generally some measure of the error. Here, the mean square error (MSE) is selected as the performance index:

$$E=\frac{1}{N}\sum_{k=1}^{N}(y-y_d)^2, \tag{10}$$

where N is the number of input-output pairs of the function to be approximated, yd is the desired output, and y is the AFWNN network output. The hybrid learning mechanism used in ANFIS training can also be used for the AFWNN: the antecedent parameters are initially selected as small random values, and the initial consequent layer parameters are then determined by least-squares methods.

In this study, however, training is done with a gradient-based conjugate gradient algorithm [9]; more specifically, we use a scaled conjugate gradient method (SCGM). The steps of the training algorithm for the AFWNN are as follows:

  1. Initialize the activation parameters of the AFWNN model.

  2. Load the data for training.

  3. While (termination condition is not satisfied) do

    1. Calculate the output (y) of the AFWNN model (forward pass).

    2. Calculate the error e = (y − y_d).

    3. Compute the gradients of the MSE with respect to all AFWNN network parameters:

      • $g_k=\left.\dfrac{\partial E}{\partial p}\right|_{p=p_k}$ (backward pass).

    4. Update the activation parameters and output layer parameters by

      • $$p_{k+1}=p_k+\alpha_k d_k \quad \text{(parameter update)}, \tag{11}$$
        $$\alpha_k=\arg\min_{\alpha>0} J(p_k+\alpha d_k) \quad \text{(optimal step length determination)}. \tag{12}$$

        Search direction update:

        $$d_{k+1}=-S_i g_{k+1}+\beta_k d_k, \qquad d_0=-S_0 g_0, \qquad S_0=I, \tag{13}$$
        $$\beta_k=\frac{(\Delta g_{k-1})^T S_i g_k}{(g_{k-1})^T S_i g_{k-1}}, \qquad \Delta g_{k-1}=g_k-g_{k-1}, \tag{14}$$
        $$S_{i+1}^{k+1}=S_{i+1}^{k}+\eta_k d_k (d_k)^T, \qquad S_{i+1}^{0}=0, \tag{15}$$
        $$\eta_k=\frac{\alpha_k}{(g_k)^T S_i g_k}, \tag{16}$$
        $$S_{i+1}=S_{i+1}^{r}, \qquad p_0^{i+1}=p_r^{i}. \tag{17}$$
    5. Check whether the epoch is finished. If not, return to step 3.1.

      End

  4. Load data for testing.

Here, i is the index of the internal cycle, which runs up to the parameter value r, and Si is an r × r symmetric matrix that is updated when i reaches r. This method requires only the gradient of the objective function at each iteration. In principle, quasi-Newton algorithms construct a model of the objective function by measuring the changes in the gradients, which is important for fast and accurate convergence. The algorithm is robust, and its rate of convergence is fast enough for most practical purposes. A further important advantage of the SCGM is that it does not require the calculation of second derivatives.
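As a rough sketch of how such a gradient-only training loop can be driven, the snippet below substitutes SciPy’s general-purpose conjugate gradient minimizer for the partitioned SCGM above; `unpack_params`, `afwnn_forward`, and `afwnn_gradients` are hypothetical helpers standing in for the forward pass and eqs. (18)–(23):

```python
import numpy as np
from scipy.optimize import minimize

def mse_and_grad(p, X, Yd, unpack_params, afwnn_forward, afwnn_gradients):
    """Performance index E of eq. (10) and its gradient for parameter vector p."""
    params = unpack_params(p)                 # hypothetical: p -> (mu, sigma, w, b, c)
    y = np.array([afwnn_forward(x, *params) for x in X])
    E = np.mean((y - Yd) ** 2)
    g = afwnn_gradients(p, X, Yd)             # hypothetical: flattened eqs. (18)-(23)
    return E, g

# With jac=True, the objective returns (value, gradient) in one call:
# result = minimize(mse_and_grad, p0,
#                   args=(X_train, Y_train, unpack_params, afwnn_forward, afwnn_gradients),
#                   method='CG', jac=True, options={'maxiter': 500})
```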

This training algorithm for the AFWNN consists of three main steps at each epoch: model output calculation, parameter gradient calculation, and parameter adaptation (the update of the neural network parameters). When large numbers of parameters are involved, training can consume considerable time and memory, slowing down the process. To mitigate this problem, the training set can be divided into parts, so that the fuzzy system outputs and gradients can be calculated in parallel. This speeds up the learning process.

At each iteration of the training algorithm, the gradients of the performance index with respect to all unknown parameters q of the AFWNN, $g=\frac{\partial E}{\partial q}$, are computed. The gradients of the MSE [eq. (10)] with respect to the unknown parameters of the AFWNN are calculated by the following formulas for i = 1, …, n and l = 1, …, m:

$$\frac{\partial E}{\partial w_{il}}=\frac{\partial E}{\partial y}\,\bar{\eta}_l\left(1-\left(\frac{x_i-b_{il}}{c_{il}}\right)^2\right)\exp\!\left(-\frac{1}{2}\left(\frac{x_i-b_{il}}{c_{il}}\right)^2\right), \tag{18}$$
$$\frac{\partial E}{\partial b_{il}}=\frac{\partial E}{\partial y}\,\bar{\eta}_l w_{il}\,\frac{x_i-b_{il}}{c_{il}^2}\left(3-\left(\frac{x_i-b_{il}}{c_{il}}\right)^2\right)\exp\!\left(-\frac{1}{2}\left(\frac{x_i-b_{il}}{c_{il}}\right)^2\right), \tag{19}$$
$$\frac{\partial E}{\partial c_{il}}=\frac{\partial E}{\partial y}\,\bar{\eta}_l w_{il}\,\frac{(x_i-b_{il})^2}{c_{il}^3}\left(3-\left(\frac{x_i-b_{il}}{c_{il}}\right)^2\right)\exp\!\left(-\frac{1}{2}\left(\frac{x_i-b_{il}}{c_{il}}\right)^2\right), \tag{20}$$
$$\frac{\partial E}{\partial \mu_{ij}}=\frac{\partial E}{\partial y}\,\frac{\partial y}{\partial A_j^{i_j}}\,\frac{x_j-\mu_{ij}}{\sigma_{ij}^2}\left(3-\left(\frac{x_j-\mu_{ij}}{\sigma_{ij}}\right)^2\right)\exp\!\left(-\frac{1}{2}\left(\frac{x_j-\mu_{ij}}{\sigma_{ij}}\right)^2\right), \quad j=1,\ldots,n, \tag{21}$$
$$\frac{\partial E}{\partial \sigma_{ij}}=\frac{\partial E}{\partial y}\,\frac{\partial y}{\partial A_j^{i_j}}\,\frac{(x_j-\mu_{ij})^2}{\sigma_{ij}^3}\left(3-\left(\frac{x_j-\mu_{ij}}{\sigma_{ij}}\right)^2\right)\exp\!\left(-\frac{1}{2}\left(\frac{x_j-\mu_{ij}}{\sigma_{ij}}\right)^2\right), \quad j=1,\ldots,n, \tag{22}$$
$$\frac{\partial E}{\partial y}=\frac{2}{N}\sum_{k=1}^{N}(y-y_d). \tag{23}$$

For the above calculations, the partial derivative of the output y with respect to the membership functions of each input variable is needed. For example, for the first input variable, this can be calculated as

$$\frac{\partial y}{\partial A_1^{i_1}(x_1)}=\frac{\displaystyle\sum_{i_2=1}^{l_2}\cdots\sum_{i_n=1}^{l_n}\Psi_{i_1,i_2,\ldots,i_n}\prod_{k=2}^{n}A_k^{i_k}(x_k)-y\sum_{i_2=1}^{l_2}\cdots\sum_{i_n=1}^{l_n}\prod_{k=2}^{n}A_k^{i_k}(x_k)}{\displaystyle\sum_{i_1=1}^{l_1}\sum_{i_2=1}^{l_2}\cdots\sum_{i_n=1}^{l_n}\prod_{k=1}^{n}A_k^{i_k}(x_k)}, \qquad i_1=1,\ldots,l_1. \tag{24}$$
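Analytic gradients such as eqs. (18)–(24) are easy to get wrong, so a standard sanity check (our suggestion, not part of the paper) is to compare them against central finite differences before training:

```python
import numpy as np

def check_gradient(loss_and_grad, p, eps=1e-6, tol=1e-4):
    """Compare an analytic gradient against central finite differences.

    loss_and_grad : callable returning (E, dE/dp) for a flat parameter vector p.
    """
    _, g = loss_and_grad(p)
    g_num = np.zeros_like(p)
    for i in range(p.size):
        dp = np.zeros_like(p)
        dp[i] = eps
        e_plus, _ = loss_and_grad(p + dp)
        e_minus, _ = loss_and_grad(p - dp)
        g_num[i] = (e_plus - e_minus) / (2.0 * eps)
    rel_err = np.linalg.norm(g - g_num) / (np.linalg.norm(g) + np.linalg.norm(g_num) + 1e-12)
    return rel_err < tol, rel_err
```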

In principle, in the AFWNN model, there are no restrictions on selecting the number of membership functions. However, in the simulation examples, the number of membership functions of each input variable was set in advance. The number of input variables can also be determined using clustering methods; however, in general, when there are many input variables, a large number of rules will be encountered in the AFWNN model. As a result, we may face the problem of “the curse of dimensionality” in the AFWNN. This argues for imposing some prior restrictions on the structure of the model, for instance by restricting the number of membership functions.

4 Simulation Examples

To illustrate the performance of the proposed AFWNN, three data sets are used. In each instance, the test problem involves predicting a time series. The results of the simulations are then compared with alternative models.

4.1 Prediction of a Gas Furnace Time Series

The gas furnace data originally used by Box and Jenkins [3] are benchmark data that have been used by many researchers for testing identification and prediction algorithms. These nonlinear time series data were taken from a combustion process of a methane-air mixture. The input u(t) of this process is the gas flow rate into the furnace, and the output y(t) is the CO2 concentration in the outlet gas. The sampling interval is 9 s. Following previous researchers, u(t − 4) and y(t − 1) are selected as inputs to the AFWNN to predict the output y(t). During the learning phase, the first 200 data points were used as the training set, and the remaining 92 points were used as the test set for evaluating the performance of the model.
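For concreteness, the lagged input construction and the 200/92 split might be coded as follows (a sketch of ours, assuming u and y are the 296-point Box-Jenkins series as NumPy arrays; the first 4 points are lost to the lags):

```python
import numpy as np

def make_gas_furnace_sets(u, y):
    """Build input pairs [u(t-4), y(t-1)] -> target y(t), then split 200/92."""
    t = np.arange(4, len(y))                  # t must allow the u(t-4) lag
    X = np.column_stack([u[t - 4], y[t - 1]])
    Y = y[t]
    return (X[:200], Y[:200]), (X[200:], Y[200:])
```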

An AFWNN was trained and tested using both two and three membership functions for each input, which form four and nine fuzzy rules, respectively. The training of the AFWNN model was carried out for 500 epochs. Figure 2 shows the actual time series with the output of the AFWNN with three membership functions for each input and the prediction error.

Figure 2: AFWNN Model Results for Gas Furnace Time Series. (A) Actual and prediction values. (B) Prediction error.

To compare these results with existing models that have been applied to the same process, the root mean square error (RMSE) was used:

$$\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{k=1}^{N}(y-y_d)^2}. \tag{25}$$

Table 1 shows a comparison of several different models. The AFWNN model with 66 parameters gives the third best prediction results. However, the AFWNN models achieve a degree of accuracy that is only about average relative to the other models. This may be caused by overtraining; the training data, which consist of only 200 observations, may be too small to represent the general characteristics of the gas furnace.

Table 1

Comparison of Different Models for Box Jenkins Time Series.

| Model | Number of Parameters | RMSE Training | RMSE Testing |
|---|---|---|---|
| Tong’s model [32] | – | – | 0.685 |
| Pedrycz’s model [24] | – | – | 0.566 |
| Xu’s model [36] | – | – | 0.573 |
| Sugeno’s model [28] | – | – | 0.596 |
| Surmann’s model [29] | – | – | 0.400 |
| FuNN model [15] | – | – | 0.0226 |
| HyFIS model [16] | – | – | 0.0205 |
| Neural tree model [6] | – | 0.0258 | 0.0265 |
| WNN [5] + gradient | 40 | 0.08831 | 0.084 |
| WNN [5] + hybrid | 40 | 0.08485 | 0.081 |
| LLWNN [5] + gradient | 56 | 0.01581 | 0.01643 |
| LLWNN [5] + hybrid | 56 | 0.01095 | 0.01378 |
| Recurrent ANFIS [31] | – | 0.006 | 0.019 |
| TNFIS [7] | 43 | 0.0245 | 0.0230 |
| AWN [23] | 39 | 0.01909 | 0.03084 |
| AFWNN | 32 | 0.0191 | 0.0318 |
| AFWNN | 66 | 0.0187 | 0.0304 |

4.2 Prediction of Sunspot Number Time Series

In the second test, we apply the AFWNN model to the Wolfer sunspot data, which consist of 300 annual values from 1700 to 1999. The objective here is to use the AFWNN model to produce one-step-ahead predictions. As is common in the literature on sunspot forecasting, the data set is divided into three parts. The data points between 1700 and 1920 were used for training the models, and the data points for the years 1921–1955 and 1956–1979 form the first and second test sets, respectively. The values y(t − 4), y(t − 3), y(t − 2), and y(t − 1) were used as inputs to our models to predict the output y(t). Two membership functions were selected for each input, so there are a total of 16 rules in each model. These models were trained for 1000 epochs. The normalized mean square error (NMSE), given by

$$\mathrm{NMSE}=\frac{\sum_{k=1}^{N}(y-y_d)^2}{\sum_{k=1}^{N}(y_d-\bar{y}_d)^2}, \tag{26}$$

is used to compare the proposed AFWNNs with other models, where

$$\bar{y}_d=\frac{1}{N}\sum_{k=1}^{N}y_d. \tag{27}$$
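In code, eqs. (26) and (27) reduce to a few lines (a small sketch of ours):

```python
import numpy as np

def nmse(y, yd):
    """Normalized MSE of eqs. (26)-(27): squared error relative to target variance."""
    y, yd = np.asarray(y), np.asarray(yd)
    return np.sum((y - yd) ** 2) / np.sum((yd - yd.mean()) ** 2)
```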

Table 2 shows the comparison of different models for the sunspot prediction problem. Figure 3 depicts the actual time series, the prediction results of an AFWNN, and the prediction error values. It is clearly seen that the AFWNN model provides an excellent representation for the sunspot time series.

Figure 3: AFWNN Model Results for Sunspot Time Series. (A) Actual and prediction values. (B) Prediction error.

Table 2

Comparison of Different Models for Sunspot Time Series Prediction.

| Model | Number of Parameters | NMSE Training | NMSE Testing 1 | NMSE Testing 2 |
|---|---|---|---|---|
| Tong and Lim [33] | 16 | 0.097 | 0.097 | 0.28 |
| Weigend [35] | 43 | 0.082 | 0.086 | 0.35 |
| Svarer [30] | 12–16 | 0.090 | 0.082 | 0.35 |
| Transversal net [22] | 14 | 0.0987 | 0.0971 | 0.3724 |
| Recurrent net [22] | 22 | 0.1006 | 0.0972 | 0.4361 |
| RFNN [2] | – | – | 0.074 | 0.21 |
| ANFIS [13] | 104 | 0.0550 | 0.1915 | 0.4068 |
| AFWNN | 208 | 0.0712 | 0.0849 | 0.2537 |

As seen from Table 2, the performance of the proposed AFWNN model on the first test sample is worse than that of some of the existing models; however, it performs better on the second test sample. Although the AFWNN model was derived from the conventional ANFIS, it is also clear that the AFWNN model is more accurate than the ANFIS, even though it involves more free parameters. This is because the wavelet scaling functions capture the transients in the sunspot data more successfully than the Gaussian functions used in the ANFIS.

4.3 Prediction of Mackey Glass Time Series

Chaos and fractal theory have been extensively studied in the past decades [18, 21], so it is of interest to test the predictability of fractal time series. Accordingly, the Mackey Glass time series was chosen as the last simulation example. The Mackey Glass time series is a benchmark chaotic time series generated by the following delay differential equation:

$$\frac{dx}{dt}=\frac{0.2\,x(t-\tau)}{1+x^{10}(t-\tau)}-0.1\,x(t). \tag{28}$$

For τ = 17, the system’s response is chaotic, and the simulation data are obtained using the initial condition x(0) = 1.2. As in Jang [13], 1000 {xd, yd} input-output data pairs were extracted from the Mackey Glass time series x(t), where t = 118 to 1117, xd = [x(t − 18), x(t − 12), x(t − 6), x(t)], and yd = x(t + 6). The first 500 data points were used for training the AFWNN model, and the remaining 500 points were used for validating the identified model. In all, 16 fuzzy rules were generated using two membership functions for each input variable. The models were trained for 5000 epochs. Table 3 shows the performance comparison of the AFWNN model with different models. Figure 4 shows the actual time series with the output of the AFWNN and the prediction error.
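The paper does not state how eq. (28) was integrated; one common way to generate the sampled series (a sketch assuming a simple Euler scheme with step dt and zero history before t = 0) is:

```python
import numpy as np

def mackey_glass(n_samples=1200, tau=17, dt=0.1, x0=1.2):
    """Integrate eq. (28) with Euler steps and sample at t = 0, 1, 2, ..."""
    steps_per_sample = int(round(1.0 / dt))
    total = n_samples * steps_per_sample
    delay = int(round(tau / dt))
    x = np.zeros(total + 1)
    x[0] = x0
    for k in range(total):
        x_tau = x[k - delay] if k >= delay else 0.0   # history assumed 0 before t = 0
        x[k + 1] = x[k] + dt * (0.2 * x_tau / (1.0 + x_tau**10) - 0.1 * x[k])
    return x[::steps_per_sample][:n_samples]
```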

Figure 4: AFWNN Model Results for Mackey Glass Time Series. (A) Actual and prediction values. (B) Prediction error.

Table 3

Comparison of the Training and the Testing Performance of Different Models for Mackey Glass Time Series.

| Model | Number of Parameters | RMSE Training | RMSE Testing |
|---|---|---|---|
| Auto-regressive modelᵃ | – | – | 0.19 |
| Cascade correlation NNᵃ | – | – | 0.06 |
| Backpropagation NNᵃ | – | – | 0.02 |
| Sixth-order polynomialᵃ | – | – | 0.04 |
| Linear prediction methodᵃ | – | – | 0.55 |
| Product T-norm [34] | – | – | 0.09 |
| Classical RBF (with 23 neurons) [8] | – | – | 0.0114 |
| PG-RBF network [25] | – | – | 0.0028 |
| Genetic algorithm and fuzzy system [17] | – | – | 0.049 |
| Neural tree model [6] | – | – | 0.0069 |
| WNN [5] + gradient | 90 | 0.0067 | 0.0071 |
| WNN [5] + hybrid | 90 | 0.0056 | 0.0059 |
| LLWNN [5] + gradient | 110 | 0.0038 | 0.0041 |
| LLWNN [5] + hybrid | 110 | 0.0033 | 0.0036 |
| IT2FNN-3 [4] | – | – | 0.0020 |
| MSBFNN [10] | 178 | – | 0.0024 |
| SEIT2FNN [14] | – | – | 0.0034 |
| FLNFN-CCPSO [20] | – | 0.00827 | 0.00842 |
| NFIS-SEELA [19] | – | 0.00745 | 0.00747 |
| Recurrent ANFIS [31] | – | – | 0.0013 |
| ANFIS [13] | 104 | 0.00160 | 0.00156 |
| AWN [23] | 96 | 0.00183 | 0.00178 |
| AFWNN | 208 | 0.00133 | 0.00113 |

ᵃResults are taken from Chen et al. [5].

The second column of Table 3 shows the number of parameters. The number of parameters in the AFWNN is high compared with the alternative models. Nevertheless, as the RMSE columns demonstrate, the AFWNN is also significantly more accurate. For this time series, the improvement in accuracy is in the range of 80%.

5 Conclusion

In this article, we introduced an AFWNN model and applied it to three well-known time series prediction problems. In all three cases, the AFWNN demonstrated both high approximation accuracy and good generalization performance. In principle, the AFWNN should be able to approximate any nonlinear process. The impressive generalization capability of AFWNN models derives primarily from the use of wavelets and their ability to localize in both the time and frequency domains. Because of these capabilities, the AFWNN should be able to achieve greater accuracy in training over existing data sets and also reduce the impact of random disturbances in forecasts. Our ongoing research will run further tests on time series to determine whether this is the case. It would be fruitful for further research to investigate the effects of other T-norm operators and fuzzy inference mechanisms on the performance of these models. Another research topic is the application of the proposed FWNN models to popular problems in speech and image processing, financial data analysis and prediction, and other system identification and control applications.


Corresponding author: Yusuf Oysal, Faculty of Engineering, Department of Computer Engineering, Anadolu University, 26555 Eskişehir, Turkey, e-mail:

Bibliography

[1] R. H. Abiyev and O. Kaynak, Fuzzy wavelet neural networks for identification and control of dynamic plants – a novel structure and comparative study, IEEE Trans. Ind. Electron. 55 (2008), 3133–3140. doi: 10.1109/TIE.2008.924018.

[2] R. A. Aliev, B. G. Guirimov, B. Fazlollahi and R. R. Aliev, Evolutionary algorithm-based learning of fuzzy neural networks. Part 2: Recurrent fuzzy neural networks, Fuzzy Sets Syst. 160 (2009), 2553–2566. doi: 10.1016/j.fss.2008.12.018.

[3] G. E. P. Box and G. M. Jenkins, Time series analysis, forecasting and control, Holden Day, San Francisco, 1976.

[4] J. R. Castro, O. Castillo, P. Melin and A. Rodríguez-Díaz, A hybrid learning algorithm for a class of interval type-2 fuzzy neural networks, Inf. Sci. 179 (2009), 2175–2193. doi: 10.1016/j.ins.2008.10.016.

[5] Y. Chen, B. Yang and J. Dong, Time-series prediction using a local linear wavelet neural network, Neurocomputing 69 (2006), 449–465. doi: 10.1016/j.neucom.2005.02.006.

[6] Y. Chen, B. Yang and J. Dong, Nonlinear system modeling via optimal design of neural trees, Int. J. Neural Syst. 14 (2004), 125–137. doi: 10.1142/S0129065704001905.

[7] E.-Y. Cheu, H.-C. Quek and S.-K. Ng, TNFIS: tree-based neural fuzzy inference system, IEEE Int. Joint Conf. Neural Netw. (2008), 398–405.

[8] K. B. Cho and B. H. Wang, Radial basis function based adaptive fuzzy systems and their application to system identification and prediction, Fuzzy Sets Syst. 83 (1995), 325–339.

[9] P. E. Gill, W. Murray and M. H. Wright, Practical optimization, Academic Press Ltd., London, 1993.

[10] S. Hengjie, M. Chunyan, S. Zhiqi, M. Yuan and B.-S. Lee, A fuzzy neural network with fuzzy impact grades, Neurocomputing 72 (2009), 3098–3122. doi: 10.1016/j.neucom.2009.03.009.

[11] D. W. C. Ho, P.-A. Zhang and J. Xu, Fuzzy wavelet networks for function learning, IEEE Trans. Fuzzy Syst. 9 (2001), 200–211. doi: 10.1109/91.917126.

[12] S. Horikawa, T. Furuhashi and Y. Uchikawa, On fuzzy modeling using fuzzy neural networks with the back-propagation algorithm, IEEE Trans. Neural Netw. 3 (1992), 801–806. doi: 10.1109/72.159069.

[13] J. S. R. Jang, ANFIS: adaptive-network-based fuzzy inference systems, IEEE Trans. Syst. Man Cybern. 23 (1993), 665–685. doi: 10.1109/21.256541.

[14] C. F. Juang and Y.-W. Tsao, A self-evolving interval type-2 fuzzy neural network with online structure and parameter learning, IEEE Trans. Fuzzy Syst. 16 (2008), 1411–1424. doi: 10.1109/TFUZZ.2008.925907.

[15] N. K. Kasabov, J. Kim, M. J. Watts and A. R. Gray, FuNN/2 – a fuzzy neural network architecture for adaptive learning and knowledge acquisition, Inf. Sci. 101 (1997), 155–175. doi: 10.1016/S0020-0255(97)00007-8.

[16] J. Kim and N. Kasabov, HyFIS: adaptive neuro-fuzzy inference systems and their application to nonlinear dynamical systems, Neural Netw. 12 (1999), 1301–1319. doi: 10.1016/S0893-6080(99)00067-2.

[17] D. Kim and C. Kim, Forecasting time series with genetic fuzzy predictor ensembles, IEEE Trans. Fuzzy Syst. 5 (1997), 523–535. doi: 10.1109/91.649903.

[18] M. Li, Fractal time series – a tutorial review, Math. Prob. Eng. 2010 (2010), Article ID 157264, 26 pages. doi: 10.1155/2010/157264.

[19] C.-J. Lin, C.-H. Chen and C.-T. Lin, Efficient self-evolving evolutionary learning for neurofuzzy inference systems, IEEE Trans. Fuzzy Syst. 16 (2008), 1476–1490. doi: 10.1109/TFUZZ.2008.2005935.

[20] C.-J. Lin, C.-H. Chen and C.-T. Lin, A hybrid of cooperative particle swarm optimization and cultural algorithm for neural fuzzy networks and its prediction applications, IEEE Trans. Syst. Man Cybern. Pt. C Appl. Rev. 39 (2009), 55–68. doi: 10.1109/TSMCC.2008.2002333.

[21] Z. Liu, Chaotic time series analysis, Math. Prob. Eng. 2010 (2010), Article ID 720190, 31 pages. doi: 10.1155/2010/720190.

[22] J. R. McDonnell and D. Waagen, Evolving recurrent perceptrons for time-series modeling, IEEE Trans. Neural Netw. 5 (1994), 24–38. doi: 10.1109/72.265958.

[23] Y. Oysal and S. Yilmaz, An adaptive wavelet network for function learning, Neural Comput. Appl. 19 (2010), 383–392. doi: 10.1007/s00521-009-0297-4.

[24] W. Pedrycz, An identification algorithm in fuzzy relational systems, Fuzzy Sets Syst. 13 (1984), 153–167. doi: 10.1016/0165-0114(84)90015-0.

[25] I. Rojas, H. Pomares, J. L. Bernier, J. Ortega, B. Pino, F. J. Pelayo and A. Prieto, Time series analysis using normalized PG-RBF network with regression weights, Neurocomputing 42 (2002), 167–285. doi: 10.1016/S0925-2312(01)00338-1.

[26] R. Sanner and J.-J. E. Slotine, Gaussian networks for direct adaptive control, IEEE Trans. Neural Netw. 3 (1992), 837–863.

[27] S. Srivastava, M. Singh, M. Hanmandlu and A. N. Jha, New fuzzy wavelet neural networks for system identification and control, Appl. Soft Comput. 6 (2005), 1–17. doi: 10.1016/j.asoc.2004.10.001.

[28] M. Sugeno and T. Yasukawa, Linguistic modeling based on numerical data, in: Proc. IFSA’91, pp. 234–247, Belgium, 1991.

[29] H. Surmann, A. Kanstein and K. Goser, Self-organizing and genetic algorithms for an automatic design of fuzzy control and decision systems, in: Proc. EUFIT’93, pp. 1079–1104, Aachen, Germany, 1993.

[30] C. Svarer, L. K. Hansen and J. Larsen, On design and evaluation of tapped-delay neural network architectures, in: Proc. IEEE Int. Conf. Neural Netw., San Francisco, 1992.

[31] H. Tamura, K. Tanno, H. Tanaka, C. Vairappan and Z. Tang, Recurrent type ANFIS using local search technique for time series prediction, in: IEEE Asia Pacific Conference on Circuits and Systems, pp. 380–383, Macao, 2008. doi: 10.1109/APCCAS.2008.4746039.

[32] R. M. Tong, The evaluation of fuzzy models derived from experimental data, Fuzzy Sets Syst. 4 (1980), 1–12. doi: 10.1016/0165-0114(80)90059-7.

[33] H. Tong and K. S. Lim, Threshold autoregression, limit cycles and cyclical data, J. R. Stat. Soc. B 42 (1980), 245–292.

[34] L. X. Wang and J. M. Mendel, Generating fuzzy rules by learning from examples, IEEE Trans. Syst. Man Cybern. 22 (1992), 1414–1427. doi: 10.1109/21.199466.

[35] A. S. Weigend, D. E. Rumelhart and B. A. Huberman, Predicting the future: a connectionist approach, Tech. Rep. Stanford-PDP-90-01 or PARC-SSL-90-20, 1990. doi: 10.1142/S0129065790000102.

[36] C. W. Xu, Fuzzy model identification and self-learning for dynamic systems, IEEE Trans. Syst. Man Cybern. 17 (1987), 683–689. doi: 10.1109/TSMC.1987.289361.

[37] S. Yilmaz and Y. Oysal, Fuzzy wavelet neural network models for prediction and identification of dynamical systems, IEEE Trans. Neural Netw. 21 (2010), 1599–1609. doi: 10.1109/TNN.2010.2066285.

[38] M. Zekri, S. Sadri and F. Sheikholeslam, Adaptive fuzzy wavelet network control design for nonlinear systems, Fuzzy Sets Syst. 159 (2008), 2668–2695. doi: 10.1016/j.fss.2008.02.008.

Received: 2013-9-6
Published Online: 2013-12-14
Published in Print: 2014-6-1

©2014 by Walter de Gruyter Berlin/Boston

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
