Discussions and Replies

Discussion of “Application of neural network and adaptive neuro-fuzzy inference systems for river flow prediction”*

Pages 1453-1454 | Published online: 29 Nov 2010

Pramanik & Panda (2009) examined the potential of neural network and adaptive neuro-fuzzy approaches for predicting river flows. The discusser would like to present the following points, which the authors and other researchers may wish to consider.

  1. The authors used the standard back-propagation algorithm, gradient descent (GD), for training the adaptive neuro-fuzzy inference system (ANFIS). The GD is an old technique whose drawbacks are well documented (see Haykin, 1999). The Levenberg-Marquardt (LM), conjugate gradient (CG) and gradient descent with momentum and adaptive learning rate (GDX) algorithms are faster and more effective than the GD. The discusser wonders why the authors used the GD algorithm for the determination of the premise parameters of the ANFIS. If the LM, CG or GDX algorithms had been used, better flow estimates could probably have been obtained from the ANFIS.
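The GDX scheme recommended above (momentum plus an adaptive learning rate) can be sketched in a few lines. This is a minimal numpy illustration, not code from the original study; the rate-adaptation constants are assumptions that mirror the usual MATLAB traingdx defaults:

```python
import numpy as np

def gdx_train(w0, grad_fn, loss_fn, lr=0.01, momentum=0.9,
              lr_inc=1.05, lr_dec=0.7, max_perf_inc=1.04, epochs=100):
    """Minimal GDX sketch: gradient descent with momentum, where the
    learning rate grows after a successful epoch and shrinks (with the
    step rejected) when the loss inflates by more than max_perf_inc."""
    w = np.asarray(w0, float)
    velocity = np.zeros_like(w)
    prev_loss = loss_fn(w)
    for _ in range(epochs):
        velocity_new = momentum * velocity - lr * grad_fn(w)
        w_new = w + velocity_new
        new_loss = loss_fn(w_new)
        if new_loss > prev_loss * max_perf_inc:
            lr *= lr_dec                  # step too aggressive: discard it
            velocity = np.zeros_like(w)
        else:
            w, velocity, prev_loss = w_new, velocity_new, new_loss
            lr *= lr_inc                  # reward progress with a larger rate
    return w

# Illustrative use on a simple quadratic, loss = ||w||^2, gradient = 2w:
w_fit = gdx_train(np.array([3.0, -2.0]),
                  grad_fn=lambda w: 2 * w,
                  loss_fn=lambda w: float(w @ w))
```

The accept/reject rule is what distinguishes GDX from plain GD with momentum: steps that inflate the loss beyond the tolerance are discarded and the learning rate is reduced.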

  2. The authors have not provided any information about the number of epochs used for training the ANN and ANFIS models. The control parameters of the GDX and CGF algorithms, e.g. the momentum rate and the learning rate increment, were also not given in the paper. Using an unnecessarily high number of iterations causes over-learning of the models and worsens the estimates in the testing period. Moreover, an ANN is very sensitive to the selected initial weight values and may perform quite differently across repeated runs in MATLAB. The discusser wonders how the authors coped with this problem.
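The initial-weight sensitivity noted in this point is easy to reproduce: training the same small network from different random initializations yields different final errors. The toy network and data below are purely illustrative (not the authors' model); a common remedy is to train several runs and keep the best, or to average an ensemble:

```python
import numpy as np

def train_mlp(X, y, hidden=4, epochs=500, lr=0.05, seed=0):
    """Tiny one-hidden-layer network trained by plain gradient descent;
    returns the final training RMSE for one random initialization."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 1, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1));          b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        err = (h @ W2 + b2) - y
        gW2 = h.T @ err / len(X); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)          # backprop through tanh
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return float(np.sqrt(np.mean((pred - y) ** 2)))

rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, (200, 3))
y = np.sin(X.sum(axis=1, keepdims=True))          # synthetic target
rmses = [train_mlp(X, y, seed=s) for s in range(5)]
best = min(rmses)   # different seeds give different final RMSEs
```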

  3. It can be inferred from Tables 2 and 3 that the testing performances of the ANN and ANFIS models are better than their training performances in terms of the RMSE values. However, the ANN and ANFIS models were trained according to the RMSE criterion; for this reason, the models would be expected to have lower RMSE values in the training/calibration period than in the test period. The results given in Tables 2 and 3 imply that the ANN and ANFIS models could not learn the investigated phenomenon (the river flow process), since they were not well trained.
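The comparison made in this point amounts to a simple diagnostic: compute the RMSE over both periods and flag the anomaly when the test error is lower than the training error. The numbers below are illustrative only, not taken from the paper's tables:

```python
import numpy as np

def rmse(obs, sim):
    """Root mean square error between observed and simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

# Illustrative flow values (m3/s), not the paper's data:
train_rmse = rmse([10, 20, 30], [12, 19, 33])
test_rmse = rmse([15, 25], [11, 31])

# A model calibrated on the RMSE criterion is normally expected to fit
# its training period at least as well as the unseen test period:
suspicious = test_rmse < train_rmse
```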

  4. In the second paragraph of p. 255, Pramanik & Panda (2009) claim that: “… Model 4 produced better results during model training but failed to yield better results in testing. This may be due to over-fitting of the training data sets and poor generalization of the input–output data …” According to the discusser, this statement is not correct. It is obvious from Table 2(a) that Model 4 produced better results during model testing, not training. The values of the RMSE criterion in Table 4 indicate that Model 4 under-fitted (not over-fitted) the training data sets.

  5. Pramanik & Panda (2009) found that the CGF algorithm performs better than the LM. To the best of the discusser's knowledge, however, the LM algorithm should be better than the CGF, as can also be seen from the related literature. Cigizoglu & Kisi (2005) compared the accuracy of the LM, CGF and GD algorithms in daily river flow prediction and found that the LM algorithm has a shorter training duration and a more satisfactory performance than the CGF and GD. Kisi & Uncuoglu (2005) compared the LM, resilient back-propagation (RB) and CGF algorithms in two case studies (one of which was daily streamflow prediction) and found that the LM performs better than the CGF. Kisi (2006) indicated that the LM algorithm is faster and more powerful than the CGF algorithm in evapotranspiration estimation. Kisi (2007) investigated the ability of four different ANN training algorithms, LM, cascade correlation (CC), CGF and GD, in daily flow prediction and found that the LM algorithm performs better than the other three.

  6. The critical issue in training an ANN or ANFIS model is avoiding over-fitting, as it reduces the capacity for generalization. If too many parameters are used, the network may over-fit the data; in contrast, if too few parameters are included, it might not be possible to fully capture the signal and variance of a complex data set. In Pramanik & Panda (2009), the data for the monsoon periods of the years 1997–2001 (450 daily flow values) were used for the training of the ANN and ANFIS models. Two generalized bell-shaped membership functions (MFs) were used for each input of the most complex ANFIS model (Model 5), which has seven inputs, as stated by the authors in the section “Consideration of tributary inflow on model performance” on p. 258. Each MF has three parameters, as given in equation (4); thus, the model has 7 × 2 × 3 = 42 premise parameters. In addition, the model typically has 2^7 = 128 rules and corresponding linear equations, each comprising eight parameters, so the model has 128 × 8 = 1024 consequent parameters. In total, 42 + 1024 = 1066 parameters are included in the most complex ANFIS model. The 450 training data, far fewer than the number of calibrated parameters, do not seem to be enough to avoid over-fitting. This needs some explanation.
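The parameter counting in this point can be made explicit with a short script (note that 128 × 8 = 1024, so the full-grid Sugeno model carries over a thousand trainable parameters against only 450 training data):

```python
def anfis_param_count(n_inputs, mfs_per_input, params_per_mf=3):
    """Count the trainable parameters of a first-order Sugeno ANFIS with
    a full grid of rules, following the discusser's calculation. Each
    generalized bell MF has 3 premise parameters; each rule carries a
    linear consequent with n_inputs coefficients plus a constant."""
    premise = n_inputs * mfs_per_input * params_per_mf
    rules = mfs_per_input ** n_inputs
    consequent = rules * (n_inputs + 1)
    return premise, rules, consequent, premise + consequent

# Model 5: seven inputs, two bell-shaped MFs per input
premise, rules, consequent, total = anfis_param_count(7, 2)
```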

  7. In the fourth paragraph of p. 256, Pramanik & Panda (2009) state that: “… The overestimation of minimum values may be due to the fact that the minimum value of the output training vector was higher than the corresponding value in the validation data set …” This problem could probably have been overcome by using a more convenient normalization range. The input data were normalized to fall in the range [–1, 1] for the ANN models and [0, 1] for the ANFIS in the study of Pramanik & Panda (2009). Cigizoglu (2003) showed that scaling input data between 0.2 and 0.8 gives the ANNs the flexibility to predict beyond the training range. Using the [0.2, 0.8] or [–0.8, 0.8] normalization ranges, the ANN and ANFIS models could probably have given better estimates for the peak and low river flows.
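The remedy suggested here is ordinary min-max scaling to a narrower target range. In the minimal sketch below (function name illustrative), the training-period minimum and maximum are reused for the test data, as is standard, so test values beyond the training range still map inside [0, 1] rather than saturating the network:

```python
import numpy as np

def scale_to_range(x, lo=0.2, hi=0.8, x_min=None, x_max=None):
    """Min-max scale x into [lo, hi]. Pass the training-period min/max
    explicitly so the same transform is applied to the test data."""
    x = np.asarray(x, float)
    x_min = x.min() if x_min is None else x_min
    x_max = x.max() if x_max is None else x_max
    return lo + (hi - lo) * (x - x_min) / (x_max - x_min)

# Training flows span exactly [0.2, 0.8] after scaling:
train = np.array([100.0, 200.0, 400.0])
scaled = scale_to_range(train)
# Test flows outside the training range (50 and 500 m3/s) land at 0.1
# and 1.0: still inside the network's effective output band, which is
# the headroom Cigizoglu (2003) exploits:
test_scaled = scale_to_range(np.array([50.0, 500.0]), x_min=100.0, x_max=400.0)
```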

  8. In the last paragraph of the section “Model Development and Testing”, Pramanik & Panda (2009) state that “A marginal improvement in the values of RMSE was noticed for the ANN training algorithms, particularly in GDX and CGF cases; slightly better results were obtained in the case of ANFIS.” However, from Fig. 7, a marginal improvement in the values of RMSE is noticed for the ANN_LM (not for the GDX or CGF) and ANFIS models.

Notes

*Pramanik, N. & Panda, R. K. (2009) Application of neural network and adaptive neuro-fuzzy inference systems for river flow prediction. Hydrol. Sci. J. 54(2), 247–260.

REFERENCES

  • Cigizoglu, H. K. (2003) Estimation, forecasting and extrapolation of flow data by artificial neural networks. Hydrol. Sci. J. 48(3), 349–361.
  • Cigizoglu, H. K. & Kisi, O. (2005) Flow prediction by three back propagation techniques using k-fold partitioning of neural network training data. Nordic Hydrol. 36(1), 49–64.
  • Haykin, S. (1999) Neural Networks – A Comprehensive Foundation, 2nd edn. Upper Saddle River, NJ: Prentice-Hall Inc.
  • Kisi, O. (2006) Evapotranspiration estimation using feed-forward neural networks. Nordic Hydrol. 37(3), 247–260.
  • Kisi, O. (2007) Streamflow forecasting using different artificial neural network algorithms. J. Hydrol. Engng ASCE 12(5), 532–539.
  • Kisi, O. & Uncuoglu, E. (2005) Comparison of three backpropagation training algorithms for two case studies. Indian J. Engng Mater. Sci. 12, 443–450.
  • Pramanik, N. & Panda, R. K. (2009) Application of neural network and adaptive neuro-fuzzy inference systems for river flow prediction. Hydrol. Sci. J. 54(2), 247–260.
