Discussions and Replies

Reply to the Discussion of “Application of neural network and adaptive neuro-fuzzy inference systems for river flow prediction” by O. Kisi

N. Pramanik & R. K. Panda
Pages 1455-1456 | Published online: 29 Nov 2010

The authors wish to thank Dr Kisi for his discussion paper (Kisi, 2010) and for the interest shown in the results and research methodology of our article (Pramanik & Panda, 2009). In this reply, the authors address the discusser's comments point by point, with paragraph numbers corresponding to those in the discussion.

  1. The parameters of the membership functions of the fuzzy inference system (FIS) are optimized using the hybrid learning algorithm, an efficient and commonly used training algorithm reported to produce promising results in several previous studies (Chang & Chang, 2006; Firat & Gungor, 2007; Elabd & Schlenkhoff, 2009). The hybrid learning algorithm combines the gradient descent technique with the least-squares technique: gradient descent is used to optimize the nonlinear premise parameters (ai, bi, ci), whereas the least-squares method is employed to identify the linear consequent parameters (pi, qi, ri). Moreover, the hybrid learning approach is beneficial because convergence is much faster, since the dimensionality of the search space is reduced compared with the pure back-propagation method used in the ANN technique. In addition to the gradient descent algorithm, other learning algorithms such as LM, CG and CGX can be used to update the ANFIS parameter values for better results, as suggested by the discusser; however, the authors did not attempt these algorithms for ANFIS modelling in the present study. A minimal sketch of the hybrid training set-up is given after these numbered replies.

  2. In Pramanik & Panda (2009), ANN model training was carried out using the trial-and-error method. About one hundred epochs were found sufficient to train the ANN models and attain close agreement between the computed error and the preset error goal. Since the results of the ANN models are not reproducible, owing to the random initialization of the weights, the best results were selected on the basis of the highest modelling efficiency and the lowest RMSE value. The learning rate and momentum constant used in our study to produce the best results were 0.005 and 0.9, respectively; the corresponding training configuration is sketched after these numbered replies.

  3. The authors do not agree with the discusser's view that the RMSE obtained during testing cannot be lower than the RMSE of training when training is carried out using the RMSE criterion. A good number of past ANN applications show testing RMSE values lower than the training RMSE, even though the RMSE criterion was used to optimize the ANN architectures (Nayak et al., 2004; Chen et al., 2006; Jaksa et al., 2008). The authors wish to stress that the RMSE values obtained during model training and testing depend on the nature of the data sets used for fitting. In Pramanik & Panda (2009), the lower RMSE values during testing may have been due to the satisfactory training attained for the kind of training data sets used, and thus it cannot be said that the trained network failed to capture the river flow processes.

  4. In the paper under discussion, the comparisons among all the models were based on the RMSE values. In this regard, according to the discusser, the performance of Model 4 is the best during testing but not during training. However, in the published paper, the authors mentioned that the performance of Model 4 during testing is better than during training on the basis of the modelling efficiency. Model 4 can therefore be said to be well fitted, or at most under-fitted, but cannot be called over-fitted as pointed out by the discusser. We agree with the discusser's remark on the statement "Model 4 produced better results during model training but failed to yield better results in testing. This may be due to over-fitting of the training data sets and poor generalization of the input-output data …" and would request readers to refer to the discussion paper.

  5. The performance of the CGF algorithm in this study is slightly better than that of LM; this is due to the algorithm's search technique, which searches along conjugate directions with a faster rate of convergence. For this case study, CGF yielded results comparable with those of LM, with a slightly better modelling efficiency. We agree with the discusser's view that in most, though not all, applications the LM algorithm has shown the best performance (Cigizoglu & Kisi, 2005; Kisi, 2005, 2007).

  6. The study under discussion was the authors' first application of ANFIS to river flow prediction. The authors tried to obtain the best results by training the ANFIS architecture while deleting certain rules in each run. Over-fitting is a common issue in data-driven modelling, and attempts were made to overcome it by using only selected rules: in each run, the rule-deletion option of the Matlab GUI was used, so that fewer than 128 rules were retained in the ANFIS parameter optimization. Therefore, we consider the presented results to be the optimum outcome of the ANFIS model training, and expect the ANFIS architecture to be least affected by over-fitting.

  7. We strongly agree with the discusser's view that normalizing the data sets in the range 0.2 to 0.8 could improve model training. Since the study was the authors' first attempt to use ANNs for river flow prediction, the standard Matlab function "premnmx" was used to scale the data to the range –1 to +1; both scaling options are sketched after these numbered replies. However, in subsequent studies, the authors have investigated the effect of different data-scaling ranges on model prediction accuracy (Singh et al., 2009; Panda et al., 2010).

  8. The authors used the term "marginal" to indicate the smallest improvement in the RMSE values in the case of GDX and CGF. "Improvement" in the RMSE in the original published paper was used in a positive sense, meaning that there is only a slight decrease in the RMSE values in the case of GDX and CGF, whereas the decrease is more significant in the case of LM and ANFIS.
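Referring to point 1: the following is a minimal sketch of how the hybrid learning rule (least squares for the linear consequent parameters pi, qi, ri; gradient descent for the nonlinear premise parameters ai, bi, ci) can be invoked through the Matlab Fuzzy Logic Toolbox functions genfis1, anfis and evalfis available at the time. The data matrices, number of membership functions and epoch count shown here are illustrative assumptions, not the exact settings of the published study.

```matlab
% Sketch only: hybrid ANFIS training (illustrative settings, not the study's exact configuration).
trnData = [Xtrn Ytrn];            % training inputs in the first columns, target in the last
chkData = [Xchk Ychk];            % independent checking (testing) data

numMFs  = 2;                      % membership functions per input
mfType  = 'gbellmf';              % generalized bell MFs with premise parameters (a, b, c)
initFis = genfis1(trnData, numMFs, mfType);

epochs    = 100;
optMethod = 1;                    % 1 = hybrid (least squares + gradient descent), 0 = pure back-propagation
[fis, trnErr, ss, chkFis, chkErr] = anfis(trnData, initFis, epochs, [], chkData, optMethod);

Yhat = evalfis(Xchk, chkFis);     % predictions from the FIS with the lowest checking error
rmse = sqrt(mean((Yhat - Ychk).^2));
```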
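Referring to points 2 and 5: a minimal sketch of the trial-and-error ANN training described there, assuming the Neural Network Toolbox functions of that era (newff, train, sim) and the training functions trainlm (LM), traincgf (CGF) and traingdx (GDX). The network size and data variables are illustrative; the learning rate of 0.005 and momentum constant of 0.9 apply to the gradient-descent variant.

```matlab
% Sketch only: comparing LM, CGF and GDX training functions (illustrative network size and data).
algs = {'trainlm', 'traincgf', 'traingdx'};
bestRmse = Inf;

for k = 1:length(algs)
    net = newff(minmax(Ptrn), [8 1], {'tansig', 'purelin'}, algs{k});
    net.trainParam.epochs = 100;              % about 100 epochs found sufficient
    if strcmp(algs{k}, 'traingdx')
        net.trainParam.lr = 0.005;            % learning rate reported in the reply
        net.trainParam.mc = 0.9;              % momentum constant reported in the reply
    end

    net  = train(net, Ptrn, Ttrn);            % weights are randomly initialized, so runs differ
    Ytst = sim(net, Ptst);
    rmse = sqrt(mean((Ytst - Ttst).^2));

    if rmse < bestRmse                        % retain the run with the lowest testing RMSE
        bestRmse = rmse;
        bestNet  = net;
    end
end
```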
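Referring to point 7: a minimal sketch of the two scaling options, (a) the premnmx normalization to [-1, +1] used in the original paper and (b) a linear rescaling to [0.2, 0.8] as suggested by the discusser. Variable names are illustrative; inputs P and targets T are assumed to be arranged with one variable per row.

```matlab
% Sketch only: (a) premnmx scaling to [-1, +1]; (b) linear rescaling to [0.2, 0.8].
[Pn, minP, maxP, Tn, minT, maxT] = premnmx(P, T);   % option (a), one variable per row

lo = 0.2;  hi = 0.8;                                % option (b)
Pmin = repmat(min(P, [], 2), 1, size(P, 2));
Pmax = repmat(max(P, [], 2), 1, size(P, 2));
Ps   = lo + (hi - lo) .* (P - Pmin) ./ (Pmax - Pmin);
```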

REFERENCES

  • Chang, F. J. and Chang, Y. T. 2006. Adaptive neuro-fuzzy inference system for prediction of water level in reservoir. Adv. Water Resour., 29(1): 1–10.
  • Chen, S. H., Lin, Y. H., Chang, L. C. and Chang, F. J. 2006. The strategy of building a flood forecast model by neuro-fuzzy network. Hydrol. Processes, 20: 1525–1540.
  • Cigizoglu, H. K. and Kisi, O. 2005. Flow prediction by three back propagation techniques using k-fold partitioning of neural network training data. Nordic Hydrol., 36(1): 49–64.
  • Elabd, S. and Schlenkhoff, A. 2009. ANFIS and BP neural network for travel time prediction. World Academy of Science, Engineering and Technology, 57: 116–121.
  • Firat, M. and Güngör, M. 2007. River flow estimation using adaptive neuro fuzzy inference system. Mathematics and Computers in Simulation, 75(3-4): 87–96.
  • Jaksa, M. B., Maier, H. R. and Shahin, M. A. 2008. Future challenges for artificial neural network modelling in geotechnical engineering. In: 12th International Conference of the International Association for Computer Methods and Advances in Geomechanics (IACMAG), 1–6 October, Goa, India.
  • Kisi, O. 2005. Suspended sediment estimation using neuro-fuzzy and neural network approaches. Hydrol. Sci. J., 50(4): 683–696.
  • Kisi, O. 2007. Streamflow forecasting using different artificial neural network algorithms. J. Hydrol. Engng ASCE, 12(5): 532–539.
  • Kisi, O. 2010. Discussion of "Application of neural network and adaptive neuro-fuzzy inference systems for river flow prediction". Hydrol. Sci. J., 55(8): 1453–1454.
  • Kisi, O. and Uncuoglu, E. 2005. Comparison of three backpropagation training algorithms for two case studies. Indian J. Eng. Mater. Sci., 12: 443–450.
  • Nayak, P. C., Sudheer, K. P., Rangan, D. M. and Ramasastri, K. S. 2004. A neuro-fuzzy computing technique for modeling hydrological time series. J. Hydrol., 291(1-2): 52–66.
  • Panda, R. K., Pramanik, N. and Bala, B. 2010. Simulation of river stage using artificial neural network and MIKE 11 hydrodynamic model. Comput. Geosci., 36(6): 735–745.
  • Pramanik, N. and Panda, R. K. 2009. Application of neural network and adaptive neuro-fuzzy inference systems for river flow prediction. Hydrol. Sci. J., 54(2): 247–260.
  • Singh, A., Panda, R. K. and Pramanik, N. 2009. Appropriate data normalization range for daily river flow forecasting using an artificial neural network. In: Hydroinformatics in Hydrology, Hydrogeology and Water Resources (I. D. Cluckie, Y. Chen, V. Babovic, L. Konikow, A. Mynett, S. Demuth and D. Savic, eds), 51–57. Wallingford: IAHS Press, IAHS Publ. 331. http://iahs.info/redbooks/331.htm
