
A decomposition and multi-objective evolutionary optimization model for suspended sediment load prediction in rivers

Pages 1811-1829 | Received 19 Aug 2020, Accepted 02 Oct 2021, Published online: 18 Nov 2021

Abstract

Suspended sediment load (SSL) estimation is essential for both short- and long-term water resources management. Suspended sediment is an important factor in the service life of hydraulic structures such as dams. The aim of this research is to estimate SSL by coupling intrinsic time-scale decomposition (ITD) with two kinds of data-driven model (DDM), namely evolutionary polynomial regression (EPR) and the model tree (MT), at the Sarighamish and Varand Stations in Iran. Measured data, based on their lag times, are decomposed into several proper rotation components (PRCs) and a residual, which are then considered as inputs for the proposed model. Results indicate that the prediction accuracy of ITD-EPR is the best for both the Sarighamish (R2 = 0.92 and WI = 0.96) and Varand (R2 = 0.92 and WI = 0.93) Stations (WI is the Willmott index of agreement), while the standalone MT model performs poorly for these stations compared with the other approaches (EPR, ITD-EPR and ITD-MT), although its peak SSL values are approximately equal to those by ITD-EPR. Results of the proposed models are also compared with those of the sediment rating curve (SRC) method. The ITD-EPR predictions are markedly superior to those of the SRC method with respect to several conventional performance evaluation metrics.

1. Introduction

1.1. Background

Water is an essential substance for all creatures. Nowadays, the growing world population is directly dependent on water adequacy. Because of the different water distribution systems in various parts of the world, the idea of using reservoirs is highlighted and has great significance. Thus, reservoir capacity is a topic of the utmost importance, yet problems related to reservoir capacity remain serious challenges in this growing world (Cieśla et al., Citation2020). Suspended sediment load (SSL) in rivers, which is affected by hydrological and meteorological variables, is one of the environmental and hydrological issues in watersheds (Adnan et al., Citation2019). The sedimentation rate is one of the watershed management tools, and the volume of sediments also affects water quality (contaminant transport), aquatic animals’ health, ecological impacts, geomorphology, the design of channels and hydraulic structures, river bed sustainability, and dam and reservoir engineering (Khan et al., Citation2019; Kisi & Zounemat-Kermani, Citation2016). Furthermore, sediment rates affect the storage volume of dams, in which silting and erosion phenomena cause a reduction in capacity (owing to the increase of dead storage). The hydrology of the basin can be used to estimate the concentration of suspended sediment carried by rivers in the watershed (Nourani et al., Citation2019).

In general, numerical environmental modeling needs versatile dynamic modeling systems in which the generated models can follow the observed historical data closely. Therefore, bearing in mind the structure of hydrological data, which consist of both stochastic and deterministic behavior, hydrologists try to model hydrological variables in both physical and stochastic frameworks. For this purpose, there are many computational methods, such as data-driven models (DDMs), which are gateways to accurate environmental forecasting. Consequently, DDM techniques have vast application areas in environmental modeling and have been used by many scholars in recent years. Modeling and prediction of SSL is one of the essential issues in water resources engineering, and decision-makers use watershed sediment modeling globally. Precise SSL modeling, like modeling in other fields, depends on the significance of the data, the input parameters, the lag of the input variables and the data scale (hourly, daily, weekly, monthly and yearly). Rigorous estimation of SSL is fundamental to hydraulic and sediment engineering in a river basin (Zounemat-Kermani et al., Citation2020; Sharghi, Nourani, Najafi, & Gokcekus, Citation2019). The sediment rating curve (SRC) defines the relationship between streamflow discharge and sediment concentration. SSL time series are affected by different variables, including hydrological, morphological and meteorological variables, which render SSL data complex (Kisi & Yaseen, Citation2019). Additionally, owing to the complexity and nonlinearity of SSL time series, DDMs have delivered more efficient results than physical and empirical models. DDMs have long been associated with SSL modeling; they are able to learn the behavior of the input data and to provide fast computation and rigorous, flexible and accurate predictions.

1.2. Literature review: forecasting models

There is a rich literature reporting on SSL modeling with DDMs over the past few decades. Various models, such as artificial neural networks (ANNs) (Fard & Akbari-Zadeh, Citation2014; Kumar et al., Citation2019; Liu et al., Citation2019; Samet et al., Citation2019; Zounemat-Kermani et al., Citation2020), emotional ANNs (Sharghi, Nourani, Najafi, & Soleimani, Citation2019), adaptive neuro-fuzzy inference systems (ANFISs) (Adnan et al., Citation2019; Ehteram et al., Citation2019), genetic programming (GP) (Jaiyeola & Adeyemo, Citation2019; Khozani et al., Citation2020; Safari & Mehr, Citation2018), local weighted linear regression (LWLR) (Kisi & Ozkan, Citation2017), extreme learning machines (ELMs) (Ebtehaj, Bonakdari, & Shamshirband, Citation2016; Peterson et al., Citation2018; Roushangar et al., Citation2021), support vector machines (SVMs) (Ebtehaj, Bonakdari, Shamshirband, & Mohammadi, Citation2016; Meshram et al., Citation2020; Rahgoshay et al., Citation2019), multivariate adaptive regression splines (MARS) (Adnan et al., Citation2019), fuzzy c-means clustering techniques (Kisi & Zounemat-Kermani, Citation2016), etc., have been able to model and predict SSL. Zounemat-Kermani et al. (Citation2020) employed hybrids of ANFIS and SVR with genetic algorithms (GA-ANFIS and GA-SVR) for SSL and bedload (BL) prediction, and their results were compared with those of two customary models, namely SRC and MLR, at the Grande de Loíza River in Puerto Rico, USA. Utilizing discharge, SS and BL data on a daily scale, they found that the GA-ANFIS and GA-SVR models performed better than the standalone models. Rajaee (Citation2011) evaluated a pre-processing-based DDM (termed WANN) for predicting daily SSL in the Yadkin River at Yadkin College, North Carolina, USA, using 30 years of data from 1957 to 1987. His outcomes were compared with those of MLR and SRC models; based on his results, WANN was selected as the best performing model. Mirbagheri et al. (Citation2010) predicted SSL using three models, namely ANN, ANFIS and wavelet neuro-fuzzy (WNF), along with SRC. They utilized river discharge and SSC data on a daily scale obtained from Rio Rosario Station located in Hormigueros, Puerto Rico, USA, and found that WNF captured hysteresis phenomena well and outperformed ANN and SRC. Alizadeh et al. (Citation2017) employed an ensemble WANN for modeling and forecasting SSC one time step ahead at the Skagit River near Mount Vernon, Washington State, USA. They selected observed and forecasted SSC time-series data as input variables and found that each step-ahead WANN performed better than the previous one. Martins and Poleto (Citation2017) tested maximum entropy in modeling SSC with two data series (Coleman, Citation1986). They compared the results of the empirical maximum-entropy equations (Tsallis and Shannon entropy) with the Prandtl–von Kármán method and the Rouse equation, and found that the Tsallis and Shannon entropy approaches performed better in modeling SSC.

Owing to the nature of hydrological phenomena such as SSL, their behavior is generally characterized by highly non-stationary and nonlinear changes. Therefore, creating an accurate predictive model is highly challenging owing to this complexity. In this regard, DDMs that can create nonlinear relationships between inputs and outputs are considered. Although DDMs have been successfully applied to SSL prediction, they have some weaknesses. For example, some DDMs, such as ANN, ANFIS and SVM techniques, have unknown parameters that have a remarkable influence on their accuracy. Most studies in the literature use classical training algorithms for training these models; nevertheless, such models may become trapped in a local optimum. Moreover, some DDMs lack an explicit expression for use in practical tasks and may also produce uncertainties in their predicted values. These reasons have led to the development of equation-based models such as EPR and MT for the estimation of hydrological phenomena. In addition, owing to the nonlinearity and seasonality of time-series datasets, applying datasets directly to models may not provide significant insight into these phenomena. Hence, the employment of data pre-processing techniques, which can extract the embedded features of non-stationary and dynamical time-series signals, is highly recommended. Among the various pre-processing approaches, intrinsic time-scale decomposition, one of the noise-assisted data analysis approaches, is considered here for decomposing the input and output variables into a few PRCs, which can convert non-stationary signals into more stationary ones. Hence, a novel approach coupling equation-based models and pre-processing techniques is developed in this study to create a robust predictive model (Bonakdari et al., Citation2019; Rezaie-Balf et al., Citation2019).

1.3. Problem statement

SSL consists of fine particles, such as silt and clay, that are smaller than 63 µm in size. Under certain critical weather conditions (e.g. climate change), strong typhoons or stormwater runoff, extreme sediment transport occurs and particles are carried extensively from upstream areas into downstream plains and watersheds (Huang et al., Citation2019). Once the flow velocity decreases, these particles settle and accumulate in river beds or hydraulic structures such as channels, reservoirs, dams, etc. For instance, sediment deposition in dams causes a reduction in their active storage capacity (Adnan et al., Citation2019). SSL can also diminish soil quality and agricultural crop yield. Therefore, understanding sediment behavior and modeling and predicting SSL are significant steps in hydrological management.

All DDMs and regression models require knowledge of the exact structure of the input sediment data. Thus, attention has also been focused on the input data (pre-processing techniques) prior to modeling. Meanwhile, sediment data, like other hydrological data, have characteristics such as a stochastic nature, time- and frequency-domain behavior, anomalies, trends, seasonality and periodicities (Attar et al., Citation2020). Directly utilizing raw data for model generation causes uncertainties and errors, and also diminishes the accuracy of the model. Over the past few years, a growing interest in utilizing pre-processing techniques has been observed in hydrological studies, and these techniques have successfully enhanced model performance. In recent years, several signal processing techniques have emerged that can separate fluctuating signals into individual smaller sequences. These decomposition methods are adaptable and practical tools for analyzing nonlinear and complex data series such as SSL time series; they decompose the original data into a limited number of sub-series and a residual (Zeng et al., Citation2020). Principal component analysis (PCA) (Loska & Wiechuła, Citation2003), continuous wavelet transforms (CWTs) (Tiwari & Chatterjee, Citation2010), wavelet multi-resolution analysis (Nourani et al., Citation2014), maximum entropy (Singh & Krstanovic, Citation1987), singular spectrum analysis (Rocco S, Citation2013), and some noise-control techniques for time-series data such as empirical mode decomposition (EMD) (Kuai & Tsai, Citation2012), complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) (Adarsh & Reddy, Citation2015) and intrinsic time-scale decomposition (ITD) (Zhang et al., Citation2019) are some of the algorithms that have been used by researchers. ITD, first presented by Frei and Osorio, constructs the baseline through a piecewise linear function (Frei & Osorio, Citation2007). It is an adaptive decomposition technique that can represent the signal decomposition more faithfully, and it can rigorously control end effects at the signal endpoints. Because the ITD algorithm involves a single level of iteration, fast execution is another advantage. In ITD, the original data are divided into several (more than five) PRCs, ordered from high to low frequencies, plus a monotonic trend. These PRCs can then be used to analyze instantaneous information (Zeng et al., Citation2012).

1.4. Objectives and motivations

Real-world data such as SSL in rivers often have complex characteristics such as non-stationarity, nonlinearity, noise and limited dynamical information. Thus, high-quality and novel techniques are required for modeling these data, taking their characteristics into account. Today, there is significant development in the use of DDMs in real-world applications. In some cases, results from standalone DDMs have shown lower accuracy compared with those from hybrid models (such as ITD-DDMs) (Hassanpour et al., Citation2019). Recently, hybrid models (pre-processing-DDMs) have been found to give an acceptable level of accuracy and have become one of the most essential ideas in the hydrological modeling literature, including SSL prediction. The objective of the present study is to enhance the accuracy of selected novel DDMs with an innovative pre-processing technique in SSL prediction and modeling at two different stations in Iran, namely Sarighamish Station on the ZarrinehRoud River in the Saeenghale sub-basin and Varand Station on the Chahardangeh River in the Neka sub-basin. To the best knowledge of the authors, there is no related research on the use of the ITD pre-processing technique in hydrological problems, especially sediment transport problems.

2. Study area and data description

ZarrinehRoud and Chahardangeh Rivers are selected from two different sub-basins (Saeenghale and Sari) in north-western and northern Iran, respectively, with one hydrometric station on each river. It is worth mentioning that these two stations (Sarighamish and Varand) are selected, based on their basin areas, for evaluating the models in two important basins of Iran, namely the Urmia Lake basin (ULB) and the Mazandaran basin. Information about their geographical characteristics, including longitude, latitude, elevation and the related sub-basins of Sarighamish and Varand Stations, is given in Table 1. Moreover, the locations of the selected stations and key information about the input data (SSL and discharge), including statistical characteristics such as the mean, standard deviation, kurtosis, maximum, minimum, variance and skewness, are illustrated in Figure 1. Based on an analysis of the input data from these two rivers, the mean values of streamflow discharge and SSL at Sarighamish Station in both the training and testing stages are significantly greater than those at Varand Station.

Figure 1. Sarighamish and Varand Stations in selected sub-basins (Saeenghale and Sari).


Table 1. Geographical locations and characteristics of stations.

As stated above, the area of the Saeenghale sub-basin is 7160 km2, which is roughly six times larger than that of the Sari sub-basin (1198.3 km2). ULB is located in northwestern Iran (44°7′ to 47°53′E and 35°40′ to 38°30′N) and covers 51,876 km2, which amounts to 3.15% of the area of all basins in Iran. ULB consists of 24% land and 11% Urmia Lake. ZarrinehRoud River is one of the biggest rivers in the Urmia Lake basin and discharges into Urmia Lake (providing 24% of its total inflow). It is 302 km long, with an average annual water flow of 1583 MCM. The river originates in the Chehelcheshme Mountains, crosses the cities of Shahindezh, Miandoab and Keshavarz, and finally reaches the southern part of Urmia Lake. Based on the literature, the flow rate of ZarrinehRoud River varies from 10 to 120 m3/s over the year; this significant fluctuation in river flow depends on climatic conditions. There are also dissolved components in ZarrinehRoud River that lead to dissolved sediment loads. These data are measured by hydrological experts in the Urmia environmental and water organizations. Sarighamish is a hydrometric station located on this river, in the Saeenghale sub-basin of ULB (Emdadi et al., Citation2016; Ghavidel & Montaseri, Citation2014). Sari City is the capital of Mazandaran, Iran, located at 36°34′4″N, 53°3′31″E. Based on the literature, the climate of this city is mild, with abundant rain in winter (Vatanpour et al., Citation2020). The annual average temperature is reported to be 16.7°C; on average, the warmest month is August, at 25.2°C, and February is the coldest month. Additionally, the average precipitation of Sari City is 690 mm per year (Ghanbarpour et al., Citation2013). The lowest rainfall occurs in June (23 mm), while the highest rainfall (98 mm) occurs in December. Tajan River is known as the most important river of Sari City, since it provides the water supply for the whole watershed. Chahardangeh River is one of the principal tributaries of the Tajan River and originates in the mountains of the Kiyasar region. Chahardangeh River, with a total length of 95.73 km, is located in the Sari sub-basin of Mazandaran province, which has an area of 1198 km2. Moreover, the Sari sub-basin is located between 86°46′38.8″ to 30°55′19.2″N and 52°46′44.8″ to 6°56′30.1″E and contains several hydrometric stations. Hydrometric data from Varand Station on Chahardangeh River from July 1995 to August 2017 are employed in this study.

In the present study, for modeling discharge and SSL data, all data series are divided into two categories, namely 75% and 25% of the entire data series for training and testing, respectively. From Sarighamish Station, data for 15 years (0.75 × 244 = 184) are for training, and data for the remaining five years (0.25 × 244 = 61) are considered for testing. From Varand Station, data for 17 years (0.75 × 262 = 197) are for training, and data for the remaining six years (0.25 × 262 = 65) are for testing. Additionally, input data for modeling in this study are SSL and discharge data from the selected stations.

3. Methodology

3.1. Sediment rating curve (SRC)

The SRC is a mathematical equation relating water discharge and sediment concentration. The SRC is an essential tool when no manpower, regular sediment sampling or laboratory investigation is available. The SRC can be displayed as either an equation or a graph relating sediment and discharge, which helps to solve sediment-related problems efficiently. SRC equations and curves are mainly used in estimating sediment transport in rivers (Demirci & Baltaci, Citation2013). The SRC often comes in the form of a power equation (Kisi & Zounemat-Kermani, Citation2016). Additionally, a rating curve can help experts to estimate sediment loads using streamflow data. Equation (1), which is basically a power-law regression between SSL and discharge, is

(1) SSL = a Q^{b}

in which Q stands for discharge (m3/s), SSL denotes suspended sediment load (mg/L) and a and b are constant coefficients.
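As an illustration, the SRC of Equation (1) can be fitted by ordinary least squares in log-log space. The sketch below assumes this fitting procedure (the paper does not state which one was used), and the sample data are invented for illustration only:

```python
import numpy as np

def fit_src(discharge, ssl):
    """Fit the sediment rating curve SSL = a * Q**b by ordinary least
    squares in log-log space (one common way to fit a power-law rating
    curve; not necessarily the procedure used in the study)."""
    logq, logs = np.log(discharge), np.log(ssl)
    b, log_a = np.polyfit(logq, logs, 1)   # slope b, intercept ln(a)
    return np.exp(log_a), b

def predict_src(a, b, discharge):
    """Evaluate the fitted rating curve."""
    return a * discharge ** b

# Hypothetical records (not the study data)
q   = np.array([12.0, 35.0, 60.0, 110.0, 150.0])
ssl = np.array([40.0, 300.0, 900.0, 4000.0, 8000.0])
a, b = fit_src(q, ssl)
print(f"SSL ~= {a:.3f} * Q^{b:.3f}")
```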

3.2. Intrinsic time-scale decomposition (ITD)

ITD, a time–frequency analysis method, was first presented by Frei and Osorio (Citation2007). ITD has the ability to decompose complex, nonlinear and non-stationary signals into a group of PRCs with few iterations. The principal purpose of ITD is to separate a signal into several PRCs and a baseline. The decomposition is written as

(2) x(t) = \mathcal{L}x(t) + \mathcal{H}x(t) = L(t) + H(t)

in which \mathcal{L} is the baseline-extraction operator, defined from the local extrema of x(t), and \mathcal{H} = 1 - \mathcal{L} is the operator that extracts the proper rotation component. Throughout this section, L(t) = \mathcal{L}x(t) denotes the baseline signal and H(t) = \mathcal{H}x(t) = (1 - \mathcal{L})x(t) denotes the PRC.

The steps of the ITD algorithm are as follows.

  1. Define the extrema of the input signal x(t), with occurrence times τk, where k = 0, 1, 2, …, and τ0 = 0.

  2. Define x(t), L(t) and H(t) as the input signal and the operator outputs on the interval [0, τk+2], respectively. The operator \mathcal{L} is taken to be a piecewise linear function on each interval [τk, τk+1] during the baseline extraction process. The baseline is computed as

(3) L(t) = \mathcal{L}x(t) = L_k + \frac{L_{k+1} - L_k}{x_{k+1} - x_k}\,(x(t) - x_k), \quad t \in (\tau_k, \tau_{k+1}]

and

(4) L_{k+1} = \alpha \left[ x_k + \frac{\tau_{k+1} - \tau_k}{\tau_{k+2} - \tau_k}\,(x_{k+2} - x_k) \right] + (1 - \alpha)\, x_{k+1}

in which 0 < α < 1 is a constant parameter (here α = 1/2). Making the baseline monotonic between the extrema of the original signal x(t) is an essential step in defining the PRCs.

  3. H(t) can then be defined as follows for extracting the PRCs:

(5) H(t) = \mathcal{H}x(t) = (1 - \mathcal{L})x(t) = x(t) - L(t)

PRCs can be computed by subtracting the baseline from the input signal.
  4. The process is repeated (Equations 3 to 5) on the successive baselines until the baseline L_p(t) becomes a monotonic function, so that the single original signal is divided into PRCs and a residual:

(6) x(t) = \sum_{i=1}^{p} H_i(t) + L_p(t)

in which p is the number of PRCs constructed. The key advantages of the ITD model can be summarized as low computation time, avoidance of transient smoothing, reduced smearing in the time-scale space, iterative generation of optimum PRCs and constant shifting. Additionally, because of its feature extraction and filtering capabilities, this method adapts well to real (naturally occurring) data while maintaining their initial structure. More details about the ITD method can be found in Frei and Osorio (Citation2007) and Wang et al. (Citation2021).
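For readers who wish to experiment with the decomposition, the following Python sketch implements a single, simplified ITD sifting step following the reconstructed Equations (3) to (5). Endpoint treatment and degenerate cases of the original algorithm are omitted, and the function name and interface are illustrative rather than the authors' implementation:

```python
import numpy as np

def itd_step(x, alpha=0.5):
    """One simplified ITD sifting step: returns the proper rotation
    component H(t) and the baseline L(t) of a 1-D signal x."""
    x = np.asarray(x, dtype=float)
    # indices of local extrema (tau_k) -- simple sign-change detector
    d = np.diff(x)
    tau = np.where(np.sign(d[:-1]) * np.sign(d[1:]) < 0)[0] + 1
    tau = np.concatenate(([0], tau, [len(x) - 1]))
    xk = x[tau]

    # baseline knot values L_{k+1}, Eq. (4); endpoints kept as signal values
    L_knots = np.empty(len(xk), dtype=float)
    L_knots[0], L_knots[-1] = xk[0], xk[-1]
    for k in range(len(tau) - 2):
        interp = xk[k] + (tau[k + 1] - tau[k]) / (tau[k + 2] - tau[k]) * (xk[k + 2] - xk[k])
        L_knots[k + 1] = alpha * interp + (1 - alpha) * xk[k + 1]

    # baseline between extrema, Eq. (3): follows the signal, rescaled
    L = np.empty_like(x)
    for k in range(len(tau) - 1):
        seg = slice(tau[k], tau[k + 1] + 1)
        denom = xk[k + 1] - xk[k]
        scale = (L_knots[k + 1] - L_knots[k]) / denom if denom != 0 else 0.0
        L[seg] = L_knots[k] + scale * (x[seg] - xk[k])

    H = x - L   # proper rotation component, Eq. (5)
    return H, L

# Repeated application: H1, L1 = itd_step(x); H2, L2 = itd_step(L1); ...
# until the final baseline L_p is (approximately) monotonic, Eq. (6).
```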

3.3. Multi-objective genetic algorithm-based EPR (MOGA-EPR)

EPR is a hybrid, nonlinear regression method that is combined with a genetic algorithm (GA) (Giustolisi & Savic, Citation2006). In other words, a GA is used to search for the exponents of a symbolic formula based on a regression technique. This method expresses the physical polynomial structure of a system, and the main objective of such polynomial models is to relate the output variable to functions of the inputs. EPR can be summarized in two main steps, namely (a) structure generation by a GA, and (b) estimation of the constant parameters using the least squares (LS) method. The general mathematical form of the EPR model is

(7) y = \sum_{i=1}^{m} F(X, f(X), a_i) + a_0

where y is the dependent variable (output), a_i are parameters to be adjusted, F and f are user-defined functions, X is the input matrix and m is the number of terms excluding a_0. The input matrix can be written as

(8) X = \begin{bmatrix} x_{11} & x_{12} & x_{13} & \cdots & x_{1k} \\ x_{21} & x_{22} & x_{23} & \cdots & x_{2k} \\ x_{31} & x_{32} & x_{33} & \cdots & x_{3k} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_{N1} & x_{N2} & x_{N3} & \cdots & x_{Nk} \end{bmatrix} = \begin{bmatrix} X_1 & X_2 & X_3 & \cdots & X_k \end{bmatrix}

It is worth mentioning that the performance of the EPR model depends directly on certain factors, including the number of inputs and the selected functions (e.g. logarithmic, hyperbolic, exponential, etc.) (Giustolisi & Savic, Citation2006). In the GA, a population of candidate structures (individual chromosomes) is generated, and the coefficients of each structure are obtained by the LS method (minimizing the sum of squared errors). Thus, the general EPR equation is expressed as

(9) \hat{Y} = a_0 + \sum_{j=1}^{m} a_j\,(X_1)^{ES(j,1)} \cdots (X_k)^{ES(j,k)}\, f\!\left( (X_1)^{ES(j,k+1)} \cdots (X_k)^{ES(j,2k)} \right)

in which m is the maximum number of additive terms, X_i and \hat{Y} are the model inputs and output, respectively, f is a user-defined function, ES is the matrix of exponents selected by the GA and k is the number of input variables. More details about the EPR method can be found in Bonakdari et al. (Citation2017) and Ghaemi et al. (Citation2021).
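The structure of Equation (9) can be illustrated with a short sketch: given a candidate exponent matrix ES (normally proposed by the multi-objective GA), the additive terms are assembled and the coefficients a_0, a_j are obtained by least squares. This is a hedged illustration, not the MOGA-EPR implementation used in the study; the GA search itself is not shown and the inputs are assumed strictly positive so that negative exponents are well defined:

```python
import numpy as np

def epr_design_matrix(X, ES, inner_f=np.exp):
    """Build the EPR regression matrix for a fixed structure, Eq. (9).

    X  : (N, k) input matrix (assumed strictly positive).
    ES : (m, 2k) exponent matrix; in MOGA-EPR its entries would be
         drawn by the GA from a candidate set such as
         [-2, -1, -0.5, 0, 0.5, 1, 2] (an illustrative assumption).
    Column j of the result is prod_i X_i^ES[j,i] * inner_f(prod_i X_i^ES[j,k+i]).
    """
    N, k = X.shape
    cols = []
    for j in range(ES.shape[0]):
        outer = np.prod(X ** ES[j, :k], axis=1)
        inner = np.prod(X ** ES[j, k:], axis=1)
        cols.append(outer * inner_f(inner))
    return np.column_stack(cols)

def fit_epr_coeffs(X, y, ES):
    """Given a fixed exponent structure ES, estimate a0 and a_j by LS."""
    A = np.hstack([np.ones((len(y), 1)), epr_design_matrix(X, ES)])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs  # [a0, a1, ..., am]
```

In the full MOGA-EPR, a GA would search over many candidate ES matrices, using objectives such as fitting error and model parsimony to rank the resulting structures.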

3.4. Model tree

The MT method was first introduced by Quinlan (Citation1992). It is a machine learning method that deals with extensive data and, as a result, generates robust and rigorous models. MT is based on a binary decision tree; thus, it generates tree-based models and is used for continuous-class learning. It is noteworthy that decision trees are typically suited to categorical data, but MT is applicable to numerical data too (Quinlan, Citation1992). It is used to find a proper relationship between the input and output variables by fitting linear regression models at the leaf nodes.

MT is based on the “divide-and-conquer” method, in which the data either reach a leaf node or are split into subsets that are evaluated further within the process. In some cases, these divisions result in complex structures; the tree is therefore pruned back by replacing sub-trees with leaves. Computing the standard deviation (sd) of the data sets is the first step in the MT method; the data sets are then split to generate a decision tree. Eliminating overfitted outcomes (pruning) is the second step in the MT method. Pruning techniques, which are based on the leaf regression functions, are useful for omitting sub-trees.

The standard deviation reduction (SDR) is computed as

(10) SDR = sd(T) - \sum_i \frac{|T_i|}{|T|}\, sd(T_i)

in which sd signifies the standard deviation, T is the set of examples that reach the node and T_i denotes the subset of examples that have the i-th outcome of the candidate split. By inspecting all candidate splits, the MT method selects the one that maximizes SDR (i.e. minimizes the expected error). The strong points of the MT method include error estimation for unseen cases, the use of multivariate regression models in each node, simplification of linear models to minimize error and smoothing of predicted values. More information about the MT method can be found in Ghaemi et al. (Citation2019).
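A minimal sketch of the split-selection rule of Equation (10) is given below; it evaluates the standard deviation reduction for candidate splits of a single attribute and is only illustrative of one node of an M5-style model tree (pruning and the leaf regression models are omitted):

```python
import numpy as np

def sdr(parent, subsets):
    """Standard deviation reduction, Eq. (10):
    SDR = sd(T) - sum_i (|T_i| / |T|) * sd(T_i)."""
    return np.std(parent) - sum(len(s) / len(parent) * np.std(s) for s in subsets)

def best_split(x, y):
    """Pick the split value of attribute x that maximizes SDR of target y."""
    best_value, best_gain = None, -np.inf
    for v in np.unique(x)[:-1]:            # exclude max so neither side is empty
        left, right = y[x <= v], y[x > v]
        gain = sdr(y, [left, right])
        if gain > best_gain:
            best_value, best_gain = v, gain
    return best_value, best_gain
```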

3.5. Hybrid ITD decomposition-based models

In nature, hydro-climatic time series often comprise many intrinsic mode functions with different frequencies and exhibit complex nonlinear characteristics. In addition, the effects of climate change (such as extreme weather) and human activities (such as environmental pollution and deforestation) are becoming prominent; these adverse influences may lead to deviation from normal climatic patterns, and thus accurate prediction can be difficult to achieve. Consequently, directly adopting the original signal (or a single resolution component) as the input variable of a single prediction model is often not advisable.

Signal decomposition algorithms can decompose hydrological time series into a set of relatively stable sub-series and reduce modeling difficulty. Hence, the sub-series decomposed by the pre-processing algorithm, ITD, each contain only a similar scale of hydrological variability and are thus easier to predict. ITD is first used to decompose the suspended sediment load and river flow time series into a set of sub-series, and then standalone MT and MOGA-EPR models are applied to build adequate models for the prediction of each sub-series according to its own characteristics. A schematic diagram of the proposed hybrid ITD-MT and ITD-EPR models is given in Figure 2.

Figure 2. Schematic diagram of hybrid ITD-based–MT/EPR model.


The pre-processing-based models (i.e. ITD-MT and ITD-EPR) comprise the following main steps. In the first step, the original data are divided into two parts, namely the training and testing parts. Secondly, the ITD procedure is employed to decompose the original input and output time series into several PRC components H_i(t) (i = 1, 2, 3, …, n). In the next step, for each extracted PRC component (for example PRC1), MT and EPR models are established as SSL prediction tools to simulate the decomposed PRC component, with each component computed from the corresponding sub-series (PRC1) of the input variables. Finally, the predicted values of all extracted PRC components from the MT and EPR models are aggregated to generate the SSL prediction, and the error of the predicted data set is then evaluated.
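The decompose-predict-aggregate scheme described above can be summarized in compact Python; `decompose` and `fit_model` are placeholders for an ITD routine (e.g. repeated application of the itd_step sketch given earlier) and an EPR/MT regressor, respectively, and the interface is an assumption for illustration rather than the authors' code:

```python
import numpy as np

def hybrid_itd_forecast(Q, SSL, decompose, fit_model, n_train):
    """Sketch of the 'decomposition and ensemble' scheme.

    decompose(signal) -> list of sub-series (PRCs plus residual baseline)
    fit_model(X, y)   -> fitted regressor with a .predict method
    """
    q_parts = decompose(Q)
    s_parts = decompose(SSL)
    preds = []
    for q_sub, s_sub in zip(q_parts, s_parts):
        X_tr, y_tr = q_sub[:n_train].reshape(-1, 1), s_sub[:n_train]
        X_te = q_sub[n_train:].reshape(-1, 1)
        model = fit_model(X_tr, y_tr)       # one model per sub-series
        preds.append(model.predict(X_te))
    return np.sum(preds, axis=0)            # aggregate sub-series forecasts
```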

To summarize, the hybrid ITD and MT/EPR predictive approaches are employed according to the “decomposition and ensemble” idea. The decomposition is used to simplify the forecasting task, while the aim of the ensemble is to formulate a consensus prediction of the original data. In this study, data sets from Sarighamish and Varand Stations in Iran are selected to validate the approach, ensuring that the pattern of the extracted PRC components (e.g. PRC1, PRC2, PRC3) is reflected in the prediction model and enhancing the SSL prediction accuracy.

3.6. Statistical performance evaluation measures

In order to determine the best model, six statistical criteria, namely the coefficient of determination (R2), the root mean square error (RMSE), Legates–McCabe’s index (LMI), the ratio of RMSE to the standard deviation of the observations (RSD), Willmott’s index of agreement (WI) and the Akaike information criterion (AIC), are implemented as shown in Equations (11) to (16) (Moeeni et al., Citation2017; Sun et al., Citation2021; Zeynoddin et al., Citation2019):

(11) R^2 = \left( \frac{\sum_{i=1}^{N} (SSL_o - \overline{SSL_o})(SSL_m - \overline{SSL_m})}{\sqrt{\sum_{i=1}^{N} (SSL_o - \overline{SSL_o})^2}\, \sqrt{\sum_{i=1}^{N} (SSL_m - \overline{SSL_m})^2}} \right)^2, ideal value = 1

(12) RMSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (SSL_m - SSL_o)^2}, ideal value = 0

(13) LMI = 1 - \left[ \frac{\sum_{i=1}^{N} |SSL_o - SSL_m|}{\sum_{i=1}^{N} |SSL_o - \overline{SSL_o}|} \right], ideal value = 1

(14) RSD = \frac{RMSE}{STDEV_{obs}}, ideal value = 0

(15) WI = 1 - \left[ \frac{\sum_{i=1}^{N} (SSL_o - SSL_m)^2}{\sum_{i=1}^{N} \left( |SSL_m - \overline{SSL_o}| + |SSL_o - \overline{SSL_o}| \right)^2} \right], ideal value = 1

(16) AIC = N \ln(MSE) + 2K, ideal value = minimum

where SSL_o and SSL_m are the observed and modeled values of SSL, respectively, \overline{SSL_o} and \overline{SSL_m} denote the average values of the observed and modeled SSL data, respectively, N is the total number of data points and K indicates the number of model parameters.
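For reference, the six criteria of Equations (11) to (16) can be computed as in the following sketch; it assumes the population standard deviation for STDEV_obs and aligned arrays of observed and modeled SSL:

```python
import numpy as np

def evaluation_metrics(obs, mod, n_params):
    """Compute R2, RMSE, LMI, RSD, WI and AIC (Eqs. 11-16)."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    n = len(obs)
    r2   = np.corrcoef(obs, mod)[0, 1] ** 2
    mse  = np.mean((mod - obs) ** 2)
    rmse = np.sqrt(mse)
    lmi  = 1 - np.sum(np.abs(obs - mod)) / np.sum(np.abs(obs - obs.mean()))
    rsd  = rmse / np.std(obs)
    wi   = 1 - np.sum((obs - mod) ** 2) / np.sum(
        (np.abs(mod - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    aic  = n * np.log(mse) + 2 * n_params
    return dict(R2=r2, RMSE=rmse, LMI=lmi, RSD=rsd, WI=wi, AIC=aic)
```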

4. Results and discussion

The main aim of this study is to evaluate the accuracy of ITD-EPR/MT in predicting SSL at Sarighamish and Varand Stations. Thus, in this section, the performance of the models with different input combinations (some at different lag times) for SSL prediction is investigated. In the present study, for modeling the discharge and SSL data, all data series are divided into two parts, namely 75% for the training and 25% for the testing stages. For instance, for Sarighamish Station, 15 years of data (0.75 × 244 = 184) are used for training, and the remaining five years of data (0.25 × 244 = 61) for testing. For Varand Station, 17 years of data (0.75 × 262 = 197) are used for training, and the remaining six years of data (0.25 × 262 = 65) for testing. After that, the performance of the standalone EPR and MT models, and of their integration with ITD pre-processing, is investigated using several evaluation benchmarks (namely R2, RMSE, WI, LMI, RSD and AIC) in the training and testing stages at the two proposed stations.

4.1. Optimum input variable selection for SSL prediction

In this study, the proposed prediction models are developed in the MATLAB® environment. One of the most important steps in the development of the model architectures is to determine the best input variables for modeling. The original (non-ITD) dataset, with its statistically significant lagged variables determined by cross-correlation functions (CCFs) and partial autocorrelation functions (PACFs) operating at a 95% confidence level, is applied as input for developing the models.

For the time-series datasets of the present study, the time delays of the input/output parameters (SSL and Q) are computed by the above-mentioned functions. PACF and CCF diagrams for Sarighamish and Varand Stations are shown in Figure 3, in which the vertical axis indicates the time delay (lag number) and the horizontal axis shows the PACF and CCF values. The time delays applied to the models are marked in all diagrams. According to Figure 3, two lags of Q are important for modeling SSL at Sarighamish Station. Moreover, the PACF is applied for lag-time selection of the output variable (SSL). Clearly, the significant PACF lag for Sarighamish Station equals two, whereas no time lag is determined for either the input or output variables at Varand Station.
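A lag-screening step of this kind can be sketched with standard time-series tools; the snippet below uses the statsmodels PACF and CCF routines with an approximate 95% confidence band of ±1.96/√N and is only indicative of the procedure (the exact settings used in the study are not reported):

```python
import numpy as np
from statsmodels.tsa.stattools import pacf, ccf

def significant_lags(ssl, q, max_lag=12):
    """Return lags of SSL (via PACF) and Q (via CCF with SSL) whose
    correlation exceeds the approximate 95% confidence band."""
    n = len(ssl)
    conf = 1.96 / np.sqrt(n)
    pacf_vals = pacf(ssl, nlags=max_lag)
    # lag convention of statsmodels ccf assumed; swap arguments if the
    # opposite lead/lag direction is wanted
    ccf_vals = ccf(q, ssl)[: max_lag + 1]
    ssl_lags = [k for k in range(1, max_lag + 1) if abs(pacf_vals[k]) > conf]
    q_lags   = [k for k in range(max_lag + 1) if abs(ccf_vals[k]) > conf]
    return ssl_lags, q_lags
```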

Figure 3. Partial autocorrelation function (PACF) and cross-correlation function (CCF) between SSL and river discharge for Sarighamish and Varand Stations.


After having considered the selected input variables for the proposed stations, the best models standing at an acceptable level of accuracy returned by EPR for Sarighamish and Varand Stations are expressed, respectively, as follows:

(17) SSL(Sarighamish) = 20.1805 Q^2 SSL(t−2) × exp(0.5 Q(t−1)^2 Q^2 SSL(t−2)^2 SSL(t−1)) + 53.0092 Q(t−1)^0.5 Q SSL(t−1)^1.5 × exp(2 Q(t−1) Q^1.5 SSL(t−2)^1.5 SSL(t−1))

(18) SSL(Varand) = 2.1013 Q SSL(t−1) × exp(2 Q^2 SSL(t−1)) + 1.0409 Q^2 exp(2 Q^2 SSL(t−1)) + 0.30265 Q^2 exp(Q^2 SSL(t−1))

4.2. Prediction results at Sarighamish Station

As seen in Table 2, the integration of ITD with EPR/MT gives better accuracy (i.e. generally the largest R2 and the lowest RMSE) than the standalone models, which indicates that intrinsic time-scale decomposition has a strong influence in increasing the accuracy of the DDM models at Sarighamish and Varand Stations. Regarding the performance comparison of the proposed methods in the training stage, it is evident that the best predicted SSL values (relative to the observed values) are yielded by ITD-EPR (highest R2 = 0.95, lowest RMSE = 1385.4 and RSD = 0.257), compared with the inferior results of ITD-MT (R2 = 0.93, RMSE = 1537.1 and RSD = 0.285) and of standalone models such as EPR (R2 = 0.82, RMSE = 2293.7 and RSD = 0.426).

Table 2. Performances of the proposed hybrid and standalone models at Sarighamish Station in training and testing stages.

In the testing stage, the evaluation metrics RSD (0.35) and RMSE (291.81) of ITD-EPR outperform those of the other methods, such as SRC, which stands at the second rank with roughly 40% higher RSD and 43.73% higher RMSE. Despite the acceptable performance of ITD-MT on the training dataset, its estimated SSL values are less accurate than those of the standalone EPR (by 15.18% in R2 and 37.5% in LMI). In addition, ITD-EPR is selected as the most appropriate model based on the AIC (700.484), with the lowest error.

Furthermore, in order to achieve a thorough understanding of the efficiency of the proposed models, scatter plots of the observed and estimated values in the training and testing stages are displayed in Figure 4. The scatterplots show a linear regression line (SSL_for = a·SSL_obs + b, where a is the slope and b is the intercept) between the measured and estimated values. A greater R2 indicates a better agreement between the observed and forecasted SSL values, and the ITD-EPR model outperforms the other applied methods.

Figure 4. Scatterplots of training and testing results by conventional and hybrid models at Sarighamish Station.


Peak SSL values observed and forecasted by the different methods in the training and testing stages are presented in Figure 5. According to Figure 5, among the models, the difference between the observed and predicted SSL is largest for the SRC model. On the contrary, the SSL predicted by ITD-EPR (the solid blue line) is the closest to the observed SSL time series (the dotted black line), which explains the superior accuracy of this model compared with the other models. It is also obvious from Figure 5 that the dispersion of relative errors between the observed and forecasted SSL is closest to zero for the ITD-EPR model.

Figure 5. Comparison of conventional and hybrid models for SSL prediction using time variation graphs at Sarighamish Station.


4.3. Prediction results at Varand Station

Similarly, the evaluation metrics R2, RMSE, WI, LMI and RSD are applied for SSL prediction at Varand Station for the training and testing datasets, and the results are presented in Table 3. In terms of SSL prediction for the training dataset, the integration of ITD and EPR (ITD-EPR), with the lowest error (RSD = 0.35; RMSE = 45.97; WI = 0.96), shows significant superiority over the other models, such as ITD-MT (RSD = 0.48; RMSE = 62.48; WI = 0.92), which stands at the second rank, whereas the SRC model (RSD = 0.67; WI = 0.77; RMSE = 88.62) generates poor performance in SSL estimation (Table 3).

Table 3. Performance of the proposed hybrid and standalone models at Varand Station in the training and testing stages.

Similar to the training stage results, the comparison of the proposed models in Table 3 indicates that ITD-EPR (RMSE = 168.91; WI = 0.93; LMI = 0.69) and SRC (RMSE = 321.08; WI = 0.64; LMI = 0.48) attain the highest and lowest SSL forecasting accuracy in the testing stage, respectively. Moreover, an evaluation of the efficiency of ITD in SSL prediction indicates that integrating this pre-processing method can improve the accuracy of the proposed models. Table 3 shows that, although the MT model estimates SSL values poorly (R2 = 0.69 and LMI = 0.5), its integration with the ITD method increases the R2 and LMI values by roughly 27.53% and 10%, respectively.

Additionally, scatter plots, relative errors and time series of the estimated SSL values versus the observed ones are presented in Figures 6 and 7. In terms of the scatter plots, the regression slope of the SSL values for the ITD-EPR model is closest to the ideal line, although a number of the estimated SSL values are underestimated. Moreover, the hybrid models perform better than the other models in estimating peak values. Taking the maximum SSL value as an example (1123.678), the proposed methods, namely EPR, MT, SRC, ITD-EPR and ITD-MT, underestimate it by about 18.95%, 35.31%, 63.06%, 5.11% and 13.56%, respectively. In the case of the relative error for the testing dataset, the relative error values of ITD-EPR vary between −5 and 5, whereas those for the best standalone model (EPR) vary between −8 and 8.

Figure 6. Scatterplots of training and testing results by conventional and hybrid models at Varand Station.


Figure 7. Comparison of conventional and hybrid models for SSL prediction using time variation graphs at Varand Station.


4.4. Discussion and study limitations

The SSL simulations in the present study reveal obvious differences between the original DDMs and the data decomposition-based models, indicating the significance of the training framework in prediction models. As stated above, it is often hard for a standalone prediction model to fully and accurately reflect the formation and changing mechanisms of natural hydrological variables such as SSL, since only one resolution component is used to establish the predicting module. This means that the other resolution sub-components in the original SSL time series cannot be separated effectively. To avoid this problem, decomposition algorithms are suggested to select the various resolution intervals, after which the features of each sub-series can be separated. Therefore, the performance of the hybrid methods (such as ITD-MT and ITD-EPR) is superior to that of the standard MT and EPR methods, which matches the results of recent similar studies (Napolitano et al., Citation2011; Wu & Huang, Citation2009).

In addition, for further comparison, box plots are employed to evaluate the accuracy of the proposed models in SSL forecasting at the two proposed stations. In this study, the box plots indicate the spread of the relative errors between the observed and estimated SSL values based on quartiles, so that the whiskers show the variations outside the 25th and 75th percentiles (Figure 8) (Prasad et al., Citation2018). Regarding the distribution of the relative errors of the models, the higher and lower capability of ITD-EPR and SRC, respectively, are clear compared with the other approaches. Furthermore, it is obvious that a large proportion of the SSL values estimated by the models are underestimated.

Figure 8. Boxplots of relative predicted error (ton/day) by the hybrid ITD-EPR model compared with the standalone EPR, MT, SCR models and the hybrid ITD-MT model for both tested regions.

Figure 8. Boxplots of relative predicted error (ton/day) by the hybrid ITD-EPR model compared with the standalone EPR, MT, SCR models and the hybrid ITD-MT model for both tested regions.

Moreover, the prediction performances of the various methods for peak SSL values are compared. The thresholds for the sediment series at Sarighamish and Varand Stations are equal to 11,164 and 564.721 ton/day, respectively. Accordingly, 10 SSL observations exceeding the threshold are selected, and their corresponding forecasted values by the proposed models are shown in Figure 9. In the case of Sarighamish Station, except for the SRC model, which has the lowest accuracy for all maximum SSL values, all other models have approximately the same error for SSL peak values of less than 10,000 ton/day. However, for the other peak values, the efficiency of the ITD technique in SSL forecasting is undeniable. The dominant superiority of ITD-EPR in SSL forecasting is summarized by its average error percentage (13.60%), which is significantly lower than the corresponding values for ITD-MT (26.79%), EPR (32.26%), MT (41.45%) and SRC (75.56%) over the 10 sediment peaks. In the case of Varand Station, all models are roughly incapable of predicting the SSL peak values, especially those greater than 10,000 ton/day. Nevertheless, ITD-EPR performs well in predicting the peak SSL values, with an average error percentage of 25.88% compared with the corresponding ITD-MT (26.52%), EPR (47.51%), MT (45.19%) and SRC (67.16%) values for the 10 sediment peaks.
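The average error percentage over the selected peaks can be reproduced with a small routine such as the following sketch, which assumes aligned arrays of observed and predicted SSL and simply takes the largest observations above the station threshold (names are illustrative):

```python
import numpy as np

def peak_error_percentage(obs, pred, threshold, n_peaks=10):
    """Average absolute percentage error over the largest observed SSL
    values exceeding `threshold` (at most `n_peaks` of them)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    idx = np.where(obs > threshold)[0]
    idx = idx[np.argsort(obs[idx])[::-1][:n_peaks]]   # largest peaks first
    return np.mean(np.abs(pred[idx] - obs[idx]) / obs[idx]) * 100.0
```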

Figure 9. Comparison of extreme value predictions of SSL using five conventional and hybrid models at Sarighamish and Varand Stations.


Although the proposed model has acceptable accuracy in SSL prediction, other evolutionary DDMs and modern algorithms could be integrated with pre-processing methods, such as variational mode decomposition (VMD) and complete ensemble empirical mode decomposition (CEEMD), to create more accurate SSL prediction models. With the aim of more accurate SSL estimation, larger data samples or different input variables with daily or hourly timescales may also be attempted. Consequently, larger data samples and different input variables are suggested as potential measures for improving SSL forecasting accuracy in future work.

5. Conclusions

In this study, the capability of integrating ITD and DDM methods is investigated for SSL forecasting at Sarighamish and Varand Stations in Iran. This study focuses on the effect of the time delay of the input/output parameters on SSL prediction. A comparison of the results indicates that the ITD data-decomposition method, by decomposing the datasets and resolving the non-stationary behavior of the SSL time series, has a notable influence on increasing model accuracy. For instance, at Sarighamish Station, the SSL estimated using ITD-EPR has lower errors in terms of RMSE (291.81) and RSD (0.35) compared with the other methods. Comparing the performance of EPR and ITD-EPR, the computed RMSE decreases from 431.06 to 291.81 and the RSD also decreases, from 0.55 to 0.35. At Varand Station, the outcomes demonstrate that EPR and MT become more accurate with the help of the ITD algorithm. However, the results also indicate that the MT model attains lower accuracy than the other DDM models for SSL prediction in terms of R2 and WI (0.73 and 0.91 at Sarighamish Station and 0.69 and 0.79 at Varand Station, respectively).

Additionally, the proposed models are compared with the sediment rating curve (SRC) empirical method. Outcomes illustrate that SRC provides the greatest average error percentage values for both Sarighamish (75.56%) and Varand (67.16%) Stations compared with those from DDMs.

The application of the decomposition method is thus highly recommended for SSL forecasting with the same scale of input/output variables and watershed properties, in order to evaluate the generalization of DDMs. Although other nonlinear programming simulation methods can be applied to identify contributors to SSL in river bodies, they may typically be subject to a prohibitive computational burden, especially for the estimation of large and complex river systems.

It should be noted that the size of the training dataset in DDMs has a great impact on the estimation accuracy. It can be expected that the accuracy increases to a remarkable level as the size of the dataset increases, since this improves the capability of the models to predict SSL variability over different periods. The outcomes of this study can assist in selecting a suitable range of data for providing an optimum model for SSL estimation. The future of SSL prediction by different evolutionary DDMs and newly created modern algorithms appears very bright and promising.

Disclosure statement

No potential conflict of interest was reported by the authors.

References