
Swarm Intelligence-Based Feature Selection for Amphetamine-Type Stimulants (ATS) Drug 3D Molecular Structure Classification

Pages 914-932 | Received 15 May 2021, Accepted 06 Aug 2021, Published online: 20 Aug 2021

ABSTRACT

Swarm intelligence-based feature selection techniques are implemented in this work to increase classifier performance in classifying Amphetamine-type Stimulants (ATS) drugs. The recently proposed 3D Exact Legendre Moment Invariants (3D-ELMI) molecular descriptors serve as the 3D molecular structure representation of ATS drugs and are utilized as the dataset in this study. However, a large number of descriptors may degrade classifier performance. To address this issue, this research applies three swarm algorithms with the k-Nearest Neighbor (k-NN) classifier in a wrapper feature selection technique to ensure that only relevant descriptors are selected for the ATS drug classification task. For this purpose, binary versions of the swarm algorithms equipped with the S-shaped (sigmoid) transfer function, namely the binary whale optimization algorithm (BWOA), the binary particle swarm optimization algorithm (BPSO), and the new binary manta-ray foraging optimization algorithm (BMRFO), are developed for feature selection. Their performance is evaluated and compared on seven performance criteria. Furthermore, the optimal feature subset is then evaluated with seven different classifiers. Findings from this study reveal the dominance of BWOA, which obtains the highest classification accuracy with a small feature size.

Introduction

The introduction of new drugs of abuse on the illegal drug market presents analytical toxicologists with a steep challenge. Forensic drug analysis methods for the identification of existing and emerging ATS drugs are reviewed in the following works (Chung and Choe Citation2019; Harper, Powell, and Pijl Citation2017; Liu et al. Citation2018). Each of these techniques has several pros and cons that must be taken into consideration. Drawbacks include lengthy running times, complex testing processes, costly facilities that require well-trained technicians, outdated analytical methods (libraries), and inconsistent results across different test kits. Despite their proven utility, current analytical methods are constantly being improved and optimized to detect and classify existing and emerging illicit substances with increased sensitivity and selectivity (Brandt and Kavanagh Citation2017; Drummer and Gerostamoulos Citation2013; Carroll et al. Citation2012; Reschly-Krasowski and Krasowski Citation2018). It is also important to develop new methods for determining these new substances to keep up with recent developments in the illegal drug trade.

Molecular similarity analysis (Stumpfe and Bajorath Citation2011) is one of the alternative cheminformatics methods to forensic drug analysis presently available (Bero et al. Citation2017; Krasowski and Ekins Citation2014). The assumption made by the molecular similarity analysis approach is that molecules with similar structures are more likely to have the same experimental properties (Grisoni, Consonni, and Todeschini Citation2018). Furthermore, this approach requires informative and discriminative molecular descriptors (Todeschini and Consonni Citation2010) that provide information about the molecular features of the target candidate molecule in the chemical database (Grisoni, Consonni, and Todeschini Citation2018; Todeschini and Consonni Citation2010). One disadvantage related to cheminformatics is the high dimensionality of molecular descriptors (Lavecchia Citation2015). The principal steps of molecular descriptor generation are depicted in Figure 1 (Grisoni, Consonni, and Todeschini Citation2018). According to the figure, dimensionality reduction is required immediately after descriptor generation to remove redundant and irrelevant information from the original molecular descriptors. This provides the best subset of descriptors for computational models such as similarity search analysis (Krasowski et al. Citation2009), quantitative structure-activity relationship (QSAR) analysis (Cerruela García et al. Citation2019; Panwala et al. 2017), and machine learning approaches in cheminformatics (Khan and Roy Citation2018; Lo et al. Citation2018; Mitchell B.O. Citation2014; Vo et al. Citation2020).

Figure 1. Principal steps of molecular descriptors generation for computational models


Feature selection is one of the popular dimensionality reduction approaches. The goal of this approach is to select a small subset of relevant features by removing redundant, irrelevant, and noisy features (Idakwo et al. Citation2018; Shahlaei Citation2013). In cheminformatics, descriptor selection is essential for several reasons (Goodarzi, Dejaegher, and Heyden Citation2012): (i) increasing the computational model's interpretability and understandability through fewer descriptors; (ii) avoiding overfitting by eliminating noisy and redundant descriptors; (iii) producing a fast and effective computational model; and (iv) preventing activity cliffs.

Swarm Intelligence (SI) algorithms have been applied to feature selection (Brezočnik, Fister, and Podgorelec Citation2018; Nayar, Ahuja, and Jain Citation2019; Nguyen-Tri et al. Citation2020) and proven capable of solving NP-hard combinatorial search problems such as the selection of an optimal feature subset from high-dimensional features (Albrecht Citation2006). SI algorithms are gaining prominence in feature selection because of their ability to escape local optima, their simplicity, and their ease of implementation (Ismail Sayed et al. Citation2017).

The original intention of SI algorithms is to solve continuous optimization problems. Researchers have taken advantage of their flexibility to apply them to feature selection problems by proposing binary versions. One common way to convert a continuous solution to a binary solution in an SI algorithm is to use a transfer function. Families of transfer functions in the literature include the S-shaped (Hussien et al. Citation2019), V-shaped (Hussien, Houssein, and Hassanien Citation2017), time-varying (M. Mafarja et al. Citation2018), and quadratic (Algamal et al. Citation2020; Too, Abdullah, and Saad Citation2019a) transfer functions.

In cheminformatics, the implementation of the SI-based feature selection approach on molecular descriptors has been shown in several works using particle swarm optimization (PSO) (Khajeh, Modarress, and Zeinoddini-Meymand Citation2013), the firefly algorithm (FA) (Fouad et al. Citation2018), the salp swarm algorithm (SSA) (Hussien, Hassanien, and Houssein Citation2017), the grasshopper optimization algorithm (GOA) (Algamal et al. Citation2020), etc.

The No-Free-Lunch (NFL) theorem for search and optimization derived by Wolpert and Macready (Citation1997) motivates this research to examine the universality of the binary particle swarm optimization (BPSO) algorithm, the binary whale optimization algorithm (BWOA), and the binary manta ray foraging optimization (BMRFO) algorithm as descriptor selection approaches for the ATS drug classification problem. The performance of the algorithms is evaluated using seven performance evaluation criteria. The optimally selected feature subset was tested using the k-Nearest Neighbor (k-NN) classifier.

The remainder of this paper is organized as follows: the next section reveals the necessary material and methods used in the study. The section is comprised of several subsections that briefly describe the overview of the 3D-ELMI molecular descriptors dataset, followed by the theoretical explanations regarding BPSO, BWOA, and BMRFO algorithms and their application in feature selection. Section 3 displays and discusses the obtained empirical results of feature selection and classification by BPSO, BWOA, and BMRFO algorithms implementation. Finally, Section 4 concludes with some recommendations for future work.

Materials and Methods

The process flow of the proposed ATS drug classification system is presented in Figure 2. Firstly, the existing 3D-ELMI molecular descriptors dataset is obtained. Next, the feature selection methods BPSO, BWOA, and BMRFO are used for selecting the optimal feature subset. The selected feature subset is then fed to the k-Nearest Neighbor (k-NN) algorithm to perform the classification process.

Figure 2. Process flow of the proposed ATS drug classification system


Overview of 3D Exact Legendre Moment Invariants (3D-ELMI) Molecular Descriptors Dataset

Before performing ATS drug classification, the molecular descriptors of ATS and non-ATS drugs must be generated as input to the feature selection techniques and the classifier. However, this study directly utilized the available dataset produced using the novel 3D-ELMI molecular descriptors introduced by Pratama (Citation2017) on 7190 samples of drug molecules (3595 ATS drugs and 3595 non-ATS drugs). These descriptors generate a one-dimensional vector of 1185 independent features describing the 3D molecular structure of each drug molecule. Table 1 outlines the attributes contained in the dataset.

Table 1. Attributes description

The Binary Version of Swarm-Intelligence Algorithms

The solutions in the feature selection problem are restricted to the binary values 0 and 1. As in the native algorithms, the search agents (solutions) in the binary versions repetitively update their locations to any position in the search space, following the leading search agent found so far. A transfer function is one way to convert the real position of a search agent to binary values (Mirjalili and Lewis Citation2013). The transfer function forces search agents to travel in a binary space by defining a probability that updates each element of the search agent to 1 (selected) or 0 (not selected). This study adopted an S-shaped transfer function, the sigmoid function, which has been implemented in several studies (Al-Tashi et al. Citation2019; Eid Citation2018; Too, Abdullah, and Saad Citation2019b). Equation 1 shows the mathematical formulation of the sigmoid transfer function (Al-Tashi et al. Citation2019; Panwala et al.):

(1) $S(x) = \dfrac{1}{1 + e^{-10(x - 0.5)}}$

where x is the current position (continuous value) of the search agent. Then, x is updated as in Equation 2 (Kennedy and Eberhart Citation1997) based on the probability value $S(x)$ obtained from Equation 1:

(2) $x = \begin{cases} 1, & \text{if } S(x) \ge rand \\ 0, & \text{otherwise} \end{cases}$

where rand is a random number in the [0, 1] interval.
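Equations 1 and 2 can be sketched in code as follows. This is an illustrative NumPy implementation, not the authors' Matlab code; the element-wise vectorization over a position vector is an assumption:

```python
import numpy as np

def sigmoid_transfer(x):
    # S-shaped (sigmoid) transfer function from Equation 1:
    # maps a continuous position to a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-10.0 * (x - 0.5)))

def binarize(x, rng):
    # Equation 2: a dimension becomes 1 (feature selected) when its
    # probability is at least a uniform random draw, else 0.
    return (sigmoid_transfer(x) >= rng.random(np.shape(x))).astype(int)

rng = np.random.default_rng(0)
selected = binarize(np.array([0.0, 0.5, 1.0]), rng)
```

Each element of `selected` is 0 or 1, so a continuous search-agent position maps directly onto a feature mask over the 1185 descriptors.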

Binary Particle Swarm Optimization Algorithm

Particle Swarm Optimization (PSO), an algorithm that simulates bird flocking, was proposed by Kennedy and Eberhart (Citation1995). The PSO population is made of n particles with two properties: speed (velocity) and position. Kennedy and Eberhart (Citation1997) introduced the initial binary PSO (BPSO) to solve binary optimization problems. To find the best solution, a particle moves around the search space seeking the global maximum or minimum based on its own experience and knowledge (Gupta, Baghel, and Iqbal Citation2018). The optimal position of each particle is recorded as Pbest, while Gbest is the global best solution in the population. The velocity of a particle is updated in each iteration t as in Equation 3:

(3) $v_{id}^{t+1} = w^{t} \times v_{id}^{t} + c_1 \times r_1 \times (Pbest_{id}^{t} - x_{id}^{t}) + c_2 \times r_2 \times (Gbest_{id}^{t} - x_{id}^{t})$

where x, v, and i represent the position, the velocity, and the order of the particle in the population. d denotes the search space dimension, w indicates the inertia weight, $c_1$ and $c_2$ represent the acceleration coefficients, $r_1$ and $r_2$ are random vectors in [0, 1], and t is the iteration number.

For BPSO, the sigmoid transfer function is applied to the velocity to convert it to a probability value:

(4) $S(v_{id}^{t+1}) = \dfrac{1}{1 + e^{-10(v_{id}^{t+1} - 0.5)}}$

Finally, the new position is updated using Equation 5.

(5) $x_{id}^{t+1} = \begin{cases} 1, & \text{if } S(v_{id}^{t+1}) \ge rand \\ 0, & \text{otherwise} \end{cases}$
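One BPSO update step (Equations 3 to 5) can be sketched as below. This is a NumPy illustration, not the authors' implementation; the coefficient values `w`, `c1`, and `c2` are placeholders, since the actual settings are given in Table 2:

```python
import numpy as np

def bpso_step(x, v, pbest, gbest, w=0.9, c1=2.0, c2=2.0, rng=None):
    # One BPSO iteration for a single binary particle of dimension d.
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    # Equation 3: velocity update from personal and global best positions.
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    # Equation 4: sigmoid transfer of the velocity to a probability.
    s = 1.0 / (1.0 + np.exp(-10.0 * (v - 0.5)))
    # Equation 5: stochastic binarization of the new position.
    x = (s >= rng.random(x.shape)).astype(int)
    return x, v
```

Each call returns the particle's new binary feature mask together with its updated continuous velocity, which is carried over to the next iteration.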

Binary Whale Optimization Algorithm

The whale optimization algorithm (WOA), inspired by the humpback whale hunting mechanism called bubble-net foraging, was proposed by Mirjalili and Lewis (Citation2016). The binary WOA (BWOA) was first proposed by Zamani and Nadimi-Shahraki (Citation2016) for feature selection in disease diagnosis. In the initial stage, the WOA algorithm assumes the target prey to be the best search agent, close to the optimum. Other whales (search agents) then update their positions based on the best search agent. The WOA swarming behavior is simulated by the mathematical formulations below:

(6) $\vec{D} = \left| \vec{C} \cdot \vec{X}^{*}(t) - \vec{X}(t) \right|$

(7) $\vec{X}(t+1) = \vec{X}^{*}(t) - \vec{A} \cdot \vec{D}$

where t is the iteration number, $\vec{X}(t)$ denotes the candidate search agent at iteration t, and $\vec{X}^{*}(t)$ indicates the best search agent (prey) found so far. $\vec{A}$ and $\vec{C}$ are coefficient vectors mathematically formulated in Equation 8 and Equation 9. $\vec{D}$ indicates the distance vector between the whale (search agent) and the prey (best search agent). In each iteration, $\vec{X}^{*}(t)$ is updated when there is a better solution.

(8) $\vec{A} = 2\vec{a} \cdot \vec{r} - \vec{a}$

(9) $\vec{C} = 2\vec{r}$

where $\vec{r}$ is a random vector in [0, 1]. The value of a linearly decreases from 2 to 0 over the iterations. The bubble-net behavior of humpback whales in the exploitation phase is designed based on two mechanisms: (1) Shrinking encircling of prey: the humpback whale moves in a shrinking circle along a spiral-shaped path toward the prey by decreasing the value of a in Equation 8. $\vec{A}$ is a random value in the interval $[-a, a]$,

(10) $a = 2 - t\dfrac{2}{MaxIter}$

where t indicates the iteration number and MaxIter is the maximum number of iterations. (2) Spiral updating position: a logarithmic spiral function is used to imitate the helix-shaped movement of humpback whales between the candidate whale (search agent) $\vec{X}(t)$ and the prey (best search agent) $\vec{X}^{*}(t)$ found so far. This procedure is mathematically expressed in Equation 12:

(11) $\vec{D}' = \left| \vec{X}^{*}(t) - \vec{X}(t) \right|$

(12) $\vec{X}(t+1) = \vec{D}' \cdot e^{bl} \cdot \cos(2\pi l) + \vec{X}^{*}(t)$

where b is a constant and l is a random number in the range between −1 and 1.

During the optimization phase, an assumption of 50% probability is used to choose between these two mechanisms to update the whales’ position. The mathematical formulation to model this behavior is established as follows:

(13) $\vec{X}(t+1) = \begin{cases} \vec{X}^{*}(t) - \vec{A} \cdot \vec{D}, & \text{if } p < 0.5 \\ \vec{D}' \cdot e^{bl} \cdot \cos(2\pi l) + \vec{X}^{*}(t), & \text{if } p \ge 0.5 \end{cases}$

where p is a random number in [0, 1].

In the exploration phase, the hunt for prey is conducted at random. In contrast to the exploitation phase, a search agent's position is updated by following a randomly chosen search agent. $\vec{A}$ contains a random value that is either greater than 1 or less than −1. These values urge the search agent to move far away from the best whale. With this mechanism and $|\vec{A}| > 1$, WOA can perform a global search and overcome the problem of local optima. Equations 14 and 15 describe the mathematical formulation:

(14) $\vec{D} = \left| \vec{C} \cdot \vec{X}_{rand} - \vec{X} \right|$

(15) $\vec{X}(t+1) = \vec{X}_{rand} - \vec{A} \cdot \vec{D}$

where Xrand indicates a whale that is randomly chosen from the current population.

For BWOA, the sigmoid transfer function is applied to the solution position to convert it to a probability value:

(16) $S(\vec{X}(t+1)) = \dfrac{1}{1 + e^{-10(\vec{X}(t+1) - 0.5)}}$

Finally, the new position is updated using Equation 17.

(17) $\vec{X}(t+1) = \begin{cases} 1, & \text{if } S(\vec{X}(t+1)) \ge rand \\ 0, & \text{otherwise} \end{cases}$
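A single BWOA position update (Equations 6 to 17) can be sketched as follows. This is an illustrative NumPy version, not the paper's Matlab code; the spiral constant `b = 1.0` is a placeholder, and applying one scalar exploitation/exploration decision per agent (via `np.all` on $|\vec{A}|$) is a simplification of the vector-valued condition:

```python
import numpy as np

def bwoa_step(X, X_best, t, max_iter, b=1.0, rng=None):
    # One BWOA update for a population X of shape (n_agents, dim);
    # X_best is the best continuous position found so far.
    rng = rng or np.random.default_rng()
    a = 2.0 - t * (2.0 / max_iter)              # Eq. 10: decreases 2 -> 0
    new_X = np.empty_like(X, dtype=float)
    for i, x in enumerate(X):
        r = rng.random(x.shape)
        A, C = 2.0 * a * r - a, 2.0 * r          # Eqs. 8-9
        p, l = rng.random(), rng.uniform(-1.0, 1.0)
        if p < 0.5:
            if np.all(np.abs(A) < 1.0):          # exploitation: encircle prey
                D = np.abs(C * X_best - x)       # Eq. 6
                new_X[i] = X_best - A * D        # Eq. 7
            else:                                # exploration: random agent
                x_rand = X[rng.integers(len(X))]
                D = np.abs(C * x_rand - x)       # Eq. 14
                new_X[i] = x_rand - A * D        # Eq. 15
        else:                                    # spiral update toward prey
            D = np.abs(X_best - x)               # Eq. 11
            new_X[i] = D * np.exp(b * l) * np.cos(2 * np.pi * l) + X_best  # Eq. 12
    # Eqs. 16-17: sigmoid transfer, then stochastic binarization.
    s = 1.0 / (1.0 + np.exp(-10.0 * (new_X - 0.5)))
    return (s >= rng.random(new_X.shape)).astype(int)
```

In a wrapper feature selector, the returned 0/1 rows are the candidate descriptor subsets that the k-NN fitness function then scores.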

Binary Manta Ray Foraging Optimization Algorithm

The manta ray foraging optimization (MRFO) algorithm was recently proposed by Zhao, Zhang, and Wang (Citation2020), inspired by manta ray foraging. MRFO comprises three foraging behaviors: chain foraging, cyclone foraging, and somersault foraging. Manta rays dine on plankton, small fish, and small shrimp. The first binary MRFO (BMRFO) algorithm was proposed by Ghosh et al. (Citation2021) using transfer functions.

The mathematical formulation of the three MRFO foraging strategies is described in the following. (1) Chain foraging: manta rays swim in an orderly line toward the position of the observed plankton. If a former manta ray (search agent) misses the plankton, a subsequent manta ray (search agent) scoops it up. Highly concentrated plankton at a position signifies a better position. The chain foraging mathematical model is represented in Equation 18:

(18) $x_{id}^{t+1} = \begin{cases} x_{id}^{t} + r(x_{best,d}^{t} - x_{id}^{t}) + \alpha(x_{best,d}^{t} - x_{id}^{t}), & \text{if } i = 1 \\ x_{id}^{t} + r(x_{(i-1)d}^{t} - x_{id}^{t}) + \alpha(x_{best,d}^{t} - x_{id}^{t}), & \text{if } i = 2, \dots, N \end{cases}$

(19) $\alpha = 2r\sqrt{\left| \log r \right|}$

where $x_{id}^{t}$ denotes the position of the search agent, i is the order of the manta ray, d denotes the search space dimension, t the iteration number, and r is a random vector in [0, 1]. α represents the weight coefficient. The position with the highest plankton concentration is denoted $x_{best,d}$ and is assumed to be the best solution in MRFO. (2) Cyclone foraging: a manta ray (search agent) moves spirally toward the plankton while swimming toward the manta ray ahead of it in a head-to-tail chain. The mathematical model of the spiral-shaped movement is defined as follows:

(20) $x_{id}^{t+1} = \begin{cases} x_{best,d}^{t} + r(x_{best,d}^{t} - x_{id}^{t}) + \beta(x_{best,d}^{t} - x_{id}^{t}), & \text{if } i = 1 \\ x_{best,d}^{t} + r(x_{(i-1)d}^{t} - x_{id}^{t}) + \beta(x_{best,d}^{t} - x_{id}^{t}), & \text{if } i = 2, \dots, N \end{cases}$

(21) $\beta = 2e^{r_1 \frac{T - t + 1}{T}} \sin(2\pi r_1)$

where T is the maximum number of iterations, β is a weight coefficient, and $r_1$ is a random vector in [0, 1]. A new random position far from the current best one is assigned to each search agent to promote an extensive global search in MRFO. Equations 22 and 23 express the mathematical model:

(22) $x_{rand,d} = Lb_d + r(Ub_d - Lb_d)$

(23) $x_{id}^{t+1} = \begin{cases} x_{rand,d}^{t} + r(x_{rand,d}^{t} - x_{id}^{t}) + \beta(x_{rand,d}^{t} - x_{id}^{t}), & \text{if } i = 1 \\ x_{rand,d}^{t} + r(x_{(i-1)d}^{t} - x_{id}^{t}) + \beta(x_{rand,d}^{t} - x_{id}^{t}), & \text{if } i = 2, \dots, N \end{cases}$

where $x_{rand,d}$ indicates the random position of the search agent, $Lb_d$ and $Ub_d$ are the lower and upper boundaries, and d denotes the dimension of the search space. (3) Somersault foraging: the position of the best plankton found so far is used as a pivot. Each search agent swims back and forth around the pivot and somersaults to a new position. Equation 24 shows the mathematical model:

(24) $x_{id}^{t+1} = x_{id}^{t} + S\,(r_2 x_{best,d} - r_3 x_{id}^{t}), \quad i = 1, 2, \dots, N$

where S is the somersault factor, and $r_2$ and $r_3$ are random numbers in [0, 1].

For BMRFO, the sigmoid transfer function is applied to the solution position to convert it to a probability value:

(25) $S(x_{id}^{t+1}) = \dfrac{1}{1 + e^{-10(x_{id}^{t+1} - 0.5)}}$

Finally, the new position is updated using Equation 26.

(26) $x_{id}^{t+1} = \begin{cases} 1, & \text{if } S(x_{id}^{t+1}) \ge rand \\ 0, & \text{otherwise} \end{cases}$
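As a representative piece of BMRFO, the somersault foraging step (Equation 24) followed by binarization (Equations 25 and 26) can be sketched as below. This is an illustrative NumPy version; the somersault factor `S = 2.0` is an assumed value, since the actual parameter settings are given in Table 2:

```python
import numpy as np

def somersault_step(X, x_best, S=2.0, rng=None):
    # Somersault foraging (Eq. 24): each agent in X (n_agents, dim)
    # oscillates around the pivot x_best, the best position so far.
    rng = rng or np.random.default_rng()
    r2, r3 = rng.random(X.shape), rng.random(X.shape)
    new_X = X + S * (r2 * x_best - r3 * X)
    # Eqs. 25-26: sigmoid transfer, then stochastic binarization.
    s = 1.0 / (1.0 + np.exp(-10.0 * (new_X - 0.5)))
    return (s >= rng.random(X.shape)).astype(int)
```

The chain and cyclone foraging steps (Equations 18 to 23) follow the same pattern: a continuous position update followed by the same sigmoid-and-threshold binarization.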

Application of BPSO, BWOA, and BMRFO for Feature Selection

Maximizing the classification accuracy and minimizing the feature size are the two main goals of the feature selection technique (M. Mafarja and Mirjalili Citation2018). Since a wrapper-based feature selection technique is used, the evaluation process includes a learning algorithm for classification. The k-Nearest Neighbor (k-NN) algorithm (Altman Citation1992) with the Euclidean distance metric and k = 5 is used in this study (Eid Citation2018; M. Mafarja et al. Citation2019b). The k-NN algorithm is chosen because of its satisfactory results and speedy processing.

An optimal feature subset should have a minimal classification error rate and a small feature subset size. A fitness function for feature selection is designed to balance the two criteria. The fitness function for evaluating the solutions is presented in Equation 27:

(27) $Fitness = \alpha\,\gamma_R(D) + \beta\,\dfrac{|R|}{|C|}$

where $\gamma_R(D)$ is the classification error rate, $|R|$ denotes the length of the selected feature subset, and $|C|$ indicates the total number of features in the original dataset. The parameters α and β correspond to the importance of classification quality and feature subset length, where $\alpha \in [0, 1]$ and $\beta = 1 - \alpha$ (Emary, Zawbaa, and Hassanien Citation2016a; Sharawi et al. Citation2017). In this study, the classification metric is the most important, so α is set to 0.99 (Hussien et al. Citation2019; Houssein et al. Citation2020; M. M. Mafarja and Mirjalili Citation2019).
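Equation 27 reduces to a few lines of code. This sketch assumes the k-NN error rate has already been computed for the candidate subset; in the wrapper loop, that error comes from training and testing k-NN on the selected descriptors:

```python
def fitness(error_rate, n_selected, n_total, alpha=0.99):
    # Wrapper fitness (Equation 27): weighted sum of the classifier
    # error rate and the selected-feature ratio |R| / |C|.
    beta = 1.0 - alpha
    return alpha * error_rate + beta * (n_selected / n_total)
```

For example, with `alpha = 0.99` two subsets with the same error rate are ranked by size alone, so `fitness(0.2, 50, 1185)` is smaller (better) than `fitness(0.2, 100, 1185)`.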

Experimental Dataset Preparation

The molecule id attribute is excluded during the experiment (refer to Table 1). In all experiments, hold-out validation was employed: 80% of the samples were chosen randomly as the training set and the remaining 20% were used as the testing set. This partitioning was also applied in several works in the literature (M. Mafarja et al. Citation2019a; M. Mafarja and Mirjalili Citation2018).
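The random 80/20 hold-out partition described above can be sketched as follows; this is an illustrative NumPy version (the paper's experiments use Matlab), and redrawing the split per run is done by passing a fresh generator:

```python
import numpy as np

def holdout_split(n_samples, train_frac=0.8, rng=None):
    # Random hold-out partition of sample indices: the first
    # train_frac of a random permutation is the training set,
    # the remainder the testing set.
    rng = rng or np.random.default_rng()
    idx = rng.permutation(n_samples)
    cut = int(round(train_frac * n_samples))
    return idx[:cut], idx[cut:]
```

For the 7190-molecule dataset this yields 5752 training and 1438 testing samples per run.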

Parameter Settings

Table 2 shows the specific parameter settings utilized in the binary SI algorithms as feature selectors. For a fair comparison, this study fixed the number of iterations (t) at 70 for all algorithms, and the number of search agents (n) was chosen as 5. The problem dimension (d) equals the number of original features in the dataset, in this case 1185.

Table 2. BPSO, BWOA, and BMRFO parameters setting

Evaluation Criteria

The experimental results are reported as the mean of the metrics obtained from 15 independent runs (M) to obtain statistically valid results. To ensure the consistency and statistical significance of the results, the data partitioning is repeated in each independent run. All algorithms are implemented and analyzed in Matlab R2019b and executed on an Intel Core i7-6700 machine with a 3.40 GHz CPU, the Windows 10 operating system, and 16 GB of RAM.

The following evaluation metrics, employed in (Emary, Zawbaa, and Hassanien Citation2016b; Hussien, Hassanien, and Houssein Citation2017), are implemented and recorded from the testing data in each run:

(28) $Mean\_Accuracy = \dfrac{1}{M} \sum_{j=1}^{M} \dfrac{1}{N} \sum_{i=1}^{N} Match(C_i, L_i)$

where M is the total number of runs, N indicates the total number of instances in the testing set, $C_i$ is the label predicted by the classifier for instance i, $L_i$ is the actual class label of instance i, and Match is a function that checks whether $C_i$ and $L_i$ are the same, outputting 1 if they are identical and 0 otherwise.

(29) $Best\_fitness = \min_{i=1}^{M} g_i$

(30) $Worst\_fitness = \max_{i=1}^{M} g_i$

(31) $Mean\_fitness = \dfrac{1}{M} \sum_{i=1}^{M} g_i$

where M is the total number of runs and $g_i$ is the optimal solution resulting from run i. Best_fitness indicates the smallest fitness value achieved at the maximum iteration by each algorithm over the runs. Worst_fitness denotes the largest fitness value achieved by each algorithm over the runs. Mean_fitness signifies the average fitness value achieved by each algorithm over the runs. The algorithm that achieves the minimal values of Best_fitness, Worst_fitness, and Mean_fitness is considered to have good convergence.

(32) $Standard\_deviation = \sqrt{\dfrac{1}{M-1} \sum_{i=1}^{M} \left( g_i - Mean\_fitness \right)^2}$

(33) $Mean\_Feature\_Selected\_Size = \dfrac{1}{M} \sum_{i=1}^{M} \dfrac{size(g_i)}{D}$

where size($g_i$) is the size of the selected feature subset and D is the number of features in the original dataset.

(34) $Mean\_Computation\_Time = \dfrac{1}{M} \sum_{i=1}^{M} Runtime_i$

where $Runtime_i$ is the computation time in seconds of run i.
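Aggregating the per-run results (Equations 29 to 34) amounts to simple statistics over the M runs. A minimal sketch, assuming the per-run best fitness values, runtimes, and subset sizes have already been collected:

```python
import numpy as np

def summarize_runs(g, runtimes, subset_sizes, D):
    # g: per-run best fitness values g_i over M independent runs.
    g = np.asarray(g, dtype=float)
    return {
        "best": g.min(),                  # Eq. 29
        "worst": g.max(),                 # Eq. 30
        "mean": g.mean(),                 # Eq. 31
        "std": g.std(ddof=1),             # Eq. 32 (divides by M - 1)
        "mean_feature_ratio": np.mean(np.asarray(subset_sizes) / D),  # Eq. 33
        "mean_time": np.mean(runtimes),   # Eq. 34
    }
```

The `ddof=1` argument matches the $M-1$ denominator in Equation 32 rather than NumPy's default population variance.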

Results and Discussion

Table 3 presents the average of the minimum, maximum, mean, and standard deviation of the fitness values, together with the mean computation time to converge. The best result for each method is highlighted in bold. From the results, BPSO achieves the lowest minimum fitness value, whereas BWOA achieves the lowest maximum and mean fitness values. To compare the optimization accuracy and convergence rate of each algorithm more intuitively, the average convergence curves of the three algorithms are plotted in Figure 3. Based on the curves, BMRFO shows the fastest convergence in the early iterations but stagnates from iteration 30 onwards. In contrast, BPSO and BWOA continue to converge and overtake BMRFO at iteration 40, with BPSO and BWOA ultimately achieving the lowest fitness values.

Table 3. Results show the mean of minimum fitness (Min), maximum fitness (Max), mean fitness (Mean), standard deviation (Std), and computation time (CT) obtained by BPSO, BWOA, and BMRFO algorithms

Figure 3. Convergence curves of BPSO, BWOA, and BMRFO algorithms


Moreover, the computation time results show that BPSO was also the fastest algorithm to converge. BPSO attained the lowest fitness within 34.51 seconds, compared to 402.51 seconds for BWOA and 1015.03 seconds for BMRFO. The standard deviations in Table 3 are low for all algorithms, showing that the fitness results deviate little from the average and suggesting that these algorithms provide consistent and robust performance over different runs. Although BMRFO has the lowest standard deviation, it also achieves a high minimum fitness, which demonstrates that BMRFO suffers from premature convergence and stagnation.

The experimental results quantifying the mean accuracy and mean selected feature size attained by BPSO, BWOA, and BMRFO are listed in Table 4. Examining the results, it can be seen that BWOA obtains a mean classification accuracy comparable to BPSO, whereas BMRFO scores the lowest accuracy. On the other hand, BPSO selects the smallest set of relevant features, followed by BWOA and BMRFO.

Table 4. Results show the means of accuracy and selected feature size of BPSO, BWOA, and BMRFO algorithms

Table 4 also shows that k-NN attains a mean classification accuracy of 62.63% when all features in the dataset are utilized. The mean classification accuracy increased by approximately 30% after feature selection with BPSO, BWOA, and BMRFO. In terms of the number of features, Table 4 shows feature reductions of 96.88%, 75.25%, and 70.45% from the original dataset for BPSO, BWOA, and BMRFO, respectively. A smaller, optimal feature subset can enrich the learning and understandability of the classifier model and thereby provide good predictions. In addition, it may also accelerate the classifier's learning and prediction processes.

Table 5 outlines the mean classification accuracies obtained using different classifiers. Additionally, the time taken by each classifier to learn and predict the class label is specified in Table 6. The results from Tables 5 and 6 were averaged and are displayed in Figure 4. The results confirm that BWOA achieves the best classification performance of 77.12% while utilizing only 24.75% of the relevant features selected from the original dataset. BMRFO is in second place with a comparable mean accuracy of 77.10%. BPSO gains the lowest mean accuracy in the table, which indicates that too few features may cause information loss and disadvantage some classifiers. Overall, it is shown that the feature selection technique can improve classifier efficiency in terms of prediction and speed when significant features are provided.

Table 5. Mean classification accuracies with different classifiers

Table 6. Mean classification time in seconds with different classifiers

Figure 4. The average mean classification accuracies and times by all classifiers


Our overall research findings reveal the importance of feature selection for molecular descriptors in the cheminformatics domain, which constantly deals with enormous volumes of chemical data. Specifically, this research recommends an alternative approach in forensic drug toxicology that combines an image processing technique as a feature extractor to form the molecular descriptors (from previous research) with the feature selection technique and machine learning classifiers of the current research, and that is able to reduce the time, cost, and effort of identifying existing and new ATS drugs through their 3D molecular structure. The authors believe that further improvements to the proposed method may yield even more promising results in the future.

Conclusion and Future Works

This paper has demonstrated the advantages of implementing the BPSO, BWOA, and new BMRFO algorithms in wrapper feature selection methods to improve the ATS drug classification task. The 3D-ELMI molecular descriptors dataset is utilized to validate the performance of these three algorithms in selecting significant features without degrading classification accuracy. The experimental results show that BWOA is a proficient feature selector that manages to produce a small, relevant feature subset enabling different classifiers to provide good classification. In the future, this research plans to tune the BWOA parameters, such as the number of search agents and the number of fitness iterations, to examine BWOA with other families of transfer functions, and to evaluate the dataset with other SI-based algorithms available in the literature.

Acknowledgments

This work was supported by Fundamental Research Grant Scheme [FRGS/1/2020/FTMK-CACT/F00461] from the Ministry of Higher Education, Malaysia.

Disclosure statement

The authors do not have any conflict of interest.

References
