Review Article

An investigation in satellite images based on image enhancement techniques

Pages 86-94 | Received 16 Apr 2019, Accepted 24 Sep 2019, Published online: 02 Oct 2019

ABSTRACT

In satellite imagery, enhancement is an active research topic in image processing. The aim of enhancement is to process an image so that the result is more suitable than the original for a specific remote sensing application. Satellite image enhancement techniques offer many options for improving the visual quality of remotely sensed images. In this review, image fusion plays an important role, since it effectively combines auxiliary image content to enhance the information contained in the individual datasets. This article provides an overview of existing enhancement techniques. Many techniques proposed for enhancing digital images may also be applied to satellite images. Here, a survey of various satellite image enhancement techniques is performed, which finds that fusion-based enhancement performs better than non-fusion-based enhancement techniques.

Introduction

In satellite imaging, hyperspectral image enhancement is an active research topic in remotely sensed image processing. Hyperspectral image classification has become a challenging problem due to mixed pixels, which can be alleviated by enhancing the hyperspectral image and unmixing the classes. Developing and implementing enhancement techniques requires adequate knowledge of the existing problems and of the acquired hyperspectral image. Enhancement of hyperspectral images plays a vital role in separating pure pixels from mixed ones. Despite many significant advances in the field of enhancement, unmixing classes from a hyperspectral image remains challenging due to its high dimensionality, low spatial resolution and mixed pixels. This article reviews the most relevant existing enhancement methods.

Review of hyperspectral image enhancement methods

In recent decades, many researchers in the field of hyperspectral imaging have developed significant approaches for the enhancement of images in hyperspectral technology. Hyperspectral imagery is typically collected and represented as a data cube, with spatial information in the X-Y plane and spectral information along the Z direction. Most sensors operate either in panchromatic mode or in hyperspectral mode. A panchromatic image consists of only one band. It is usually displayed as a grayscale image, i.e. the displayed brightness of a particular pixel is proportional to the intensity of solar radiation reflected by the targets in that pixel. Thus, a panchromatic image can be interpreted as a black-and-white aerial photograph of the scene.

A panchromatic-mode sensor gives a high spatial resolution image, but it lacks spectral resolution and does not contain any color information. Therefore, fusion-based enhancement techniques are used to obtain both spatial and spectral information from the imagery. Since the panchromatic image does not cover the same spectral range as the hyperspectral image, details extracted from the panchromatic image can introduce spectral distortions with unclear foreground and background information. Therefore, a model is required to inject the extracted details in such a way that the spatial quality of the hyperspectral image is improved while its spectral quality remains unchanged and edge information stays sharp. An algorithmic approach to obtaining high-resolution images is to fuse the bands with rich entropy, thereby increasing the spatial information. Many sensing platforms are equipped to capture a high spectral, low spatial resolution hyperspectral image as well as a low spectral, high spatial resolution auxiliary image (i.e. a panchromatic image).
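
The entropy-driven band selection idea above can be sketched in a few lines. This is an illustrative sketch, not a method from any cited paper; the function names and the assumption that intensities lie in [0, 1] are ours:

```python
import numpy as np

def band_entropy(band, bins=256):
    """Shannon entropy (bits) of a band's intensity histogram."""
    hist, _ = np.histogram(band, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

def select_rich_bands(cube, k=3):
    """Return indices of the k bands with the highest entropy.

    cube: (bands, H, W) array with values in [0, 1].
    """
    scores = [band_entropy(cube[b]) for b in range(cube.shape[0])]
    return sorted(np.argsort(scores)[-k:].tolist())
```

Bands whose histograms carry the most information are then natural candidates for fusion with the panchromatic image.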

Figure 1. Various satellite image enhancement approaches.


Generally, enhancement algorithms fall into two categories, namely non-fusion-based and fusion-based. The various approaches to enhancement methods are shown in Figure 1.

Non-fusion-based enhancement methods

Non-fusion-based enhancement methods focus on the spatial resolution of hyperspectral imaging systems. Three different non-fusion-based enhancement methods are described in the literature, namely the Spectral Mixture Analysis-based, learning-based and matting-based methods.

Spectral mixture analysis approach

Spectral Mixture Analysis (SMA) is a soft-classification approach that models the total reflectance in a pixel as a linear combination of the reflectance from each class using the Linear Mixture Model (LMM), which predicts the proportion of each class within each pixel. A variety of approaches based on SMA using the LMM have been proposed to address the problem of spatial resolution in hyperspectral images. SMA-based sub-pixel processing, in which the spatial dependencies of materials in mixed pixels are not considered, acts as an initial stage for the spatial resolution enhancement of hyperspectral images (Ruescas et al., 2010).
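
As a concrete illustration of the LMM, the following sketch estimates per-pixel abundances with ordinary least squares plus a soft sum-to-one constraint. It is a simplified stand-in for the fully constrained solvers used in the SMA literature, and all names are assumed:

```python
import numpy as np

def unmix_pixel(spectrum, endmembers):
    """Estimate abundance fractions under the linear mixture model.

    spectrum:   (bands,) observed pixel spectrum
    endmembers: (bands, classes) pure-class spectra, one column per class
    Solves min ||E a - x|| with a soft sum-to-one constraint appended as
    an extra equation, then clips negatives and renormalizes (a crude
    stand-in for a fully constrained least-squares solver).
    """
    bands, classes = endmembers.shape
    weight = 10.0  # strength of the sum-to-one constraint row
    E = np.vstack([endmembers, weight * np.ones((1, classes))])
    x = np.append(spectrum, weight)
    a, *_ = np.linalg.lstsq(E, x, rcond=None)
    a = np.clip(a, 0.0, None)
    return a / a.sum()
```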

Brown, Gunn, and Lewis (1999) have proposed a linear SVM approach, which estimates land cover components by sub-pixel processing. This model automatically selects the relevant pure pixels and determines the number of classes in the region of interest, providing an accurate representation of land covers. Atkinson (2005) proposed an algorithm for the spatial resolution enhancement of hyperspectral images using sub-pixel target mapping. High-resolution pixels are placed based on spatial correlation using a distance-weighted function.

In the algorithm proposed by Villa, Chanussot, Benediktsson, and Jutten (2011), spectral unmixing is performed to determine the proportion of endmembers in each pixel. Sub-pixels in the model are located by spatial resolution mapping performed with simulated annealing. However, the limitation is the high computational load caused by the large number of bands present in hyperspectral images. Even though SMA methods provide the abundances of the endmembers within a pixel, which is very useful for determining the presence of an object in remote sensing applications, their limitation is that they do not exploit spatial and spectral information to their full capacity.

Learning-based approach

The second class of non-fusion-based methods is the learning-based method, in which a set of training images is used to learn the super-resolution attributes. Based on the learning method, it is categorized into the Hopfield Neural Network (HNN) and the Back Propagation Neural Network (BPNN). A method by Gu, Zhang, and Zhang (2008) obtains the abundance map using linear SMA based on the spatial correlation of land covers. In this method, low-resolution training images are used to determine the parameters of the super-resolution mapping.

Han et al. (2019) applied a similar approach, which uses low-resolution images and their down-sampled versions to train a BPNN. A mean filter is used for the down-sampling, and super-resolution is performed by considering the spatial correlation of the different materials present in hyperspectral images. Hence, the hyperspectral images themselves serve as the training data, achieving better coherence between the enhanced and the original hyperspectral images.

Zhang and Mishra (2014) implemented a support vector regression approach that does not use an explicit formula to describe prior information about the nonlinear relationships between the coarse fractional pixels and the labelled sub-pixels from the best-matching high-resolution training data. Due to such limitations, learning-based methods are hardly used in practice.

Matting-based approach

The third class of non-fusion-based methods is the matting-based approach, which extracts the foreground object from an image. Levin, Rav-Acha, and Lischinski (2008) proposed an alpha matting model in which the colors of the foreground and the background are assumed to vary linearly inside a small patch. The result is imperfect when user input is insufficient. Wang and Suter (2007) introduced a model in which the data terms are based on color models of the foreground and background regions. However, the result degrades when some dark-green areas in the image background are semi-transparent layers, i.e. dark green is a mix of a dark foreground with a green background.

Chen, Zou, Zhiying Zhou, Zhao, and Tan (2013) proposed image matting with local and nonlocal smooth priors. In this method, editing propagation essentially introduces a nonlocal smooth prior on the alpha matte in which the manifold is preserved. The matting Laplacian provides a complementary local smoothness prior, and hence, for natural matting, it is combined with a data term from color sampling. The color distribution is similar in the foreground and background images. A limitation is that it is not easy to set a common window size for all test data; with the help of the nonlocal smoothness constraint, however, this method generalizes well with a small fixed window size.

A matting technique called KNN matting was proposed for ordinary images by Chen et al. (2013), with a closed-form solution that can harness the preconditioned conjugate gradient method and runs in a few seconds after accepting very sparse user mark-ups. Xu, Price, Cohen, and Huang (2017) proposed a deep image matting model to obtain high-level context and use high-level features. In this method, a neural network captures higher-order features, resulting in higher computational complexity.

Even though the non-fusion-based methods provide a good solution to hyperspectral image enhancement by gaining information while extracting the foreground from its background images, high spatial resolution is not obtained. In order to have high spatial information, fusion-based enhancement is needed.

Fusion-based enhancement methods

In fusion-based enhancement methods, a high spatial resolution scene is generated by fusing a low spatial resolution hyperspectral image with auxiliary information. Fusion-based enhancement methods are classified into the component substitution, numerical and statistical, multiresolution and optimization-based approaches.

Component substitution (CS) approach

The most popular methods are Intensity-Hue-Saturation (IHS) color transformation and Principal Component Substitution. A very popular technique is IHS (Malpica, 2007). Color enhancement, feature enhancement and improvement of spatial resolution are the standard procedures in image analysis (Pohl & Van Genderen, 1998). This technique converts a color image from RGB space to the IHS color space, where the intensity band is replaced by the auxiliary information. The method is very efficient to implement, but it produces color distortion because the auxiliary information is not created from the same wavelengths of light as the RGB image. Therefore, the method has been modified into the Fast Intensity-Hue-Saturation (FIHS) method (Tu, Huang, Hung, & Chang, 2004).

The modification performed in the FIHS method is that it extends the IHS method from three bands to four by incorporating an infrared component, because the auxiliary information is taken from infrared light in addition to visible wavelengths. This modification allows the calculated intensity to better match the auxiliary information, thus causing less color distortion in the fused image. The trade-off between spatial improvement and spectral quality loss has received a lot of attention and led to the introduction of trade-off parameters (Tu et al., 2004). These parameters allow fine-tuning by the user to obtain the desired result.
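
The additive FIHS idea described above — inject the difference between the panchromatic band and the computed intensity into every channel — can be sketched as follows. This is a minimal illustration; the array shapes and the simple mean-based intensity are our assumptions:

```python
import numpy as np

def ihs_fuse(ms, pan):
    """Fast IHS-style fusion: inject (pan - intensity) into each band.

    ms:  (3, H, W) upsampled multispectral image (R, G, B) in [0, 1]
    pan: (H, W) panchromatic band at the same (high) resolution
    """
    intensity = ms.mean(axis=0)          # simple I of the IHS triplet
    detail = pan - intensity             # spatial detail to inject
    return np.clip(ms + detail[None, :, :], 0.0, 1.0)
```

When the panchromatic band and the computed intensity cover different spectral ranges, the injected detail is exactly the source of the color distortion discussed above.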

To overcome the spectral quality problems, researchers have proposed the Adaptive IHS (AIHS) method (Rahmani, Strait, Merkurjev, Moeller, & Wittman, 2010), which adaptively adjusts the coefficients of the linear combination of multispectral bands. The weights induced by the edge-injection process in the spatial detail are too large, which results in color changes and thus spectral distortion; in addition, the edge-induced weights reduce the sharpness of the fused image. An improved method, Improved AIHS (Leung, Liu, & Zhang, 2014), uses a more adaptive weighting matrix in the spatial detail injection step and performs better than AIHS. However, edges of high-reflection areas are more prone to distortion, and hence the overall spectral distortion is higher.

Another technique (Dehnavi & Mohammadzadeh, 2013) incorporates the Generalized IHS, Brovey Transform (BT) and Smoothing Filter-based Intensity Modulation (SFIM) using two adjustable parameters. This modulation approach is the most frequently employed one, in which the spatial and spectral information are controlled. It preserves more spectral information but suffers greater spatial information loss.

Hubert et al. (2005) proposed a fusion technique based on Principal Component Analysis (PCA). This approach partitions the dataset into sub-groups of bands, thereby reducing computational complexity. PCA is applied to each sub-group based on dominant classes. The spectral signature of a class is used as the transfer function of a matched filter applied to the corresponding bands of the dataset, and the principal component of each sub-group is used as a component of the final RGB image. Since the energy is not uniformly distributed across groups, some color distortion remains, which reduces visual quality.

Qu et al. (2018) proposed a structure tensor-based algorithm for hyperspectral and panchromatic image fusion. In this algorithm, an image enhancement approach is used to sharpen the spatial information of the panchromatic image, and the spatial details of the hyperspectral image are obtained using an adaptive weight method. The structure tensor is introduced to extract spatial details of the enhanced panchromatic image. To avoid artifacts at the boundaries, a guided filter is applied to the integrated spatial information image, and an injection matrix is constructed to reduce spectral and spatial distortion. This algorithm provides more spatial details while preserving the spectral information. Xie et al. (2019) proposed an enhancement algorithm using a multispectral and hyperspectral fusion model based on the observation models. In this method, all parameters can be learned from the training data, and the spatial and spectral response operators are discovered. The algorithm provides color and brightness much closer to the low-resolution hyperspectral image. Jayanth, Kumar, and Koliwad (2018) proposed an enhancement algorithm using regionally weighted principal component analysis and a wavelet algorithm, in which spectral information is preserved with improved spatial quality and good clarity. Parveen, Kulkarni, and Mytri (2018) proposed an image enhancement algorithm for low-resolution satellite images that improves interpretation and makes the image visually clear.

Component substitution-based approaches focus on constructing an ideal image intensity and a high-frequency injection model to preserve spectral information, but they incur more spatial information loss, sharpness reduction and increased spectral distortion. Some algorithms can be applied only to a specific sensor, although a few commercially available fusion software tools have proven suitable for all available optical panchromatic and multispectral images. In addition, these tools have greater potential to improve the spectral quality, although they only show visually prominent results.

Multi-resolution approach

The Multi-Resolution Approach (MRA) merges the spatial information from a high-resolution image with the radiometric information from a low-resolution image; the process sharpens the low-resolution image. In recent years, powerful MRA techniques such as wavelets and curvelets have become popular because of the increase in computational power and the availability of algorithms in commercial remote sensing software. Fusion based on the multiresolution contourlet transform was proposed by Miao and Wang (2006). In this approach, directional image pyramids up to a certain level are first obtained using the contourlet decomposition. The low-frequency coefficients at the top of the image pyramids are fused using an average-based rule. At the remaining levels, the fusion rule selects the coefficients from the source image that have higher energy in the local region.
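
The higher-local-energy fusion rule used for the detail levels can be illustrated independently of the contourlet machinery. This sketch applies the rule to any pair of detail-coefficient planes; the window size and edge padding are our assumptions:

```python
import numpy as np

def fuse_detail(c1, c2, win=3):
    """Per coefficient, keep whichever source has higher local energy.

    c1, c2: detail-coefficient planes of the same shape.
    """
    def local_energy(c):
        pad = win // 2
        p = np.pad(c * c, pad, mode="edge")
        # sum of squared coefficients over the win x win neighbourhood
        e = np.zeros_like(c)
        for i in range(win):
            for j in range(win):
                e += p[i:i + c.shape[0], j:j + c.shape[1]]
        return e

    return np.where(local_energy(c1) >= local_energy(c2), c1, c2)
```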

MRA-based approaches decompose images into a number of channels depending on the local frequency content (Nunez et al., 1999). The pyramid represents a multi-scale model of the original image; with each increasing level, the original image is approximated at a coarser spatial resolution. Between the individual pyramid levels, the transform is performed using wavelet and curvelet transforms. The wavelet transform approach is based on substitution or addition. In the substitution approach, selected multispectral wavelet planes are substituted by the planes of the corresponding panchromatic images. In the addition approach, the decomposed panchromatic planes are added to the multispectral bands.
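
The substitution approach can be illustrated with a single-level Haar transform: keep the multispectral band's approximation plane and take the detail planes from the panchromatic image. This is a minimal sketch, not the exact decomposition used in any cited method:

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) planes."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2(a, h, v, d):
    """Exact inverse of haar2."""
    H, W = a.shape
    out = np.empty((2 * H, 2 * W))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a - h + v - d
    out[1::2, 0::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def wavelet_substitute(band, pan):
    """Keep the band's approximation, take the detail planes from pan."""
    a_band, *_ = haar2(band)
    _, h, v, d = haar2(pan)
    return ihaar2(a_band, h, v, d)
```

A practical implementation would use several decomposition levels and a richer wavelet, but the substitution step is the same at every level.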

Garzelli, Nencini, Alparone, and Baronti (2005) proposed a fusion method based on multiresolution analysis, describing how high-pass information is modelled from the panchromatic image. The basic wavelet transform substitution sub-bands are low-low, low-high, high-low and high-high; these decompositions form the pyramid at several levels, and the fused coefficients are then inverse transformed. In practice, the wavelet transform and scaling functions are not explicitly derived (Amolins, Zhang, & Dare, 2007); they are described by coefficients, which are fused using different fusion rules to produce the resultant image. Better results are obtained when the fusion process is context-driven (Aiazzi et al., 2002). The aim is to make the fused bands as similar as possible to what the narrow-band multispectral sensor would image at the same resolution as the broadband sensor acquiring the single panchromatic band. To achieve gain equalization, the higher frequency coefficients taken from the high-resolution image are selected based on statistical congruence and weighted by a space-varying factor. Ringing artifacts are largely moderated. Here, spectral signatures of small size may be restored (Aiazzi, Alparone, Barducci, Baronti, & Pippi, 2001), even though a heavily smeared image is obtained.

Pradhan, King, Younan, and Holcomb (2006) proposed a multiresolution analysis extended to discrete functions. It conserves space and determines the best possible number of decomposition levels required for merging images with a particular resolution ratio. If the resolution ratio is high, more decomposition levels are needed to produce better results, at the cost of higher computational complexity. More recently, the Contourlet Transform (CT) was applied by Metwalli et al. (2014). This transform captures and links discontinuity points into linear structures, and it can have a different number of directions at each scale of the multiresolution decomposition. The non-subsampled CT works on a non-subsampled pyramid and produces better results. This technique can also be found as a hybrid component together with PCA and IHS (Xiao-Hui, 2008).

The Proportional Additive Wavelet and the Laplacian-based context-based decision method were considered good image fusion approaches during the data-fusion contest, performing better than CS-based methods (Alparone, Aiazzi, Baronti, Garzelli, & Nencini, 2006). However, in MRA-based fusion, spatial distortions may occur because of aliasing effects and the blurring of textures, and the spatial enhancement is not satisfactory compared with CS-based methods (Aiazzi, Alparone, Baronti, Garzelli, & Selva, 2006). MRA (Mallat, 1989) provides effective tools, such as wavelets and pyramids, to carry out image merging tasks. However, in the case of high-pass detail injection into an image, spatial distortions, aliasing effects, and shifts or blurring of contours and textures may occur (Yocky, 1996). These disadvantages, which may be as annoying as spectral distortions, are emphasized by mis-registration between MS and panchromatic data, especially if the MRA underlying the detail injection is not shift-invariant (Aiazzi et al., 2002; González-Audícana, Saleta, Catalán, & García, 2004). Wenyan, Zhenhong, Yu, Yang, and Kasabov (2018) proposed an enhancement algorithm based on equal-weight image fusion, which improves the accuracy of change detection but offers less visual quality. To avoid these problems, the numerical and statistical-based approach, which gives more efficient outputs, has been proposed.

Numerical and statistical-based approach

The simplest and earliest methods used in remote sensing are mathematical combinations of different images; addition, subtraction, multiplication and division approaches play an important role in Earth observation. One such approach is the subtractive resolution merge; the influence of user-defined versus predefined calculated band weights in this technique, which produced no difference in the result, was analyzed by Ashraf, Brabyn, and Hicks (2013). A classical technique is the BT, based on spectral modelling, which normalizes the input bands through subtraction and addition. A major drawback is the color distortion induced by BT. A modification of BT is colour-normalized spectral sharpening (Vrabel, 2000). This method groups the input bands into spectral segments and is, therefore, an adaptive approach that improves the spectral quality of the fused images.
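
For reference, the classical Brovey Transform amounts to a per-pixel ratio modulation; the sketch below follows the standard formulation, with a small `eps` guard against division by zero added as our own assumption:

```python
import numpy as np

def brovey(ms, pan, eps=1e-6):
    """Brovey transform: modulate each band by pan / total intensity.

    ms:  (bands, H, W) multispectral image; pan: (H, W) panchromatic band.
    The ratio normalization preserves the relative band proportions
    while injecting the panchromatic image's spatial detail.
    """
    total = ms.sum(axis=0) + eps
    return ms * (pan / total)[None, :, :]
```

Because the band proportions are preserved exactly while the per-pixel total is forced toward the panchromatic value, any spectral mismatch between pan and the band sum shows up directly as the color distortion noted above.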

Another modification is the modified BT, based on local modulation of the multispectral image by the ratio of the new and initial intensity components (Chibani, 2007). A variational model was formulated by Ballester, Caselles, Igual, Verdera, and Rougé (2006), which describes the relationship between the lower resolution multispectral image and the high-resolution panchromatic image using subsampling and filtering. It assumes that the multispectral image, together with its geometry, is contained in the panchromatic image. This method was recently extended by Duran, Coll, and Sbert (2013) to adapt the process by considering local relationships of neighbouring pixels, which has a denoising effect.

For commercial purposes, one statistical algorithm in use is Fuze Go, a pansharpening algorithm. It uses a least-squares fit between the gray values of the input bands, and the output values are estimated with statistical methods (Xu et al., 2014). Its strength is that the fully automated process allows even inexperienced users to achieve good results, and the input images are treated individually to find the best match (Zhang & Mishra, 2014). Devika and Parthasarathy (2018) proposed a fuzzy statistics-based technique for enhancing satellite images, which results in efficient and accurate fuzzy clustering. Therefore, to enhance the contrast, techniques that jointly perform the combination operation are required.

Optimization-based approach

Optimization techniques maximize or minimize a real function by choosing inputs within an allowable set. To solve optimization-based problems, algorithms or iterative methods that converge to a finite solution are used. An optimization-based approach was used for fusing multi-exposure optical images by Raman and Chaudhuri (2007), where a set of images is fused to enhance the dynamic range of the output image. However, because smoothness is incorporated into the cost function, the result is a smooth solution. A fast approach for fusion of hyperspectral images through redundancy elimination was proposed by Kotwal and Chaudhuri (2010). In this method, a selected set of mutually correlated image bands retains most of the information in the data; as only a fraction of the entire data is fused, the method is computationally much faster. A new approach for visualization-based fusion of hyperspectral image bands was proposed by Kotwal and Chaudhuri (2012); here, the geological input data have very low intrinsic contrast and are difficult to visualize.

Xu, Zhang, Li, and Ding (2015) proposed a Gram-Schmidt approach, which generates a simulated lower resolution pan image through a weighted sum of the green, blue, red and near-infrared multispectral bands. In spatial and spectral evaluations, the results are blurred in all band combinations, and color distortion and strange artifacts are introduced. In addition, this transformation is computationally intensive, and hence it takes more time to generate output images. Wang et al. (2013) proposed a projected gradient approach based on unmixing-based non-negative matrix factorization. This method produces a fused image with high spectral and spatial resolution, improving the spatial resolution without losing much color information. Rajathurai A and Chellakon H S (2018) proposed a KNN matting model with a closed-form solution that leverages existing approaches by producing efficient multilayer visualization extraction results with reduced computational complexity.
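
The simulated low-resolution pan image used in such Gram-Schmidt-style methods is a weighted band sum, and the least-squares fit mentioned for statistical approaches can recover those weights from data. The sketch below is illustrative only; the function names and shapes are our assumptions:

```python
import numpy as np

def simulate_pan(ms, weights):
    """Simulated low-resolution pan as a weighted sum of MS bands.

    ms: (bands, H, W); weights: (bands,) per-band contribution.
    """
    w = np.asarray(weights, dtype=float)
    return np.tensordot(w, ms, axes=1)   # contracts the band axis

def fit_pan_weights(ms, pan):
    """Least-squares band weights so the weighted MS sum matches pan."""
    A = ms.reshape(ms.shape[0], -1).T    # (pixels, bands) design matrix
    b = pan.ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w
```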

Ben Abbes, Bounouh, Farah, de Jong, and Martínez (2018) compared three satellite image time-series decomposition methods for vegetation change detection. The results of the comparative analysis show the better performance of image fusion techniques when compared to non-fusion-based techniques. Kaplan (2018) proposed a weighted intensity-hue-saturation transform algorithm for image enhancement. In this technique, the intensity component is obtained by a weighting function that preserves more information from the bands of the input image, so that visual and quantitative comparisons give superior results.

Hashimoto et al. (2011) proposed a multispectral image enhancement algorithm that gives effective visualization. In this method, the user can independently specify the spectral band from which to extract the spectral feature and the color for visualizing it, so that the desired feature is enhanced in the spectral domain in the specified color. Mozgovoy, Hnatushenko, and Vasyliev (2018) proposed an algorithm for automated recognition of vegetation, waterbodies and territory in satellite images. This algorithm significantly increases the efficiency and reliability of updating maps of large cities, reducing financial cost and minimizing human errors.

Guo, Ma, Bao, and Wang (2018) proposed an algorithm for fusing panchromatic and short-wave infrared bands based on a convolutional neural network. This method effectively enhances the spatial information by separating the basic architecture into three layers. Yadav and Agrawal (2018) proposed an enhancement algorithm for road network identification and extraction in satellite imagery using Otsu's method. It detects and extracts the road network from high-resolution satellite images and enhances the contrast of the image. Md Noor, Ren, Marshall, and Michael (2017) proposed an enhancement algorithm for corneal epithelium injuries in hyperspectral images; this algorithm improves the interpretability of the data into clinically relevant information to facilitate diagnostics.

Gunlu (2014) proposed an enhancement algorithm for the prediction of stand parameters using pan-sharpened IKONOS satellite images. Multiple stepwise regression analysis is used to estimate these stand parameters; it gives highly accurate measurements, but at higher cost and time. Qifal Wang, Jia, Qin, Yang, and Hu (2011) proposed an enhancement technique for multispectral and panchromatic image fusion. This method obtains a high spatial resolution multispectral image with high similarity to the referenced true high-resolution multispectral image. Gewali et al. (2018) proposed an algorithm for hyperspectral image analysis based on machine learning. This algorithm extracts the desired information from intrinsic spectral variation while ignoring the extrinsic variation and the intrinsic variation caused by unrelated factors.

Li et al. (2019) proposed an enhancement algorithm that handles large-scale degraded underwater images. This algorithm is highly desirable in that effective non-reference underwater image quality evaluation metrics are calculated. Maselli, Chiesi, and Pieri (2016) proposed a novel approach for the enhancement of spatial properties that produces NDVI image series; a statistical method is applied to improve the spatial features of the abundance images based on the endmembers. Tiede, Baraldi, Sudmanns, Belgiu, and Lang (2017) proposed an architecture and a prototypical implementation of a semantic querying system for big Earth observation image bases, which enhances the visualization of the images. Gavankar and Ghosh (2018) proposed automatic building footprint extraction from high-resolution satellite images using mathematical morphology. In this approach, buildings of different sizes and shapes can be detected, and falsely detected buildings are eliminated. Lal and Anouncia (2016) proposed an enhanced dictionary-based sparse representation fusion for multi-temporal remote sensing images. A locally adaptive dictionary is created such that it contains patches extracted from the images. This technique preserves the spectral information, color and visual quality of the fused product while limiting errors.

Conclusion

This article summarizes the review of satellite image enhancement methods. Non-fusion-based enhancement methods provide low spatial information with higher computational complexity. To improve the spatial information and reduce the computational complexity, fusion-based methods are preferred. However, existing fusion-based enhancement can still suffer from spectral distortion, limited spatial resolution, and reduced contrast and sharpness. Therefore, an improved fusion technique is necessary to enhance satellite images for better visualization and classification accuracy.

Disclosure statement

No potential conflict of interest was reported by the authors.

References

  • Aiazzi, B., Alparone, L., Barducci, A., Baronti, S., & Pippi, I. (2001). Information-theoretic assessment of sampled hyperspectral imagers. IEEE Transactions on Geoscience and Remote Sensing, 39(7), 1447–1458. doi:10.1109/36.934076
  • Aiazzi, B., Alparone, L., Baronti, S., & Garzelli, A. (2002). Context-driven fusion of high spatial and spectral resolution images based on oversampled multiresolution analysis. IEEE Transactions on Geoscience and Remote Sensing, 40(10), 2300–2312. doi:10.1109/TGRS.2002.803623
  • Aiazzi, B., Alparone, L., Baronti, S., Garzelli, A., & Selva, M. (2006). MTF-tailored multiscale fusion of high-resolution MS and Pan imagery. Photogrammetric Engineering & Remote Sensing, 72(5), 591–596. doi:10.14358/PERS.72.5.591
  • Alparone, L., Aiazzi, B., Baronti, S., Garzelli, A., & Nencini, F. (2006, July). A new method for MS + Pan image fusion assessment without reference. In 2006 IEEE International Symposium on Geoscience and Remote Sensing (pp. 3802–3805). IEEE.
  • Amolins, K., Zhang, Y., & Dare, P. (2007). Wavelet based image fusion techniques—An introduction, review and comparison. ISPRS Journal of Photogrammetry and Remote Sensing, 62(4), 249–263. doi:10.1016/j.isprsjprs.2007.05.009
  • Ashraf, S., Brabyn, L., & Hicks, B.J. (2013). Alternative solutions for determining the spectral band weights for the subtractive resolution merge technique. International Journal of Image and Data Fusion, 4(2), 105–125. doi:10.1080/19479832.2011.607473
  • Atkinson, P.M. (2005). Sub-pixel target mapping from soft-classified, remotely sensed imagery. Photogrammetric Engineering & Remote Sensing, 71(7), 839–846. doi:10.14358/PERS.71.7.839
  • Ballester, C., Caselles, V., Igual, L., Verdera, J., & Rougé, B. (2006). A variational model for P+XS image fusion. International Journal of Computer Vision, 69(1), 43–58. doi:10.1007/s11263-006-6852-x
  • Bayarri, M.J., Berger, J.O., Paulo, R., Sacks, J., Cafeo, J.A., Cavendish, J., … Tu, J. (2007). A framework for validation of computer models. Technometrics, 49(2), 138–154. doi:10.1198/004017007000000092
  • Ben Abbes, A., Bounouh, O., Farah, I.R., de Jong, R., & Martínez, B. (2018). Comparative study of three satellite image time-series decomposition methods for vegetation change detection. European Journal of Remote Sensing, 51(1), 607–615. doi:10.1080/22797254.2018.1465360
  • Brown, M., Gunn, S.R., & Lewis, H.G. (1999). Support vector machines for optimal classification and spectral unmixing. Ecological Modelling, 120(2–3), 167–179. doi:10.1016/S0304-3800(99)00100-3
  • Chen, X., Zou, D., Zhiying Zhou, S., Zhao, Q., & Tan, P. (2013). Image matting with local and nonlocal smooth priors. In 2013 Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, China, (pp. 1902–1907).
  • Chibani, Y. (2007). Integration of panchromatic and SAR features into multispectral SPOT images using the ‘à trous’ wavelet decomposition. International Journal of Remote Sensing, 28(10), 2295–2307. doi:10.1080/01431160600606874
  • Dehnavi, S., & Mohammadzadeh, A. (2013). A new developed GIHS-BT-SFIM fusion method based on edge and class data. ISPRS-International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 1(3), 139–145. doi:10.5194/isprsarchives-XL-1-W3-139-2013
  • Devika, G., & Parthasarathy, S. (2018). Fuzzy statistics-based affinity propagation technique for clustering in satellite cloud image. European Journal of Remote Sensing, 51(1), 754–764. doi:10.1080/22797254.2018.1482731
  • Duran, J., Coll, B., & Sbert, C. (2013). Chambolle’s projection algorithm for total variation denoising. Image Processing on Line, 2013, 311–331. doi:10.5201/ipol.2013.61
  • Garzelli, A., Nencini, F., Alparone, L., & Baronti, S. (2005, July). Multiresolution fusion of multispectral and panchromatic images through the curvelet transform. In Proceedings. 2005 IEEE International Geoscience and Remote Sensing Symposium, 2005, Netherland. IGARSS’05. (Vol. 4, pp. 2838–2841). IEEE.
  • Gavankar, N.L., & Ghosh, S.K. (2018). Automatic building footprint extraction from high-resolution satellite image using mathematical morphology. European Journal of Remote Sensing, 51(1), 182–193. doi:10.1080/22797254.2017.1416676
  • Gewali, U.B., Monteiro, S.T., & Saber, E. (2018). Machine learning based hyperspectral image analysis: A survey. arXiv preprint arXiv:1802.08701.
  • González-Audícana, M., Saleta, J.L., Catalán, R.G., & García, R. (2004). Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition. IEEE Transactions on Geoscience and Remote Sensing, 42(6), 1291–1299. doi:10.1109/TGRS.2004.825593
  • Gu, Y., Zhang, Y., & Zhang, J. (2008). Integration of spatial–spectral information for resolution enhancement in hyperspectral images. IEEE Transactions on Geoscience and Remote Sensing, 46(5), 1347–1358. doi:10.1109/TGRS.2008.917270
  • Günlü, A., Ercanlı, İ., Sönmez, T., & Başkent, E.Z. (2014). Prediction of some stand parameters using pan-sharpened IKONOS satellite image. European Journal of Remote Sensing, 47(1), 329–342. doi:10.5721/EuJRS20144720
  • Guo, M., Ma, H., Bao, Y., & Wang, L. (2018). Fusing panchromatic and SWIR bands based on CNN – A preliminary study over WorldView-3 datasets. International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences, 42, 3.
  • Han, X., Yu, J., Luo, J., & Sun, W. (2019). Hyperspectral and multispectral image fusion using cluster-based multi-branch BP neural networks. Remote Sensing, 11(10), 1173.
  • Hashimoto, N., Murakami, Y., Bautista, P.A., Yamaguchi, M., Obi, T., Ohyama, N., … Kosugi, Y. (2011). Multispectral image enhancement for effective visualization. Optics Express, 19(10), 9315–9329. doi:10.1364/OE.19.009315
  • Hubert, M., Rousseeuw, P.J., & Vanden Branden, K. (2005). ROBPCA: A new approach to robust principal component analysis. Technometrics, 47(1), 64–79.
  • Jayanth, J., Kumar, T.A., & Koliwad, S. (2018). Fusion of multispectral and panchromatic data using regionally weighted principal component analysis and wavelet. Current Science, 115(10), 1938. doi:10.18520/cs/v115/i10/1938-1942
  • Kaplan, N.H. (2018). Weighted intensity hue saturation transform for image enhancement and pansharpening. Turkish Journal of Electrical Engineering and Computer Science, 26(1), 204–219. doi:10.3906/elk-1704-43
  • Kotwal, K., & Chaudhuri, S. (2010, December). A fast approach for fusion of hyperspectral images through redundancy elimination. In Proceedings of the seventh Indian conference on computer vision, graphics and image processing (pp. 506–511). ACM.
  • Kotwal, K., & Chaudhuri, S. (2012). An optimization-based approach to fusion of hyperspectral images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 5(2), 501–509. doi:10.1109/JSTARS.2012.2187274
  • Lal, A.M., & Anouncia, S.M. (2016). Enhanced dictionary based sparse representation fusion for multi-temporal remote sensing images. European Journal of Remote Sensing, 49(1), 317–336. doi:10.5721/EuJRS20164918
  • Leung, Y., Liu, J., & Zhang, J. (2014). An improved adaptive intensity–hue–saturation method for the fusion of remote sensing images. IEEE Geoscience and Remote Sensing Letters, 11(5), 985–989. doi:10.1109/LGRS.2013.2284282
  • Levin, A., Rav-Acha, A., & Lischinski, D. (2008). Spectral matting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(10), 1699–1712. doi:10.1109/TPAMI.2008.168
  • Li, C., Guo, C., Ren, W., Cong, R., Hou, J., Kwong, S., & Tao, D. (2019). An underwater image enhancement benchmark dataset and beyond. arXiv preprint arXiv:1901.05495.
  • Mallat, S.G. (1989). A theory for multiresolution signal decomposition: The wavelet representation. IEEE Transactions on Pattern Analysis & Machine Intelligence, 11(7), 674–693. doi:10.1109/34.192463
  • Malpica, J.A. (2007). Hue adjustment to IHS pan-sharpened IKONOS imagery for vegetation enhancement. IEEE Geoscience and Remote Sensing Letters, 4(1), 27–31. doi:10.1109/LGRS.2006.883523
  • Maselli, F., Chiesi, M., & Pieri, M. (2016). A novel approach to produce NDVI image series with enhanced spatial properties. European Journal of Remote Sensing, 49(1), 171–184. doi:10.5721/EuJRS20164910
  • Md Noor, S., Ren, J., Marshall, S., & Michael, K. (2017). Hyperspectral image enhancement and mixture deep-learning classification of corneal epithelium injuries. Sensors, 17(11), 2644. doi:10.3390/s17112644
  • Metwalli, M.R., Nasr, A.H., Faragallah, O.S., El-Rabaie, E.S.M., Abbas, A.M., Alshebeili, S.A., & Abd El-Samie, F.E. (2014). Efficient pan-sharpening of satellite images with the contourlet transform. International Journal of Remote Sensing, 35(5), 1979–2002. doi:10.1080/01431161.2013.873832
  • Miao, Q., & Wang, B. (2006, April). The contourlet transform for image fusion. In Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications 2006 (Vol. 6242, p. 62420Z). International Society for Optics and Photonics, Florida.
  • Mozgovoy, D.K., Hnatushenko, V.V., & Vasyliev, V.V. (2018). Automated recognition of vegetation and water bodies on the territory of megacities in satellite images of visible and IR bands. ISPRS Annals of Photogrammetry, Remote Sensing & Spatial Information Sciences, 4, 3.
  • Nunez, J., Otazu, X., Fors, O., Prades, A., Pala, V., & Arbiol, R. (1999). Multiresolution-based image fusion with additive wavelet decomposition. IEEE Transactions on Geoscience and Remote Sensing, 37(3), 1204–1211. doi:10.1109/36.763274
  • Parveen, R., Kulkarni, S., & Mytri, V.D. (2018). Automated extraction and discrimination of open land areas from IRS-1C LISS III imagery. International Journal of Computers and Applications, 1–10. doi:10.1080/1206212X.2018.1558937
  • Pohl, C., & Van Genderen, J.L. (1998). Review article multisensor image fusion in remote sensing: Concepts, methods and applications. International Journal of Remote Sensing, 19(5), 823–854. doi:10.1080/014311698215748
  • Pradhan, P.S., King, R.L., Younan, N.H., & Holcomb, D.W. (2006). Estimation of the number of decomposition levels for a wavelet-based multiresolution multisensor image fusion. IEEE Transactions on Geoscience and Remote Sensing, 44(12), 3674–3686. doi:10.1109/TGRS.2006.881758
  • Qu, J., Lei, J., Li, Y., Dong, W., Zeng, Z., & Chen, D. (2018). Structure tensor-based algorithm for hyperspectral and panchromatic images fusion. Remote Sensing, 10(3), 373. doi:10.3390/rs10030373
  • Rahmani, S., Strait, M., Merkurjev, D., Moeller, M., & Wittman, T. (2010). An adaptive IHS pan-sharpening method. IEEE Geoscience and Remote Sensing Letters, 7(4), 746–750. doi:10.1109/LGRS.2010.2046715
  • Rajathurai, A., & Chellakkon, H.S. (2018). Improved visualization using a fusion technique based on KNN matting of remotely sensed images. Journal of the Indian Society of Remote Sensing, 46(2), 179–187. doi:10.1007/s12524-017-0693-7
  • Raman, S., & Chaudhuri, S. (2007, October). A matte-less, variational approach to automatic scene compositing. In 2007 IEEE 11th International Conference on Computer Vision (pp. 1–6), Bombay. IEEE.
  • Ruescas, A.B., Sobrino, J.A., Julien, Y., Jiménez-Muñoz, J.C., Sòria, G., Hidalgo, V., … Mattar, C. (2010). Mapping sub-pixel burnt percentage using AVHRR data. Application to the Alcalaten area in Spain. International Journal of Remote Sensing, 31(20), 5315–5330. doi:10.1080/01431160903369592
  • Tiede, D., Baraldi, A., Sudmanns, M., Belgiu, M., & Lang, S. (2017). Architecture and prototypical implementation of a semantic querying system for big Earth observation image bases. European Journal of Remote Sensing, 50(1), 452–463. doi:10.1080/22797254.2017.1357432
  • Tu, T.M., Huang, P.S., Hung, C.L., & Chang, C.P. (2004). A fast intensity-hue-saturation fusion technique with spectral adjustment for IKONOS imagery. IEEE Geoscience and Remote Sensing Letters, 1(4), 309–312. doi:10.1109/LGRS.2004.834804
  • Villa, A., Chanussot, J., Benediktsson, J.A., & Jutten, C. (2011). Spectral unmixing for the classification of hyperspectral images at a finer spatial resolution. IEEE Journal of Selected Topics in Signal Processing, 5(3), 521–533. doi:10.1109/JSTSP.2010.2096798
  • Vrabel, J. (2000). Multispectral imagery advanced band sharpening study. Photogrammetric Engineering and Remote Sensing, 66(1), 73–80.
  • Wang, H., & Suter, D. (2007). A consensus-based method for tracking: Modelling background scenario and foreground appearance. Pattern Recognition, 40(3), 1091–1105. doi:10.1016/j.patcog.2006.05.024
  • Wang, Q., Jia, Z., Qin, X., Yang, J., & Hu, Y. (2011). A new technique for multispectral and panchromatic image fusion. Procedia Engineering, 24, 182–186. doi:10.1016/j.proeng.2011.11.2623
  • Wang, Y.X., & Zhang, Y.J. (2013). Nonnegative matrix factorization: A comprehensive review. IEEE Transactions on Knowledge and Data Engineering, 25(6), 1336–1353. doi:10.1109/TKDE.2012.51
  • Wenyan, Z., Zhenhong, J., Yu, Y., Yang, J., & Kasabov, N. (2018). SAR image change detection based on equal weight image fusion and adaptive threshold in the NSST domain. European Journal of Remote Sensing, 51(1), 785–794. doi:10.1080/22797254.2018.1491804
  • Yang, X.-H., & Jiao, L.-C. (2008). Fusion algorithm for remote sensing images based on nonsubsampled contourlet transform. Acta Automatica Sinica, 34(3), 274–281. doi:10.3724/SP.J.1004.2008.00274
  • Xie, Q., Zhou, M., Zhao, Q., Meng, D., Zuo, W., & Xu, Z. (2019). Multispectral and hyperspectral image fusion by MS/HS fusion net. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1585–1594), China.
  • Xu, N., Price, B., Cohen, S., & Huang, T. (2017). Deep image matting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2970–2979), China.
  • Xu, Q., Zhang, Y., & Li, B. (2014). Recent advances in pansharpening and key problems in applications. International Journal of Image and Data Fusion, 5(3), 175–195. doi:10.1080/19479832.2014.889227
  • Xu, Q., Zhang, Y., Li, B., & Ding, L. (2015). Pansharpening using regression of classified MS and pan images to reduce color distortion. IEEE Geoscience and Remote Sensing Letters, 12(1), 28–32. doi:10.1109/LGRS.2014.2324817
  • Yadav, P., & Agrawal, S. (2018). Road network identification and extraction in satellite imagery using Otsu’s method and connected component analysis. International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences, XLII-5, 91–98. doi:10.5194/isprs-archives-XLII-5-91-2018
  • Yocky, D.A. (1996). Multiresolution wavelet decomposition image merger of Landsat Thematic Mapper and SPOT panchromatic data. Photogrammetric Engineering & Remote Sensing, 62(9), 1067–1074.
  • Zhang, Y., & Mishra, R.K. (2014). From UNB PanSharp to Fuze Go–The success behind the pan-sharpening algorithm. International Journal of Image and Data Fusion, 5(1), 39–53. doi:10.1080/19479832.2013.848475