
A comparative data-fusion analysis of multi-sensor satellite images

Pages 671-687 | Received 12 Nov 2011, Accepted 06 Nov 2012, Published online: 07 Dec 2012

Abstract

Remote-sensing sensors with different spectral, spatial and temporal resolutions play an important role in information extraction. Data fusion, which merges images of different spatial and spectral resolutions, is therefore a key step in that process. This research investigates quality-assessment methods for multisensor (synthetic aperture radar [SAR] and optical) data fusion. In the analysis, three SAR data-sets from different sensors (RADARSAT-1, ALOS-PALSAR and ENVISAT-ASAR) and optical data from SPOT-2 were used. Although the PALSAR and RADARSAT-1 images have the same resolution and polarisation, they are acquired at different frequencies (L and C bands, respectively). The ASAR sensor also carries C-band radar, but with a lower (25 m) resolution. Since frequency is a key factor for penetration depth, the use of different SAR data was expected to give interesting results. This study presents a comparative analysis of multisensor fusion methods, namely the intensity-hue-saturation, Ehlers and Brovey techniques, using different statistical analysis techniques, namely the bias of mean, correlation coefficient, standard deviation difference and universal image quality index. The results reveal that the Ehlers method is superior to the others in terms of spectral and statistical fidelity.

1. Introduction

Multisensor satellite data fusion is used to enhance quantitative analysis (i.e. to facilitate image interpretation) by exploiting the complementary information carried by multisensor data with different spectral and spatial characteristics (Cetin and Musaoglu 2009; Chitade and Katiyar 2012; Han, Li, and Gu 2008; Jalan and Sokhi 2012; Kumar, Mukhopadhyay, and Ramachandra 2009; Kumar et al. 2011; Li and Wang 2001; Tsai 2004; Yuhendra et al. 2012; Zhang 1999; Zhou, Civco, and Silander 1998). Today, there are many algorithms for image fusion, and the integration of multisensor data is essential for many applications.

In recent years, the launch of new synthetic aperture radar (SAR) satellites such as ALOS, TerraSAR-X, RADARSAT-2 and COSMO-SkyMed has opened a new era for remote-sensing applications. Previous studies have shown that the combination of optical and microwave data provides more accurate identification than the results obtained with individual sensors (Aschbacher and Lichtenegger 1990; Rahman, Sumantyo, and Sadek 2010). Because the radar response is primarily a function of the geometric properties and structure of the surface, whereas optical images record surface reflectance, fusing the two types of sensors increases the information content of the images. The following definition describes the data-fusion framework in remote sensing rather than the tools and algorithms used in fusion processes: ‘Data fusion is a formal framework in which the means and tools for the alliance of data originating from different sources are expressed. It aims at obtaining information of greater quality; the exact definition of “greater quality” will depend upon the application’ (Wald 1999). In general, image-fusion techniques can be classified into three levels: pixel (iconic) fusion, feature-level (symbolic) fusion and knowledge/decision-level fusion (Pohl and van Genderen 1998). Among them, pixel-based fusion has the best potential for preserving the original information of the input images in the merged output data.

Among the various image-fusion algorithms in the literature, intensity-hue-saturation (IHS) transformation is one of the most widely used methods for merging complementary multisensor data-sets. Although the IHS technique originally uses only three image bands in the fusion process, it has been applied in many contexts and was extended by the Ehlers method (Balik-Sanli, Kurucu, and Esetlili 2008; Ehlers 2005; Sunar and Musaoglu 1998). Other common techniques are the Brovey transformation, an arithmetic combination method (Binh et al. 2006), and principal-component analysis (PCA), which reduces the dimensionality of the original data from n (the number of input bands) to two or three principal components containing the majority of the information (Amarsaikhan and Douglas 2004; Pohl and van Genderen 1998). Descriptions and applications of various methods and tools can be found in Aiazzi, Baronti, Alparone et al. (2006), Aiazzi, Baronti, Selva et al. (2006), Bethune, Muller, and Donnay (1998), Jin, Ruliang, and Ruohong (2006), Liu (2000) and Shi et al. (2005).

Several studies compare fusion techniques such as IHS, PCA, discrete wavelet transformation (DWT) and Brovey to achieve the best spectral and spatial quality (Colditz et al. 2006; Shi et al. 2005; Zhang and Hong 2005; Zhou, Civco, and Silander 1998). Generally, image parameters such as mean value (i.e. average intensity of an image), standard deviation, entropy information and profile-intensity curves are used for assessing the quality of the fused images (Zhang 1999; Zhang et al. 2005). In addition, bias of mean (BM) and correlation-coefficient (CC) analyses are statistical tools used for measuring the distortion between the original image and the fused image in terms of spectral information. Other well-known measures of fusion quality are the universal image quality index (UIQI) (Wang 2002) and the relative dimensionless global error in synthesis (ERGAS, Erreur Relative Globale Adimensionnelle de Synthèse) (Wald 2002).

The contribution of SAR images to optical images has also been investigated in studies using IHS image fusion (Kurucu et al. 2009) and DWT fusion (Huang et al. 2005). In other studies, a comparative analysis of L-band JERS SAR and Landsat TM data was carried out with fusion techniques such as IHS, DWT, PCA and high-pass filter (HPF) fusion, and the mean, standard deviation, correlation coefficient and entropy were computed to assess the quality of the fused images (Rokhmatuloh et al. 2003; Shi et al. 2005). Pal, Majumdar, and Bhattacharya (2007) fused four bands of IRS-1C with ERS-2 using PCA techniques to enhance the detection of geological information.

In this study, a horizontal-horizontal (HH) polarised L-band PALSAR image, an HH polarised C-band RADARSAT-1 image, five vertical-vertical (VV) polarised ENVISAT-ASAR images and a SPOT-2 XS image were used for image fusion. Quality-assessment analyses were applied to the fusion results of each red-green-blue (RGB) image composition, and the results were compared both visually and statistically. Based on the literature review, BM, CC, standard deviation difference (SDD) and UIQI statistical analyses were performed for all fused images (Binh et al. 2006; Colditz et al. 2006; Rokhmatuloh et al. 2003; Shi et al. 2005; Wald 2002).

2. Study area and data used

In this research, the Menemen plain in the western part of the Gediz Basin in the Aegean region of Turkey was selected as the study area. Its central geographical coordinates are longitude 27° 04′ east and latitude 38° 36′ north. The Aegean Sea lies to the west of the study area, and Manisa Province lies to the north (Figure 1). The study area is about 400 km2 and includes both residential and agricultural areas, with a smooth micro-relief. In SAR images, surface roughness affects the geometry and backscatter values. To eliminate distortions caused by relief, an area with only micro-relief is preferred, as in many SAR applications. Hence, an area with a smooth micro-relief and an approximate slope of 1–2% (i.e. flat), as well as linear features such as field borders and channels, was chosen as the study area.

Figure 1. Map of study area.

In a large part of the study area, where the fields had been prepared for cotton and corn farming, the actual planting (seeding) started at the beginning of May 2006; before that, the study area was unplanted except for the winter crops, wheat and barley. Hence, the soil in the area was compacted to prevent moisture loss, and the surface roughness of the study area was homogeneous. Since the area had not received enough rain until the beginning of May, soil moisture levels varied according to the water-holding capacity of the soil. In the false-colour composite image derived from SPOT-2, red colours indicate land cover of wheat and barley, green colours indicate ploughed soil surfaces with different moisture contents, and reddish black colours indicate swampy areas with natural vegetation.

In the analysis, three SAR data-sets from different SAR sensors, namely RADARSAT-1, PALSAR and ASAR, were used. Table 1 summarises the main characteristics of the data in detail.

Table 1. Specifications of the data used.

3. Methodology

3.1. Pre-processing

As a first step, the SAR images were filtered to reduce the inherent speckle noise. For this purpose, various filters such as mean, median, Frost, Lee, and Gamma-Map were tested with different window sizes and over two passes on the data-sets. According to the visual interpretations, the best results were obtained by two passes using median and mean filters. The first filtering was applied using a 3×3 median filter, and the second iteration used a 5×5 mean filter.
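To illustrate the two-pass filtering described above, the following minimal Python sketch applies a 3×3 median filter followed by a 5×5 mean filter. It assumes the SAR scene is available as a 2-D NumPy array; the array and function names are illustrative only, not the exact software workflow used in the study.

    import numpy as np
    from scipy import ndimage

    def despeckle(sar: np.ndarray) -> np.ndarray:
        """Two-pass speckle suppression: 3x3 median filter, then 5x5 mean filter."""
        first_pass = ndimage.median_filter(sar, size=3)            # removes isolated speckle spikes
        second_pass = ndimage.uniform_filter(first_pass, size=5)   # smooths remaining noise
        return second_pass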

Since the image fusion of different data-sets will be conducted at the pixel level, spatial registration accuracies should be in the sub-pixel range to avoid the combination of unrelated data. Therefore, in fusion applications, geometric correction is very important for registration of the images. First, the SPOT XS image was rectified using cadastral maps at 1/5000 scale. However, the map includes only the cadastral details of the area such as field borders. The study area covers agricultural fields, and it was difficult to find good ground control points (GCPs) in the area covered by a 1/5000-scaled cadastral map sheet. Therefore, topographic maps at 1/25,000 scale were also used as an additional data source to collect GCPs such as road intersections, water channels, bridges, and so on from the study area and its surroundings. Second, the filtered SAR images were registered to the rectified SPOT image using an image-to-image registration method with a root mean square error of less than one pixel. Images were registered using first-order polynomial transformation, and they were re-sampled using bilinear interpolation.
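As a rough sketch of this registration step (again, an assumed implementation rather than the exact one used in the study), a first-order polynomial is equivalent to an affine transformation, which can be estimated from GCP pairs and applied with bilinear interpolation using scikit-image:

    import numpy as np
    from skimage import transform

    def register_to_reference(sar, gcp_src, gcp_dst, out_shape):
        """Register a SAR image to a rectified reference using GCPs.

        gcp_src / gcp_dst: (N, 2) arrays of (col, row) coordinates in the SAR
        image and in the rectified SPOT reference, respectively.
        """
        # A first-order polynomial corresponds to an affine transformation
        tform = transform.estimate_transform("affine", gcp_src, gcp_dst)
        # order=1 -> bilinear interpolation; warp needs the inverse mapping
        return transform.warp(sar, inverse_map=tform.inverse,
                              output_shape=out_shape, order=1, preserve_range=True)

    def gcp_rmse(tform, gcp_src, gcp_dst):
        """Root mean square error of the fit at the GCPs (should stay below one pixel)."""
        residuals = tform(gcp_src) - gcp_dst
        return np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))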

The RADARSAT and ALOS PALSAR images were re-sampled to 8 m, and the ENVISAT ASAR images were re-sampled to 25 m, using the nearest-neighbour algorithm in order to retain the multi-temporal information content of the radar data. The nominal resolution of a RADARSAT fine-beam image is 8 m (Toutin 2001). According to Toutin (2001), the pixel spacing of RADARSAT-1 fine-beam modes F1 and F5 is 6.25×6.25 m, whereas the ground resolution is 9.1×8.4 m and 7.8×8.4 m, respectively. In this context, RADARSAT-1 images are generally re-sampled to 8×8 m resolution during rectification (Gauthier et al. 2006). In this paper, the RADARSAT-1 images were rectified to the Turkish uniform coordinate system by re-sampling to an average resolution of 8 m. With regard to the ALOS PALSAR images, the ERSDAC PALSAR User Guide (ERSDAC 2006) states that the pixel spacing of a PALSAR image is 6.25×6.25 m and the spatial resolution is approximately 9.5 m in range × 10 m in azimuth, with a highest ground resolution of 7 m. Thus, the PALSAR images were also rectified to the Turkish uniform coordinate system by re-sampling to an average resolution of 8 m. In the original (i.e. raw) ENVISAT ASAR images, the pixel size is 12.5 m and the geometric resolution is 22–37 m, depending on the image swath type and incidence angle; such images are typically re-sampled to around 25 m (Beran 1994; Meadows and Wright 2002). Therefore, the ENVISAT images used in the study were rectified to the Turkish uniform coordinate system (i.e. UTM ED50, northing and easting in metres) with 25-m pixel spacing. The average of the five ENVISAT ASAR images, which gives more detailed information, especially for linear features, than a single image, was calculated, and the output image was also considered in the analysis (Engdahl and Hyyppa 2000; Gunzl and Selige 1998; Wang et al. 2001).
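A simple way to express the nearest-neighbour re-sampling used for the radar data is a zoom by the ratio of the current and target pixel sizes. The sketch below assumes square pixels and is illustrative rather than the exact rectification chain used in the study.

    from scipy import ndimage

    def resample_nearest(img, current_px_m: float, target_px_m: float):
        """Nearest-neighbour re-sampling to a new pixel spacing (both in metres)."""
        zoom = current_px_m / target_px_m             # e.g. 12.5 m -> 25 m gives zoom = 0.5
        return ndimage.zoom(img, zoom=zoom, order=0)  # order=0 keeps the original pixel values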

3.2. Image-fusion methods used

In this study, the three most widely used data-fusion methods, namely IHS, Ehlers and Brovey, were applied, and their results were examined. Since the late 1980s, various algorithms have been developed for fusing or pansharpening panchromatic and multispectral images to produce an enhanced multispectral image of high spatial resolution (e.g. Cakir and Khorram 2003; Chavez, Sides, and Anderson 1991; Chen, Hepner, and Forster 2003; Cliché, Bonn, and Teillet 1985; Ehlers 1991; Li, Kwok, and Wang 2002; Price 1987; Shettigara 1992; Welch and Ehlers 1987; Yesou, Besnus, and Rolet 1993; Zhang 1999; Zhang 2002; Zhang and Hong 2005; Zhou, Civco, and Silander 1998). According to the comprehensive review of multi-sensor image fusion in remote sensing by Pohl and van Genderen (1998), existing image-fusion techniques fall into two classes: ‘colour-related techniques’ such as IHS and hue-saturation-value fusion methods, and ‘statistical/numerical methods’ such as PCA, HPF, the Brovey transform, regression variable substitution and wavelet methods (Ling et al. 2008). Following this classification, we chose methods from both classes: the IHS and Ehlers methods as colour-related techniques and the Brovey transform as a statistical/numerical method. These methods are also available in commonly used image-processing software because they are accepted as successful algorithms in the literature.

The IHS method separates the intensity, hue and saturation components of a three-band multispectral image by converting from RGB colour space to IHS colour space. The intensity component is then replaced with the high-resolution image, and the result is transformed back to the RGB domain using an inverse IHS transform (Pohl and van Genderen 1998).
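The substitution step can be sketched compactly using the common additive (‘fast IHS’) formulation, in which replacing the intensity I = (R + G + B)/3 by the histogram-matched high-resolution band and inverting the transform amounts to adding the same difference to each band. The function below is an illustrative sketch, not the exact implementation used in the study.

    import numpy as np

    def ihs_fuse(rgb: np.ndarray, pan: np.ndarray) -> np.ndarray:
        """rgb: (rows, cols, 3) float array; pan: co-registered high-resolution band."""
        intensity = rgb.mean(axis=2)                       # I = (R + G + B) / 3
        # Match the high-resolution band to the intensity's mean and spread
        pan_matched = (pan - pan.mean()) / pan.std() * intensity.std() + intensity.mean()
        # Substituting I by pan_matched and inverting IHS adds the same offset to each band
        return rgb + (pan_matched - intensity)[..., None]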

The Brovey transform uses a ratio algorithm for combining the images. Unlike the IHS method, it can use all spectral bands as input. The algorithm divides each image band by the sum of the chosen bands and then multiplies the result by the high-resolution image (Ranchin and Wald 2000).
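A minimal sketch of this ratio computation, assuming the input bands are stacked in a float array and the high-resolution image is co-registered to the same grid:

    import numpy as np

    def brovey_fuse(bands: np.ndarray, pan: np.ndarray, eps: float = 1e-6) -> np.ndarray:
        """bands: (rows, cols, n_bands) float array; pan: (rows, cols) float array."""
        band_sum = bands.sum(axis=2, keepdims=True) + eps   # eps avoids division by zero
        return bands / band_sum * pan[..., None]            # per-band ratio, multiplied by pan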

Ehlers fusion combines an IHS transformation with Fourier-domain filtering. First, the selected bands are transformed from RGB to IHS; the intensity component and the high-resolution image are then transformed into the frequency domain by a fast Fourier transform (FFT). A low-pass filter is applied to the intensity component and a high-pass filter to the high-resolution pan image. After an inverse FFT is applied to these images, the high-pass-filtered high-resolution image and the low-pass-filtered intensity image are added to form an enhanced intensity component. An inverse IHS transform then produces the fused RGB output image (Ehlers 2005). This process can be repeated sequentially until all bands are exhausted (Ehlers et al. 2010).
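The Fourier-domain step can be sketched as follows. The Gaussian cut-off used here is an illustrative choice rather than the filter design of the original Ehlers algorithm, and the returned array would replace the intensity component before the inverse IHS transform.

    import numpy as np

    def gaussian_lowpass(shape, sigma=0.05):
        """Low-pass weights on the FFT frequency grid (sigma in cycles per pixel)."""
        rows, cols = shape
        fy = np.fft.fftfreq(rows)[:, None]
        fx = np.fft.fftfreq(cols)[None, :]
        return np.exp(-(fx ** 2 + fy ** 2) / (2 * sigma ** 2))

    def ehlers_style_intensity(intensity: np.ndarray, pan: np.ndarray, sigma=0.05):
        """Low-pass the intensity, high-pass the pan image, and add the two."""
        lp = gaussian_lowpass(intensity.shape, sigma)
        intensity_lp = np.fft.ifft2(np.fft.fft2(intensity) * lp).real        # low-pass part
        pan_hp = np.fft.ifft2(np.fft.fft2(pan) * (1.0 - lp)).real            # high-pass part
        return intensity_lp + pan_hp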

3.3. Quality assessment

For quantitative and qualitative analyses of the methods used, various composite images were visually evaluated to analyse the content of structural information. The following five false-colour composites (as R/G/B) were selected for the analysis (Figures 2 and 3):

  1. Image I: ENVISAT 08/06(R)/RADARSAT 28/05(G)/ENVISAT 11/06(B)

  2. Image II: ENVISAT 08/06(R)/RADARSAT 28/05(G)/ENVISAT average(B)

  3. Image III: ENVISAT 08/06(R)/RADARSAT 28/05(G)/SPOT NIR(B)

  4. Image IV: ENVISAT 08/06(R)/ENVISAT 11/06(G)/SPOT NIR(B)

  5. Image V: ENVISAT 08/06(R)/ENVISAT average(G)/SPOT NIR(B)

Figure 2. Fusion results for selected images (images I, II and III).
Figure 3. Fusion results for selected images (images IV and V).

Each composite image was fused with the PALSAR data using the three different techniques, and as a result fifteen new images were generated (Figures 2 and 3). In general, we used C-band SAR data for the RGB composites and L-band SAR data as the high-resolution substitute. For the different RGB SAR combinations, we tried as far as possible to use radar data with different polarisations acquired on close dates. However, to include all of the information available in the ENVISAT data, which were gathered over a slightly longer period, we also used their average in two composites. Regarding the average of the ENVISAT ASAR images (i.e. the average of the five ENVISAT images), previous studies have demonstrated that this simple technique, known as temporal averaging, reduces speckle without losing spatial resolution (Engdahl and Hyyppa 2000; Gunzl and Selige 1998; Wang et al. 2001). As a result, fields become more homogeneous and their boundaries are better defined.
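Once the scenes are co-registered, the temporal averaging is straightforward. The sketch below assumes the five ENVISAT scenes are stacked along a leading time axis; the names are illustrative.

    import numpy as np

    def temporal_average(stack: np.ndarray) -> np.ndarray:
        """stack: (n_dates, rows, cols) array of co-registered SAR scenes."""
        return stack.mean(axis=0)   # per-pixel mean reduces speckle while keeping resolution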

Figure 4. Different information contents in (a) PALSAR image and (b) SPOT-XS (NIR) band.

In addition, we used two different approaches, namely ‘SAR-optical’ and ‘SAR-SAR’ data fusion. The SPOT data were used in two combinations; for the other three combinations, radar data were preferred. Since the study area is dominated by agricultural land cover/use (i.e. vegetation), only the near-infrared band of the SPOT data was used, because it is the best spectral region in which to discriminate among different vegetated fields. The other SPOT bands were therefore not taken into consideration.

In this study, the main idea was to produce new hybrid images for extracting information with the help of various sensors having different spectral, spatial and temporal resolutions. The main focus was not on extracting a particular object or phenomenon from the output data; rather, it was on determining whether using different data-sets would improve the quality of the newly produced images. As seen in Figures 2 and 3, the different information contents available in the two data-sets were the main reason for using them in the data fusion.

3.3.1. Visual comparison

In order to evaluate the spectral quality of the fused images, visual interpretation was carried out by comparing the fused output images with the five selected original RGB composites. The increase in spatial resolution and its effects on visual interpretation were analysed qualitatively by comparing specific features such as field borders, roads and buildings with the original data-set.

3.3.2. Statistical evaluation

For quantitative analysis, image-fusion quality was evaluated using the following four statistical tools: BM, CC, SDD and UIQI.

3.3.2.1. Bias of mean

BM is the difference between the means of the original colour image and the fused output image (Bethune, Muller, and Donnay Citation1998). The value is normalised with respect to the mean value of the original image. The ideal value is zero.

3.3.2.2. Correlation coefficient

This measures the correlation between the original and the fused output images. The higher the correlation between the fused and original images, the better the estimation of the spectral values. The ideal value of the CC is one.

3.3.2.3. Standard deviation difference

SDD is the difference between the standard deviations of the original colour image and the fused output image. Standard deviation reflects the average amount of the variation from the ‘average’ (mean) of the image. A low standard deviation indicates that the image pixels tend to be very close to the mean.

3.3.2.4. Universal-image-quality index

The UIQI models any image distortion as a combination of three factors: loss of correlation, luminance distortion and contrast distortion.
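The four band-wise measures can be computed directly from an original band x and the corresponding fused band y. The sketch below follows the definitions given above, with the UIQI evaluated globally for brevity (in practice it is often computed over sliding windows and averaged); it is an illustrative implementation, not the exact code used in the study.

    import numpy as np

    def bias_of_mean(x, y):
        """Difference of means, normalised by the mean of the original (ideal: 0)."""
        return (x.mean() - y.mean()) / x.mean()

    def correlation_coefficient(x, y):
        """Pearson correlation between original and fused bands (ideal: 1)."""
        return np.corrcoef(x.ravel(), y.ravel())[0, 1]

    def std_dev_difference(x, y):
        """Difference between the standard deviations (ideal: 0)."""
        return x.std() - y.std()

    def uiqi(x, y):
        """Universal image quality index (Wang 2002); ideal value is 1."""
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = np.mean((x - mx) * (y - my))
        return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))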

Image-fusion techniques aim to inject spatial detail into the multispectral imagery while preserving the original spectral values. The standard image-fusion methods in use today are often successful, but most of them distort the colour (spectral) information in the process. Therefore, the comparison between the methods was carried out to evaluate whether the spectral information in the input image was preserved in the output image. Within the fusion literature, many studies discuss why it is important to preserve sensor spectral consistency in image-fusion methods, e.g. Ehlers (2004, 2006), Ehlers et al. (2010), Klonus (2008) and Klonus and Ehlers (2007).

4. Application results

The spectral quality of the fused images was evaluated by visual interpretation, i.e. by comparison with the original colour composite images and spectral profile plots. For that purpose, the profile shown in Figure 5 was taken into consideration, and changes in spectral values were evaluated by comparing the resulting profiles with those of the originals (Figure 6).

Figure 5. Spectral profile for analysing spectral similarities/changes after the fusion process.
Figure 6. Spectral profile analysis of the fused output images.

As can be seen in Figure 6, there was a high correlation between the output grey values of the Ehlers method and the original RGB images. In general, the CC and UIQI values obtained from the Ehlers method were quite high for image bands 2 and 3, and the same holds for the BM and SDD statistics: the BM and the differences in standard deviation were lower for image bands 2 and 3, whereas they were higher for image band 1.

All fused output images were compared statistically with the original composite images, and the results are presented in Table 2. Among the three methods, the Ehlers fusion yielded the best statistical results: the BM was the smallest, the CC was the highest, the SDD was the smallest and the UIQI values were the largest. The other two methods, IHS and Brovey fusion, performed worse, with similar statistical results to each other. For band 1 of all fused images, all three fusion methods showed poor performance, especially for the CC. In general, the poorest results came from the Brovey transform (i.e. higher biases of mean and SDDs), and the CC and UIQI values for IHS and Brovey, which were nearly identical to each other, were lower than those obtained from the Ehlers method.

Table 2. Statistical evaluation results (each band of RGB images compared with the corresponding bands of fused images).

5. Conclusions

With the advances in satellite technology, there are now various kinds of image data-sets having different capabilities, including multi-sensor, multi-temporal, multi-resolution, and multi-frequency data, which are gathered from operational Earth-observation satellites and are widely used in many applications. Today, the fusion of image data has become a valuable tool in remote-sensing image evaluation to integrate the best characteristics of each type of sensor. This research investigates the quality assessment of multi-sensor (RADARSAT-1, ALOS-PALSAR, ENVISAT-ASAR and optical SPOT-2) image-fusion output, as well as the techniques used, namely IHS, Ehlers and Brovey transforms. Statistical analysis techniques used to investigate the fusion quality are BM, CC, SDD and UIQI.

The spectral quality of the fused images is evaluated by visual interpretation, i.e. comparison with the original colour composite images and the spectral profile plots. The comparison of the original colour composites with the outputs generated by the IHS, Ehlers and Brovey methods indicated that the IHS and Brovey fusion methods, although they distorted the colours, yielded much sharper images (i.e. a better appearance at the same resolution) than the original images. On the other hand, as can be seen from the output images, the Ehlers method generally retained spectral consistency better than the other methods, even for data with different spectral-response characteristics (e.g. band centre, bandwidth). Therefore, this study indicates that the Ehlers method is more suitable for studies in which the spectral information content is most important (i.e. thematic classification). Among the three algorithms applied, the Ehlers method was also judged the best for radar/optical fusion, with IHS in second place. However, it was also observed that averaging the radar data-set leads to additional sharpening in the Ehlers method, as can be seen in Figure 3, image V.

Statistically, only four common evaluation criteria (BM, CC, SDD and UIQI) were applied in this study, and overall the Ehlers fusion method proved best among the three algorithms in terms of spectral and statistical fidelity with respect to the original RGB images. Although these four criteria cover most image-evaluation aspects, our future work will involve adopting more indices to assess image-fusion results, with an emphasis on the effects of image fusion on classification accuracy.

References

  • Aiazzi, B., S. Baronti, L. Alparone, A. Garzelli, and F. Nencini. 2006. “Information-Theoretic Assessment of Fusion of Multispectral and Panchromatic Images.” In Proceedings of IEEE 9th International Conference on Information Fusion (Fusion 2006), 10–13 July 2006, Florence, Italy, 1–5. doi:10.1109/ICIF.2006.301778.
  • Aiazzi, B., S. Baronti, M. Selva, and L. Alparone. 2006. “Enhanced Gram-Schmidt Spectral Sharpening Based on Multivariate Regression of MS and Pan Data.” In Proceedings of IEEE International Conference on Geoscience and Remote Sensing Symposium (IGARSS 2006), 31 July–4 August 2006, Denver, CO, 3806–3809. doi:10.1109/IGARSS.2006.975.
  • Amarsaikhan, D., and T. Douglas. 2004. “Data Fusion and Multisource Image Classification.” International Journal of Remote Sensing 25: 3529–3539. 10.1080/0143116031000115111
  • Aschbacher, J., and J. Lichtenegger. 1990. “Complementary Nature of SAR and Optical Data: A Case Study in the Tropics.” Earth Observation Quarterly 31: 4–8.
  • Balik-Sanli, F., Y. Kurucu, and M. T. Esetlili. 2008. “Determining Land Use Changes by Radar-Optic Fused Images and Monitoring its Environmental Impacts in Edremit Region of Western Turkey.” Environmental Monitoring and Assessment 151: 45–58. 10.1007/s10661-008-0248-z
  • Beran, J. 1994. Statistics for Long-Memory Processes. London: Chapman & Hall.
  • Bethune, S., F. Muller, and J. P. Donnay. 1998. “Fusion of Multispectral and Panchromatic Images by Local Mean and Variance Matching Filtering Techniques.” In Proceedings of Fusion of the Second International Conference on Earth Data: Merging Point Measurement, Raster Maps and Remotely Sensed Image (EARSeL Ecole des Mines de Paris – SEE), 28–30 January 1998, Sophia Antipolis, France, 31–36.
  • Binh, D. T., W. Christine, S. Aziz, B. Dominique, and P. Vancu. 2006. “Data Fusion and Texture-Direction Analyses for Urban Studies in Vietnam.” In Proceedings of EARSeL 1st Workshop on Urban Remote Sensing (SIG–URS 2006), 2–3 March 2006, Berlin, 1–7.
  • Cakir, H. I., and S. Khorram. 2003. “Fusion of High Spatial Resolution Imagery with High Spectral Resolution Imagery Using Multiresolution Approach.” In ASPRS Annual Conference Proceedings, May 2003, Anchorage, AK, CD-ROM.
  • Cetin, M., and N. Musaoglu. 2009. “Merging Hyper Spectral and Panchromatic Image Data: Qualitative and Quantitative Analysis.” International Journal of Remote Sensing 30(7), 1779–1804. 10.1080/01431160802639525
  • Chavez, P. S., S. C. Sides, and J. A. Anderson. 1991. “Comparison of Three Different Methods to Merge Multiresolution and Multispectral Data: TM & SPOT Pan.” Photogrammetric Engineering and Remote Sensing 57: 295–303.
  • Chen, C. M., G. F. Hepner, and R. R. Forster. 2003. “Fusion of Hyperspectral and Radar Data Using the IHS Transformation to Enhance Urban Surface Features.” ISPRS Journal of Photogrammetry and Remote Sensing 58: 19–30. 10.1016/S0924-2716(03)00014-5
  • Chitade, A. Z., and S. K. Katiyar. 2012. “Multiresolution and Multispectral Data Fusion using Discrete Wavelet Transform with IRS Images: Cartosat–1, IRS LISS III and LISS IV.” Journal of the Indian Society of Remote Sensing 40(1), 121–128. 10.1007/s12524-011-0140-0
  • Cliché, G., F. Bonn, and P. Teillet. 1985. “Integration of the SPOT Pan Channel into its Multispectral Mode for Image Sharpness Enhancement.” Photogrammetric Engineering and Remote Sensing 51: 311–316.
  • Colditz, R. R., T. Wehrmann, M. Bachmann, K. Steinnocher, M. Schmidt, G. Strunz, and S. Dech 2006. “Influence of Image Fusion Approaches on Classification Accuracy: A Case Study.” International Journal of Remote Sensing 27: 3311–3335. 10.1080/01431160600649254
  • Ehlers, M. 1991. “Multisensor Image Fusion Techniques in Remote Sensing.” ISPRS Journal of Photogrammetry and Remote Sensing 46: 19–30. 10.1016/0924-2716(91)90003-E
  • Ehlers, M. 2004. “Spectral Characteristics Preserving Image Fusion Based on Fourier Domain Filtering.” In Proceedings of SPIE, Conference on Remote Sensing for Environmental Monitoring, GIS Applications, and Geology, IV. Remote Sensing Europe 2004, 13 September 2004, Maspalomas, Gran Canaria, Spain, 5574, 1–13. doi:10.1117/12.565160.
  • Ehlers, M. 2005. “Urban Remote Sensing: New Developments and Trends.” In Proceedings of Joint Symposium URBAN–URS 2005 (URS 2005), 14–16 March 2005, Tempe, AZ, 1–6.
  • Ehlers, M. 2006. “New Developments and Trends for Urban Remote Sensing.” In Urban Remote Sensing, edited by Q. Weng and D. A. Quattrochi, 357–376. Boca Raton, FL: CRC Press.
  • Ehlers, M., S. Klonus, P. J. Astrand, and P. Rosso. 2010. “Multi-Sensor Image Fusion for Pansharpening in Remote Sensing.” International Journal for Image and Data Fusion 1: 25–45. 10.1080/19479830903561985
  • Engdahl, M., and J. Hyyppa. 2000. “Temporal Averaging of Multitemporal ERS–1/2 Tandem INSAR Data.” In Proceedings of IEEE 2000 International Geoscience and Remote Sensing Symposium (IGARSS 2000), 24–28 July 2000, Honolulu, HI, 5, 2224–2226. doi:10.1109/IGARSS.2000.858363.
  • ERSDAC. 2006. PALSAR User Guide. 3rd ed. Earth Sensing Data Analysis Center (ERSDAC). Accessed November 8. http://www.palsar.ersdac.or.jp/e/guide/pdf/Ref_Guide_en.pdf
  • Gauthier, Y., V. Weber, S. Savary, M. Jasek, L. M. Paquet, and M. Bernier. 2006. “A Combined Classification Scheme to Characterize River Ice from SAR Data.” Earsel Proceedings 5 (1). http://www.eproceedings.org/
  • Gunzl, M. H., and T. Selige. 1998. “Field Boundary Detection Using Multi-Temporal SAR.” In Proceedings of IEEE 1998 Geoscience and Remote Sensing Symposium (IGARSS '98), 6–10 July 1998, Seattle, WA, 1, 339–341. doi:10.1109/IGARSS.1998.702898.
  • Han, S. S., H. T. Li, and H. Y. Gu. 2008. “The Study on Image Fusion for High Spatial Resolution Remote Sensing Images.” In Proceedings of XXI ISPRS Congress, Commission VII, the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (ISPRS 2008), 3–11 July 2008, Beijing, China, XXXVII (B7), 1159–1164.
  • Huang, Y., J. Liao, H. Guo, and X. Zhong. 2005. “The Fusion of Multispectral and SAR Images Based Wavelet Transformation over Urban Area.” In Proceedings of IEEE 2005 Geoscience and Remote Sensing Symposium (IGARSS'05), 25–29 July 2005, Seoul, Korea, 6, 3942–3944. doi:10.1109/IGARSS.2005.1525774.
  • Jalan, S., and B. S. Sokhi. 2012. “Comparison of Different Pan-Sharpening Methods for Spectral Characteristic Preservation: Multi-Temporal CARTOSAT-1 and IRS-P6 LISS-IV Imagery.” International Journal of Remote Sensing 33(18), 5629–5643. 10.1080/01431161.2012.666811
  • Jin, Y., Y. Ruliang, and H. Ruohong. 2006. “Pixel Level Fusion for Multiple SAR Images Using PCA and Wavelet Transform.” In Proceedings of radar 2006, CIE international conference (CIE'06), 16–19 October 2006, Shanghai, China, 1–4. doi:10.1109/ICR.2006.343209.
  • Klonus, S. 2008. “Comparison of Pansharpening Algorithms for Combining Radar and Multispectral Data.” In Proceedings of XXI ISPRS Congress, the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (ISPRS 2008), 3–11 July 2008, Beijing, China, XXXVII (B6b), 189–194.
  • Klonus, S., and M. Ehlers. 2007. “Image Fusion using the Ehlers Spectral Characteristics Preserving Algorithm.” GIScience and Remote Sensing 44(2), 93–116. 10.2747/1548-1603.44.2.93
  • Kumar, U., A. Dasgupta, C. Mukhopadhyay, N. V. Joshi, and T. V. Ramachandra. 2011. “Comparison of 10 Multi-Sensor Image Fusion Paradigms for IKONOS Images.” International Journal of Research and Reviews in Computer Science 2(1), 40–47.
  • Kumar, U., C. Mukhopadhyay, and T. V. Ramachandra. 2009. “Pixel Based Fusion Using IKONOS Imagery.” International Journal of Recent Trends in Engineering 1(1), 173–175.
  • Kurucu, Y., F. Balik-Sanli, M. T. Esetlili, M. Bolca, and C. Goksel. 2009. “Contribution of SAR Images to Determination of Surface Moisture on the Menemen Plain, Turkey.” International Journal of Remote Sensing 30: 1805–1817. 10.1080/01431160802639764
  • Li, S., J. T. Kwok, and Y. Wang. 2002. “Using the Discrete Wavelet Transform to Merge Landsat TM and SPOT Panchromatic Images.” Information Fusion 3: 17–23. 10.1016/S1566-2535(01)00037-9
  • Li, S., and Y. Wang. 2001. “Discrete Multiwavelet Transform Method to Fusing Landsat-7 Panchromatic Image and Multi-Spectral Images.” In Proceedings of IEEE 2001 Geoscience and Remote Sensing Symposium (IGARSS ‘01), 9–13 July 2001, Sydney, NSW, 4, 1962–1964. doi:10.1109/IGARSS.2001.977130.
  • Ling, Y., M. Ehlers, E. L. Usery, and M. Madden. 2008. “Effects of Spatial Resolution Ratio in Image Fusion.” International Journal of Remote Sensing 29(7), 2157–2167. 10.1080/01431160701408345
  • Liu, J. G. 2000. “Smoothing Filter-Based Intensity Modulation: A Spectral Preserve Image Fusion Technique for Improving Spatial Details.” International Journal of Remote Sensing 21: 3461–3472. 10.1080/014311600750037499
  • Meadows, P., and P. Wright. 2002. “ASAR APP and APM Image Quality.” In Proceedings of ENVISAT validation workshop (ESRIN), 9–13 December 2002, Frascati, Italy, SP-531.
  • Pal, S. K., T. J. Majumdar, and A. K. Bhattacharya. 2007. “ERS-2 SAR and IRS-1C LISS III Data Fusion: A PCA Approach to Improve Remote Sensing Based Geological Interpretation.” ISPRS Journal of Photogrammetry and Remote Sensing 61: 281–297. 10.1016/j.isprsjprs.2006.10.001
  • Pohl, C., and J. L. van Genderen. 1998. “Multisensor Image Fusion in Remote Sensing: Concepts, Methods, and Applications.” International Journal of Remote Sensing 19: 823–854. 10.1080/014311698215748
  • Price, J. C. 1987. “Combining Panchromatic and Multispectral Imagery from Dual Resolution Satellite Instruments.” Remote Sensing of Environment 21: 119–128. 10.1016/0034-4257(87)90049-6
  • Rahman, M. M., J. T. S. Sumantyo, and M. F. Sadek. 2010. “Microwave and Optical Image Fusion for Surface and Sub-Surface Feature, Mapping in Eastern Sahara.” International Journal of Remote Sensing 31(20), 5465–5480. 10.1080/01431160903302999
  • Ranchin, T., and L. Wald. 2000. “Comparison of Different Algorithms for the Improvement of the Spatial Resolution of Images.” In Proceeding of the Third Conference Fusion of Earth Data: Merging Point Measurements, Raster Maps and Remotely Sensed Images (Fusion of Earth Data), 26–28 January 2000, Sophia Antipolis, France, hal-00395050 (version 1), 33–41.
  • Rokhmatuloh-Tateishi, R., K. Wikantika, K. Munadi, and M. Aslam. 2003. “Study on the Spectral Quality Preservation Derived from Multisensor Image Fusion Techniques Between JERS–1 SAR and Landsat TM Data.” In Proceedings of IEEE 2003 Geoscience and Remote Sensing Symposium (IGARSS'03), 21–25 July 2003, Toulouse, France, 6, 3656–3658. doi:10.1109/IGARSS.2003.1295228.
  • Shettigara, V. K. 1992. “A Generalized Component Substitution Technique for Spatial Enhancement of Multispectral Images Using a Higher Resolution Data Set.” Photogrammetric Engineering and Remote Sensing 58: 561–567.
  • Shi, W., C. O. Zhu, Y. Tian, and J. Nichol. 2005. “Wavelet-Based Image Fusion and Quality Assessment.” International Journal of Applied Earth Observation and Geoinformation 6: 241–251. 10.1016/j.jag.2004.10.010
  • Sunar, F., and N. Musaoglu. 1998. “Merging Multiresolution Spot P and Landsat TM Data: The Effects and Advantages.” International Journal of Remote Sensing 19: 219–225. 10.1080/014311698216206
  • Toutin, T. 2001. “Potential of Road Stereo Mapping with RADARSAT Images.” Photogrammetric Engineering & Remote Sensing 67(9), 1077–1084.
  • Tsai, V. J. D. 2004. “Evaluation of Multiresolution Image Fusion Algorithms.” In Proceedings of IEEE 2004 Geoscience and Remote Sensing Symposium (IGARSS ‘04), 20–24 September 2004, Anchorage, AK. doi:10.1109/IGARSS.2004.1369104.
  • Wald, L. 1999. “Definitions and Terms of References in Data Fusion.” In Proceedings of Joint EARSeL/ISPRS Workshop, International Archives of Photogrammetry and Remote Sensing, 3–4 June 1999, Valladolid, Spain, 32 (part 7-4-3 W6), 2–6.
  • Wald, L. 2002. Data Fusion: Definitions and Architectures–Fusion of Images of Different Spatial Resolutions. Grou Radenez, Paris: Les Presses de l'Ecole des Mines. ISBN:2-911762-38-X
  • Wang, C. T., K. S. Chen, C. T. Chen, and J. M. Kuo. 2001. “A Study of Target Identification from Multi-Temporal SAR Images.” In Proceedings of 22nd Asian Conference on Remote Sensing (ACRS 2001), 5–9 November 2001, Singapore, 2, 1021–1025.
  • Wang, Z. 2002. “A Universal Image Quality Index.” IEEE Signal Processing Letters 9: 1–4. 10.1109/97.988714
  • Welch, R., and M. Ehlers. 1987. “Merging Multiresolution SPOT HRV and Landsat TM Data.” Photogrammetric Engineering and Remote Sensing 53: 301–303.
  • Yesou, H., Y. Besnus, and J. Rolet. 1993. “Extraction of Spectral Information from Landsat TM Data and Merger with SPOT Panchromatic Imagery – A Contribution to the Study of Geological Structures.” ISPRS Journal of Photogrammetry and Remote Sensing 48: 23–36. 10.1016/0924-2716(93)90069-Y
  • Yuhendra, S., I. Alimuddin, J. T. S. Sumantyo, and H. Kuze. 2012. “Assessment of Pan-Sharpening Methods Applied to Image Fusion of Remotely Sensed Multi-Band Data.” International Journal of Applied Earth Observation and Geoinformation 18: 165–175. 10.1016/j.jag.2012.01.013
  • Zhang, S., P. Wang, X. Chen, and X. Zhang. 2005. “A New Method for Multi-Source Remote Sensing Image Fusion.” In Proceedings of IEEE 2005 Geoscience and Remote Sensing Symposium (IGARSS ‘05) 6, 3948–3951. doi:10.1109/IGARSS.2005.1525776.
  • Zhang, Y. 1999. “A New Merging Method and its Spectral and Spatial Effects.” International Journal of Remote Sensing 20: 2003–2014. 10.1080/014311699212317
  • Zhang, Y. 2002. “Automatic Image Fusion: A New Sharpening Technique for IKONOS Multispectral Images.” GIM International 16: 54–57.
  • Zhang, Y., and G. Hong. 2005. “An IHS and Wavelet Integrated Approach to Improve Pan-sharpening Visual Quality of Natural Colour IKONOS and Quickbird Images.” Information Fusion 6: 225–234. 10.1016/j.inffus.2004.06.009
  • Zhou, J., D. L. Civco, and J. A. Silander. 1998. “A Wavelet Transform Method to Merge Landsat TM and SPOT Panchromatic Data.” International Journal of Remote Sensing 19: 743–757. 10.1080/014311698215973
