
Multivariate statistical analysis of measures for assessing the quality of image fusion

Pages 47-66 | Received 31 Aug 2009, Accepted 19 Oct 2009, Published online: 17 Feb 2010

Abstract

Various measures are available for assessing image fusion quality. Some come from traditional image quality assessment and others are specially designed for image fusion evaluation. A survey of the literature shows that a total of 27 measures are in common use. Some of them are likely to be more reliable than others for certain applications, and some may be highly correlated with each other. Therefore, a thorough mathematical analysis of these measures is desirable to determine which measures should be adopted for a given application. This article describes a multivariate statistical analysis of these measures to reduce redundancy and find comparatively independent measures for assessing the quality of fused images. First, correlation coefficients are calculated for the 27 measures; then, factor analysis using principal components is performed on the correlation matrix; and finally, hierarchical clustering is carried out on the factors to obtain finer clusters and to find representative measures. Experiments are carried out on 144 fused images. Based on the results, the 27 measures are classified into five categories: difference-based, noise-based, similarity-based, information-clarity-based and overall-based. Further, the most representative measure is selected from each category as a recommendation.

1. Introduction

Image fusion is a process for combining two or more different images to form a new image by using certain algorithms. In remote sensing image fusion, the aim is to integrate spatial information (such as edges and textures) from a panchromatic (PAN) image (with higher spatial resolution) and spectral information (such as colours) from a multispectral (MS) image (with higher spectral resolution) to obtain an MS image with higher spatial resolution.

In the past two decades, many image fusion methods have been developed at three different levels, i.e. pixel, feature and decision level (Pohl and Van Genderen 1998). This leads to an increasing need to evaluate the performance of image fusion algorithms (Angell 2005). Quality assessment is one such evaluation. Two types of quality assessment are possible: direct and indirect. The former assesses the quality of the fused images and the latter assesses the quality of the products extracted from the fused images. This article deals with the issues in the direct approach.

To assess the quality of fused images, quality measures are required. The measures can be either qualitative (e.g. excellent, good, bad) or quantitative (see Section 2 for a review). Qualitative measures are normally used for the results of visual inspection, owing to the limitations of the human eye, although quantitative results can also be converted into qualitative terms. In mathematical modelling, however, quantitative measures are more desirable. As can be seen from the review in Section 2, a total of 27 quantitative measures are in common use. Naturally, one would ask 'should all 27 measures be used every time the quality of fused images is assessed?' or 'which measures should be adopted in a given application?'. To answer these questions, a thorough mathematical evaluation is required. In this project, a multivariate statistical analysis is employed to reduce the redundancy among these measures, i.e. to find comparatively independent measures for assessing the quality of fused images.

Section 2 reviews existing measures for image fusion quality evaluation. Section 3 presents the strategies and methodology for multivariate statistical analysis of these measures. Section 4 reports experiments and results. Conclusions are made in Section 5.

2. Measures for evaluating image fusion quality

There are three approaches for assessing the quality of fused images, classified according to the extent to which reference images are used (Wang and Bovik 2006): no reference (NR), full reference (FR) and reduced reference (RR).

In NR quality assessment, as the name implies, no reference image is used; a number of statistical parameters are computed from the test image alone. In FR quality assessment, a reference image is used as a benchmark, and a number of parameters can be computed from the comparison with that benchmark. In RR quality assessment, certain features are extracted from a reference image to assist the evaluation of the quality of the test image.

The FR and RR quality assessment methods, as the names imply, employ reference images as benchmarks. In practice, however, no MS images at the same spatial resolution as the PAN images are available for use as benchmarks (reference images). Moreover, for a meaningful comparison, the images being compared must have identical resolutions. To meet this requirement, two solutions are available. The first is to degrade the fused image to the same spatial resolution as that of the original MS image. The other is to degrade the original PAN image to the resolution of the MS images, degrade the MS images to a lower spatial resolution (normally by the ratio between the MS and PAN resolutions), and then fuse the degraded images to obtain a new image (Wald et al. 1997). In this case, the original MS image can be used as the reference image.

It has been found that 27 measures have been in use for the assessment of fused images. These measures are expressed mathematically and explained in Table 1. It should be noted that the list in Table 1 is not exhaustive; only the commonly used measures are included. For example, mean error and absolute mean error are omitted. Some of the measures listed in Table 1 are for single images only (i.e. for the NR approach). They are included because sometimes we are also interested in the quality of the fused image in an absolute sense, i.e. not necessarily comparative.

Table 1. The 27 measures for assessing the quality of fused images.

The measures in Table 1 are listed in alphanumeric order and may look haphazard; they have been left unclassified by intention. It is expected that, at the end of the analysis conducted in this project, they will be categorised automatically.

In Table 1, σ denotes the standard deviation; μ denotes the mean value; f is the fused image with dimensions M × N; r is the reference image with the same dimensions as f; (x, y) is the location of a pixel in the image, with 1 ≤ x ≤ M and 1 ≤ y ≤ N; K is the number of MS image bands; and h denotes the normalised histogram of the image.

3. Strategies and methodology

3.1 A strategy for this study

As discussed in Section 2, a total of 27 measures commonly used for assessing image quality are found in the literature. It is impractical to make use of all of these measures every time the quality of image fusion is assessed. Therefore, the number of measures must be reduced. The first consideration is to remove redundant measures, that is, one representative measure may be selected from a group of measures that are highly correlated with one another. To achieve this, the following strategy is employed in this study:

  • select some images with different natures for fusion;

  • compute the quality of fused images quantitatively using all these 27 measures;

  • calculate the correlation coefficients of these measures;

  • carry out factor analysis (FA) using principal components on the correlation matrix;

  • conduct a hierarchical clustering on the factors to obtain finer clusters and

  • recommend a representative measure for each cluster.

3.2 Pearson correlation coefficient

A number of solutions are available for obtaining the correlation coefficient (Rodgers and Nicewander 1988). In multivariate statistical analysis, e.g. FA, the Pearson correlation coefficient is one of the most commonly used measures of the correlation between two variables. It reflects the degree of linear relationship between two variables and ranges from +1 to −1. A correlation coefficient of +1 means a perfect positive linear relationship between two variables; at the other extreme, a correlation coefficient of −1 means a perfect negative linear relationship. In the middle, a correlation coefficient of 0 means no linear relationship between the two variables. The definition of the correlation coefficient is given as V2 in Table 1.

A correlation matrix can be built from the pairwise correlation coefficients for use in multivariate statistical analysis. This matrix provides a normalised measure of the strength of the linear relationships among the variables.
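As an illustration of this step, a minimal sketch in Python is given below. It assumes the 27 × 144 matrix of quality scores (one row per measure, one column per fused test image) is held in a NumPy array; the variable names are hypothetical and the values are placeholders, not data from the study.

```python
import numpy as np

# Hypothetical quality matrix: 27 measures (rows) x 144 fused test images (columns).
# In the actual study each entry would be one measure evaluated on one fused image;
# random placeholder values are used here purely for illustration.
quality = np.random.rand(27, 144)

# Pearson correlation between measures: np.corrcoef treats each row as a variable,
# giving the 27 x 27 correlation matrix used as input to the factor analysis.
corr_matrix = np.corrcoef(quality)
print(corr_matrix.shape)  # (27, 27)
```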

3.3 FA using principal components

Principal component analysis (PCA) is a multivariate statistical technique used for data reduction and for deciphering patterns within large sets of data (Wold et al. 1987, Chen et al. 2007). Principal components are eigenvectors of the variance–covariance or correlation matrix of the original data. These eigenvectors may provide significant insight into the structure of the data that is not apparent at first glance.

FA is a statistical method used to describe variability among observed variables in terms of fewer unobserved variables called factors. The observed variables are modelled as linear combinations of the factors, plus 'error' terms. The information gained about the interdependencies can be used later to reduce the set of variables in a dataset. The classical model of FA (Lawley and Maxwell 1962) is given by

x = Λf + ε,

where x is the vector of the observed variables, f is the vector of the latent common factors and ε is the vector of the latent specific factors. The matrix Λ is the so-called loading matrix.

The main purpose of FA is to reduce the contribution of less significant variables in order to simplify the component structure obtained from PCA. This goal can be achieved by rotating the axes defined by PCA; rotation reorients the factor loadings so that the factors are more interpretable. The rotation transforms the factors so as to obtain a simpler, more apparent factor structure. The simplest case is an orthogonal (varimax) rotation, in which the angle between the reference axes of the factors is maintained at 90°. This type of rotation is used here with FA based on principal components.
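The principal-component extraction and varimax rotation can be sketched as follows. This is only an illustrative implementation under stated assumptions, not the exact procedure of the study (which presumably used a standard statistical package): corr_matrix is the 27 × 27 correlation matrix from the previous sketch, and the varimax helper is a textbook version of Kaiser's algorithm.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthogonal (varimax) rotation of a p x k loading matrix (Kaiser's algorithm)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        # Gradient of the varimax criterion with respect to the rotation matrix.
        grad = loadings.T @ (rotated ** 3
                             - (gamma / p) * rotated @ np.diag(np.sum(rotated ** 2, axis=0)))
        u, s, vt = np.linalg.svd(grad)
        rotation = u @ vt
        new_criterion = np.sum(s)
        if new_criterion < criterion * (1.0 + tol):
            break
        criterion = new_criterion
    return loadings @ rotation

# Principal-component extraction from the correlation matrix (corr_matrix is the
# 27 x 27 matrix computed in the previous sketch).
eigvals, eigvecs = np.linalg.eigh(corr_matrix)
order = np.argsort(eigvals)[::-1]              # sort components by decreasing eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_factors = int(np.sum(eigvals > 1.0))         # Kaiser criterion: keep eigenvalues > 1
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
rotated_loadings = varimax(loadings)           # rotated loading matrix (27 x n_factors)
```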

3.4 Hierarchical cluster analysis

Hierarchical cluster analysis (HCA) is a method for finding the underlying structure of objects through an iterative process that associates (using agglomerative methods) or dissociates (using divisive methods) objects one by one, and that is halted when all objects have been processed (Almeida et al. 2007). The resulting clusters of objects should exhibit high internal (within cluster) homogeneity and high external (between clusters) heterogeneity (McKenna 2003).

Given N vectors x1, x2, …, xN, a hierarchical clustering algorithm builds a dendrogram over the sets of clusters according to the minimum distance rule:

  1. start with N clusters, each consisting of exactly one point (or vector);

  2. find the most similar clusters Ci and Cj , then merge Ci and Cj into one cluster and

  3. repeat step (2) for a total of N−1 times.

Any valid metric may be used as a measure of similarity between pairs of observations. The choice of which clusters to merge or split is determined by a linkage criterion, which is a function of the pairwise distances between observations. The commonly used linkage criteria are complete linkage (furthest neighbour), single linkage (nearest neighbour), average linkage (between groups and within groups), centroid linkage and Ward's linkage. The use of different linkage criteria may lead to different clustering results. Milligan (1980) found that hierarchical clustering with average linkage is preferred for data sets possessing the features of internal cohesion and external isolation. This method is thus adopted in this study, and squared Euclidean distance is used as the measure of similarity.

Average linkage treats the distance between two clusters as the average distance between all pairs of objects where one object of a pair belongs to each cluster. Cluster membership is determined by the minimum of

d(U, W) = (1/(NU·NW)) Σi∈U Σk∈W dik,

where dik is the distance between object i in cluster U and object k in cluster W, and NU and NW are the numbers of objects in the two clusters.
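A sketch of this clustering step using SciPy is given below. The feature matrix, its size and the labels are hypothetical (in the study, the measures of each category are clustered separately, and the choice of feature vectors is not restated here), but the average linkage and squared Euclidean distance match the choices described above.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

# Hypothetical feature matrix: one row per measure in the category being clustered,
# e.g. each measure's vector of quality scores over the 144 fused images.
features = np.random.rand(13, 144)  # placeholder values for illustration only

# Agglomerative clustering with average (between-groups) linkage and
# squared Euclidean distance, as adopted in this study.
Z = linkage(features, method='average', metric='sqeuclidean')

# The dendrogram summarises the N-1 combination stages.
dendrogram(Z, labels=[f'V{i + 1}' for i in range(features.shape[0])])
plt.ylabel('Squared Euclidean distance')
plt.show()
```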

4. Experiments

4.1 Test images

To make the number of samples large enough for statistical analysis, a total of 12 pairs of QuickBird PAN and MS images with different land covers are selected as original images for fusion. These covers include urban, rural and mixed urban–rural areas. The image pairs are shown in Figure 1. A total of 12 fusion methods are applied to each image pair: average of PAN and MS images (Average), computationally efficient pixel level image fusion (CEPIF), contrast pyramid (Contrast), filter subtract decimate pyramid (FSD), gradient pyramid (Gradient), Laplace pyramid (Laplace), morphological difference pyramid (Morph.), ratio pyramid (Ratio), region-based (Region), Haar wavelet transform (Haar), discrete wavelet transform (DWT) and nonsubsampled Contourlet transform (Contourlet) fusion. As a result, a total of 144 fused test images are obtained as samples.

Figure 1. Original PAN and MS images for fusion.

Owing to page limits, not all of the fused images are presented here. Figure 2 shows the 12 fused images from pair 1 obtained using the 12 different fusion methods mentioned above, and Figure 3 shows the fused images from the 12 pairs in Figure 1 obtained using the same fusion method (contrast pyramid fusion).

Figure 2. A set of 12 fused images from pair 1 in Figure 1.

Figure 3. A set of 12 fused images from the 12 pairs in Figure 1 using the same fusion method.

4.2 Correlation of measures

All 27 measures are used to assess the quality of the fused images, yielding a 27 × 144 matrix of quality statistics. The correlation coefficients of the measures are given in Table 2. Here, the degrees of freedom (df) is df = n − m, where n is the number of samples and m is the number of variables. From a statistical table of critical values of the correlation coefficient, when df is 117 and the significance level is 1%, the critical value is 0.228; that is, when the correlation coefficient is larger than 0.228, two variables are deemed to be well correlated.

Table 2. Pearson correlation coefficients of the measures listed in Table 1.

Table 2 is the correlation matrix of the 27 measures. From this table, it can be seen that many of the measures are highly correlated with each other. V3 is correlated with all the others, which suggests that CE is the most representative single measure for assessing image fusion quality. However, it is not sufficient to consider only one measure. As it is difficult to detect finer structure in the relationships between measures from this correlation table, the correlation matrix is further analysed using FA.

4.3 Factor analysis

The correlation coefficient matrix is analysed by FA using principal components. To verify the appropriateness of FA for this study, the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy and Bartlett's test of sphericity are first applied to the correlation matrix. For such a test, the sample size should be at least five times the number of variables. Under this condition, when the KMO is greater than 0.5 and the significance of Bartlett's test of sphericity is less than 0.05, the sample is considered adequate for FA (MacCallum 1983).

As shown in Table 3, for this experiment the KMO measure of sampling adequacy is 0.82 and the significance (sig.) of Bartlett's test of sphericity is 0.00 < 0.05. This means that the sample is adequate for FA. The preliminary results of FA are shown in Table 4.

Table 3. KMO and Bartlett's test of sample in this experiment.

Table 4. The initial eigenvalues of components by FA.
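The two adequacy tests described above can be reproduced, for example, with the third-party factor_analyzer Python package; this is an assumption for illustration only (the original analysis was presumably run in a standard statistical package), and quality is the hypothetical 27 × 144 score matrix from the earlier sketch.

```python
# Assumes the third-party 'factor_analyzer' package is installed.
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

samples = quality.T  # 144 observations (fused images) x 27 variables (measures)

chi_square, p_value = calculate_bartlett_sphericity(samples)
kmo_per_variable, kmo_total = calculate_kmo(samples)

# FA is considered appropriate when kmo_total > 0.5 and p_value < 0.05.
print(f'KMO = {kmo_total:.2f}, Bartlett p-value = {p_value:.3f}')
```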

An eigenvalue gives a measure of the significance of a component: the highest eigenvalue corresponds to the most significant component. Eigenvalues of 1.0 or greater are considered significant (Kim and Mueller 1978). It can be seen from Table 4 that there are four principal components with eigenvalues larger than 1, and the accumulated variance of these four principal components is 89.63%. This indicates that these four components reflect the information in the original data matrix well. A scatter plot of the eigenvalues is shown in Figure 4, which clearly indicates which components should be retained (the eigenvalues of the retained factors should be larger than 1).

Figure 4. The scatter plot of components.
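The eigenvalue-based selection of components can be illustrated with the quantities already computed in the factor-analysis sketch; the numbers printed will of course differ from Table 4, since the sketch uses placeholder data rather than the study's scores.

```python
import numpy as np

# Percentage of variance explained by each component and the cumulative total;
# for a correlation matrix the eigenvalues sum to the number of variables (27).
explained = eigvals / eigvals.sum() * 100.0
cumulative = np.cumsum(explained)

# Kaiser criterion: retain components whose eigenvalue exceeds 1.
retained = np.flatnonzero(eigvals > 1.0)
print(retained.size, cumulative[retained[-1]])  # number retained, cumulative variance (%)
```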

An orthogonal rotation using the varimax method is then applied to simplify the component structure and make it easier to interpret.

Table 5 shows the component loading matrix after rotation. The loading is the correlation coefficient between a measure and a component. For each measure, the higher the loading value on a component, the more important the measure is for that component. The component on which a measure has the highest loading is used to classify the measure into categories (factors). From Table 5, it can be seen that the measures Warp, SAM, RMSE, RM, WSNR, RASE, PSNR, MSE, SNR, ERGAS, NLSE, QNR and CE have their highest loadings on component 1 and are thus classified into one category. As described in Table 1, these measures represent the spectral distortion, signal-to-noise ratio or error of the fused image with respect to the reference image. As the distortion (or the noise or the error) is calculated from the difference between the fused and reference images, this category can be interpreted as difference-based. The measures UIQI, SSIM, IFC, CC, Q4, RRIQA and MI have their highest loadings on the second component and are also classified into one category. These measures represent the structural similarity, correlation or information shared between the fused and reference images; thus, this category can be interpreted as similarity-based. The measures SD, QILV, SF, AG, Entropy and UE have their highest loadings on the third component and form another category. As described in Table 1, SD, SF and AG measure the clarity of the fused image, while QILV and UE measure the information of the fused image. Thus, this category can be interpreted as information-clarity-based. The remaining measure has its highest loading on component 4 and forms a category by itself. As this measure quantifies the gradient information transferred from the PAN and MS images to the fused image, this category can be interpreted as overall-based.

Table 5. Rotated component loading matrix.
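A one-line version of this classification rule, applied to the rotated loading matrix from the earlier sketch, is shown below; taking the absolute value of the loadings is an added assumption to handle measures that load negatively.

```python
import numpy as np

# Assign each measure to the component (factor) on which it has the highest
# absolute rotated loading, mirroring the grouping read off Table 5.
categories = np.argmax(np.abs(rotated_loadings), axis=1)
for i, component in enumerate(categories):
    print(f'V{i + 1} -> component {component + 1}')
```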

4.4 Hierarchical clustering analysis

Hierarchical clustering creates a hierarchy of clusters, which may be represented in a tree structure called a dendrogram. The root of the tree is a single cluster containing all measures, and the leaves correspond to individual measures. The number of clusters can be found from the crossings of a vertical cut-off line with the dendrogram. The optimal cut-off line in the dendrogram is placed at the largest difference in distance between two successive combination stages (Beaulieu et al. 2008). The number of stages equals the number of measures minus 1. The last stage is reached when clusters U and W together include all the measures to be tested.

The combination schedule and the dendrogram for the category with the highest loadings on component 1 (i.e. the difference-based category) are shown in Table 6 and Figure 5, respectively. The number of measures in this category is 13, so the number of combination stages is 12. From Table 6, it can be seen that the largest distance difference between two successive stages is 409.3, occurring between stage 11 and stage 12. Thus, in Figure 5, the optimal cut-off line lies at a distance between 123.6 and 532.9. In this way, two sub-categories are obtained: the first consists of Warp, RMSE, MSE, RM, ERGAS, RASE, SAM, NLSE, QNR and CE, and the second consists of PSNR, SNR and WSNR. Indeed, this is quite understandable: the first sub-category measures the difference between the fused image and the reference image, whereas the second measures the ratio of signal to noise. Of course, the optimal number of clusters is ultimately determined by the actual requirements of the quality assessment. The first sub-category can be further sub-divided by changing the position of the cut-off line, and the second sub-category can likewise be sub-divided.

Figure 5. Dendrogram for difference-based category.

Table 6. Combination schedule of measures in difference-based category.
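The cut-off rule described above (place the cut between the two successive stages whose merge distances differ most) can be sketched as follows, using the linkage matrix Z from the earlier clustering sketch; with placeholder data the resulting clusters are illustrative only.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster

# Merge distances of the successive combination stages (third column of the
# SciPy linkage matrix), and the largest gap between two successive stages.
merge_distances = Z[:, 2]
gaps = np.diff(merge_distances)
cut_stage = int(np.argmax(gaps))

# Cut the dendrogram inside that gap and read off the resulting sub-categories.
threshold = (merge_distances[cut_stage] + merge_distances[cut_stage + 1]) / 2.0
labels = fcluster(Z, t=threshold, criterion='distance')
print(labels)  # cluster index of each measure in this category
```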

Similarly, the clustering sequence for the category with the highest loadings on component 2 is shown in Table 7 and the resulting dendrogram is shown in Figure 6. It can be seen that RRIQA is the last measure to be combined into the cluster. This means that RRIQA is relatively independent of the other measures in this category and is thus a good representative for measuring the similarity between fused and reference images. When this category is further sub-divided, for example with the cut-off line at a distance between 67.3 and 120.3, three sub-categories are obtained: the first consists of UIQI, SSIM, IFC and Q4; the second of CC and MI; and the third of RRIQA by itself.

Figure 6. Dendrogram for similarity-based category.

Table 7. Combination schedule of measures in similarity-based category.

The clustering sequence for the category with the highest loadings on component 3 is shown in Table 8 and the resulting dendrogram is shown in Figure 7. From Figure 7, it can be seen that QILV is the last measure to be combined and is thus representative of this category. Placing the cut-off line at different distances yields different sub-divisions of this category.

Figure 7. Dendrogram for information-clarity-based category.

Table 8. Combination schedule of measures in information-clarity-based category.

5. Conclusions

In this study, a comprehensive analysis of 27 existing measures for assessing the quality of fused images has been conducted. A total of 12 image pairs with different types of land cover were used for testing. Correlation analysis, FA using principal components and hierarchical cluster analysis have been employed as the mathematical tools. Through this comparative analysis, the 27 measures are classified into five categories, and a representative measure is identified for each category (the representatives are named in the paragraph that follows):

  1. difference-based: Warp, SAM, RMSE, RM, RASE, MSE, ERGAS, NLSE, QNR and CE;

  2. noise-based: SNR, PSNR and WSNR;

  3. similarity-based: UIQI, SSIM, IFC, CC, Q4, RRIQA and MI;

  4. information-clarity-based: SD, QILV, SF, AG, Entropy and UE; and

  5. overall-based: the gradient-transfer measure identified in Section 4.3.

That is to say, the representative measures for assessing image fusion quality are CE for difference-based, WSNR for noise-based, RRIQA for similarity-based, QILV for information-clarity-based and the gradient-transfer measure for overall-based. It is therefore recommended that

  1. when evaluating the difference between fused and original MS image, users should make use of difference-based measures;

  2. when evaluating the noise of the fused image compared to the original MS image, users should make use of the noise-based measures;

  3. when evaluating the similarity of structure or the shared information between the fused image and the original MS image, users should make use of the similarity-based measures;

  4. when evaluating the information or clarity of the fused image, users should make use of the information-clarity-based measures; and

  5. when evaluating the overall quality of the fused image compared to the original MS and PAN images, users should make use of the overall-based measures.

Of course, if there is a need, more categories and representative measures can be obtained by lowering the cut-off distance in the hierarchical clustering. This will depend on the actual requirements.

Acknowledgement

This research was supported by the State 973 project (2006CB701304) in China and a project funded by The Hong Kong Polytechnic University (G-U633).

References

  • Aiazzi, B., 2006. Information-theoretic assessment of fusion of multispectral and panchromatic images. Proceedings of the 9th international conference on information fusion, 10–13 July 2006, Florence, Italy, 1–5.
  • Aja-Fernandez, S., 2006. Image quality assessment based on local variance. Proceedings of the 28th IEEE annual international conference on engineering in medicine and biology society, 30 August–3 September 2006, New York, USA, 4815–4818.
  • Almeida, J.A.S., 2007. Improving hierarchical cluster analysis: a new method with outlier detection and automatic clustering. Chemometrics and Intelligent Laboratory Systems, 87 (2), 208–217.
  • Alparone, L., 2004. A global quality measurement of pan-sharpened multispectral imagery. IEEE Geoscience and Remote Sensing Letters, 1 (4), 313–317.
  • Alparone, L., 2006. A new method for MS + pan image fusion assessment without reference. IEEE international conference on geoscience and remote sensing symposium, 31 July–4 August 2006, Denver, USA, 3802–3805.
  • Alparone, L., 2007. Comparison of pansharpening algorithms: outcome of the 2006 GRS-S data-fusion contest. IEEE Transactions on Geoscience and Remote Sensing, 45 (10), 3012–3021.
  • Angell, C., 2005. Fusion performance using a validation approach. Proceedings of the 8th international conference on information fusion, 25–28 July 2005, UK, 1170–1177. Waterfall Solutions.
  • Beaulieu, M., Foucher, S., and Gagnon, L., 2003. Multi-spectral image resolution refinement using stationary wavelet transform. IEEE international conference on geoscience and remote sensing symposium, 21–25 July 2003, Toulouse, France, 4032–4034.
  • Canga, E.F., 2005. Characterisation of image fusion quality metrics for surveillance applications over bandlimited channels. Proceedings of the 8th international conference on information fusion, 25–28 July 2005, UK, 484–490. Waterfall Solutions.
  • Chen, Y. and Blum, R.S., 2005. Experimental tests of image fusion for night vision. Proceedings of the 8th international conference on information fusion, 25–28 July 2005, UK, 491–498. Waterfall Solutions.
  • Chen, K., 2007. Multivariate statistical evaluation of trace elements in groundwater in a coastal area in Shenzhen, China. Environmental Pollution, 147 (3), 771–780.
  • Choi, M., 2003. Biorthogonal wavelets-based Landsat 7 image fusion. Proceedings of the 24th Asian conference on remote sensing and the international symposium on remote sensing, 3–7 November 2003, Busan, Korea, 494–496.
  • Damera-Venkata, N., 2000. Image quality assessment based on a degradation model. IEEE Transactions on Image Processing, 9 (4), 636–650.
  • Eskicioglu, A.M. and Fisher, P.S., 1995. Image quality measures and their performance. IEEE Transactions on Communications, 43 (12), 2959–2965.
  • Karathanassi, V., Kolokousis, P., and Ioannidou, S., 2007. A comparison study on fusion methods using evaluation indicators. International Journal of Remote Sensing, 28 (10), 2309–2341.
  • Kim, J.O. and Mueller, C.W., 1978. Introduction to factor analysis: what it is and how to do it. Quantitative applications in the social sciences series. Newbury Park, California: Sage University Press, 48–49.
  • Kite, T.D., Evans, B.L., and Bovik, A.C., 2000. Modeling and quality assessment of halftoning by error diffusion. IEEE Transactions on Image Processing, 9 (5), 909–922.
  • Lawley, D.N. and Maxwell, A.E., 1962. Factor analysis as a statistical method. Journal of the Royal Statistical Society, Series D (The Statistician), 12 (3), 209–229.
  • MacCallum, R., 1983. A comparison of factor analysis programs in SPSS, BMDP, and SAS. Psychometrika, 48 (2), 223–231.
  • McKenna, J.E. Jr., 2003. An enhanced cluster analysis program with bootstrap significance testing for ecological community analysis. Environmental Modelling and Software, 18 (3), 205–220.
  • Milligan, G.W., 1980. An examination of the effect of six types of error perturbation on fifteen clustering algorithms. Psychometrika, 45 (3), 325–342.
  • Parcharidis, I. and Kazi-Tani, L.M., 2000. Landsat TM and ERS data fusion: a statistical approach evaluation for four different methods. IEEE international conference on geoscience and remote sensing symposium, 24–28 July 2000, Honolulu, Hawaii, USA, 2120–2122.
  • Pohl, C. and Van Genderen, J.L., 1998. Multisensor image fusion in remote sensing: concepts, methods and applications. International Journal of Remote Sensing, 19 (5), 823–854.
  • Qu, G.H., Zhang, D.L., and Yan, P.F., 2002. Information measure for performance of image fusion. Electronics Letters, 38 (7), 313–315.
  • Ranchin, T. and Wald, L., 2000. Fusion of high spatial and spectral resolution images: the ARSIS concept and its implementation. Photogrammetric Engineering and Remote Sensing, 66 (1), 49–61.
  • Rodgers, J.L. and Nicewander, W.A., 1988. Thirteen ways to look at the correlation coefficient. The American Statistician, 42 (1), 59–66.
  • Sasikala, M. and Kumaravel, N., 2007. A comparative analysis of feature based image fusion methods. Information Technology Journal, 6 (8), 1224–1230.
  • Sheikh, H.R., Bovik, A.C., and de Veciana, G., 2005. An information fidelity criterion for image quality assessment using natural scene statistics. IEEE Transactions on Image Processing, 14 (12), 2117–2128.
  • Valet, L., Mauris, G., and Bolon, P., 2001. A statistical overview of recent literature in information fusion. IEEE Aerospace and Electronic Systems Magazine, 16 (3), 7–14.
  • Vijayaraj, V., O'Hara, C.G., and Younan, N.H., 2004. Quality analysis of pansharpened images. IEEE international conference on geoscience and remote sensing symposium, 20–24 September 2004, Alaska, USA, 85–88.
  • Wald, L., Ranchin, T., and Mangolini, M., 1997. Fusion of satellite images of different spatial resolutions: assessing the quality of resulting images. Photogrammetric Engineering and Remote Sensing, 63 (6), 691–699.
  • Wang, Z. and Bovik, A.C., 2002. A universal image quality index. IEEE Signal Processing Letters, 9 (3), 81–84.
  • Wang, Z. and Bovik, A.C., 2006. Modern image quality assessment. New York, USA: Morgan and Claypool.
  • Wang, Z., 2004. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13 (4), 600–612.
  • Wang, Z. and Simoncelli, E.P., 2005. Reduced-reference image quality assessment using a wavelet-domain natural image statistic model. Proceedings of the Society of Photo-optical Instrumentation Engineers, Human Vision and Electronic Imaging X, 5666, 149–159.
  • Wold, S., Esbensen, K., and Geladi, P., 1987. Principal component analysis. Chemometrics and Intelligent Laboratory Systems, 2 (1), 37–52.
  • Xydeas, C. and Petrovic, V., 2000. Objective image fusion performance measure. Electronics Letters, 36 (4), 308–309.
  • Yang, X.H., 2007. Fusion of multi-spectral and panchromatic images using fuzzy rule. Communications in Nonlinear Science and Numerical Simulation, 12 (7), 1334–1350.
