
Local Evaluation of a Fusion System for 3-D Tomographic Image Interpretation

Pages 362-378 | Published online: 06 Nov 2010

Abstract

Information fusion has been studied in various domains of computer science and engineering, and fusion techniques are increasingly used. Such systems aim to build new, more useful information from a large amount of input information. Over time, fusion systems have become complex systems integrating information extraction, representation, combination, and interpretation. The performance evaluation of such systems has therefore become a real problem: the choice of methods and parameter values has a significant impact on the quality of the results, and a global evaluation of the fused result does not allow the end-users to adjust the numerous parameters. We propose a local approach that evaluates the mission completeness of the fusion system subparts. In this article, we focus on formulating the mission of the extraction subparts, and we measure their degree of achievement. We aim at showing the end-users which subparts do not completely achieve the functionality they were designed for.

NOMENCLATURE

$A_j$ = attribute computed from the original 3-D image, $j \in \{H, C, Cor, E, LH, D1, D2, D3, D4\}$ the set of attributes
$A_x, A_y, A_z$ = PCA window size
$C_i$ = decision criterion, $i \in \{1, 2\}$
$d_i$ = adjustment of the attribute output dynamics
$D_x, D_y, D_z$ = analyzing direction of the co-occurrence matrix
$G_x, G_y, G_z$ = gradient window size
$H_{A_j}$ = histogram of the attribute $A_j$
$\bar{H}_{A_j}$ = normalized histogram of the attribute $A_j$
$\bar{H}_{A_j}^{R_i}$ = normalized histogram of the voxels contained in the region $R_i$ of the attribute $A_j$
$R_i$ = sought-after region of the set $\{R_1, \ldots, R_n\}$
$\bar{R}_i$ = the set of sought-after regions $R_j$ such that $j \neq i$
$s$ = severity degree
$S_{A_j}^{R_i}$ = region separability index of the attribute $A_j$ for the region $R_i$
$T_{Global}$ = global detection rate over the set of sought-after regions $\{R_1, \ldots, R_n\}$
$T_{R_i}$ = detection rate of the sought-after region $R_i$
$W_x, W_y, W_z$ = window size for co-occurrence matrix computation
$\alpha$ = Deriche filter coefficient
$\lambda_1, \lambda_2, \lambda_3$ = eigenvalues representative of the intensity gradient organization in the image

1. INTRODUCTION

With the democratization of image acquisition devices, image interpretation tasks are increasingly used by experts to analyze a given phenomenon. Given the large amount of data and the repetitive nature of such analyses, experts turn to computer-based systems to help them in their work.

In most image analysis applications, the experts look for different kinds of regions simultaneously. The sought-after regions are generally quite different from one another, and it is extremely hard to detect them with a single image processing measure. Several complementary measurements are needed (texture measurements, structure orientations, form-based measurements, etc.) and they must be fused to form the global result. Such systems are called information fusion systems (Hall and Llinas 2001). Their role is to manage a complete information processing chain, from information extraction to information interpretation in the expert's working space. The involvement of humans in fusion systems has given rise to cooperative fusion systems (Gunes et al. 2003), in which the user is involved in the different stages of information processing.

Cooperative fusion systems devoted to image interpretation are more and more complex (Kokar, Tomasik, and Weyman 2004). They are composed of many subparts, as illustrated in Figure 1. The first subpart concerns the extraction of pertinent information from the original image; several image processing techniques can be used to characterize the different sought-after regions. The extracted information must then be represented in a common and commensurable space in order to be aggregated in the third subpart. Finally, the output must be expressed in a space understandable to the end-user. This step is achieved by the interpretation subpart.

Figure 1 Process of a fusion system for image interpretation.

Such systems are not easy for end-users who are not specialists in computer science to use and adjust. Moreover, an adjustment optimized for a given dataset is not necessarily the best one for other data. This raises the problem of the performance evaluation (Levin 2001) of the information fusion system. Generally, fusion systems are evaluated through the quality of the global output result (Zhang 1996). However, this quality is difficult to assess completely because it involves both quantitative and qualitative aspects. Measured in the output space, it is also not suited to feeding back on the different subparts of the system. This kind of evaluation is therefore not sufficient to improve the interaction with the experts, and it is not adapted to the management of fusion systems (in which parameters must be adjusted, blocks added or removed, etc.).

In this article, we propose to evaluate the local performance of the extraction subparts of fusion systems, in which the parameters are the most numerous. The proposed approach is based on evaluating the achievement of each subpart's mission. Its use for the parameter adjustment task is illustrated both on the cartography of synthetic images and on a real application. The obtained results show how the approach contributes explanatory information about the fusion system.

The article is organized as follows. In section 2, the fusion system for 3-D image interpretation is presented and the common global result evaluation is discussed. In section 3, another point of view is proposed to better qualify the local parts of the fusion process. In section 4, the benefits of the approach are illustrated on two different applications. The conclusion and perspectives of this work are given in section 5.

2. AN OVERVIEW OF THE FUSION SYSTEM

2.1. The Studied Cooperative Fusion System

Information fusion systems (Appriou et al. 2001) are well known for their capability to take several pieces of information into account and to manage their completeness, uncertainty, precision, etc., making the construction of better information possible. The fusion system discussed here was designed to segment 3-D images into regions in order to facilitate their understanding (Jullien et al. 2008).

The designed system is presented in Figure 2. Its global objective is to decide the class to which each voxel of the image belongs. It works in cooperation with the end-users, who give some examples of the sought-after regions. The extraction subpart consists of different image characteristic measurements, based on image processing techniques, that acquire pertinent information about the sought-after regions. In what follows, two main families of measurements are used.

Figure 2 Fusion system designed for 3-D image analysis.

Texture measurement. This is based on the co-occurrence matrix. Haralick and Shapiro (1992) proposed a selection of five main measurements. The attributes are homogeneity $A_H$, contrast $A_C$, correlation $A_{Cor}$, entropy $A_E$, and local homogeneity $A_{LH}$.
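The article does not give the attribute formulas, but they correspond to standard Haralick-type statistics. The following minimal Python sketch (our own illustration, not the authors' code; the direction handling and the mapping of "homogeneity" to the angular second moment are assumptions) shows how the five attributes can be computed from a 3-D window of quantized gray levels:

```python
import numpy as np

def cooccurrence_attributes(window, direction=(1, 0, 0), levels=256):
    """Five Haralick-style attributes of a 3-D window of ints in [0, levels)."""
    # Pair each voxel with its neighbor at offset `direction`, staying in bounds.
    src_sl, dst_sl = [], []
    for n, d in zip(window.shape, direction):
        src_sl.append(slice(max(0, -d), n - max(0, d)))
        dst_sl.append(slice(max(0, d), n - max(0, -d)))
    src = window[tuple(src_sl)].ravel()
    dst = window[tuple(dst_sl)].ravel()
    # Accumulate and normalize the co-occurrence matrix.
    p = np.zeros((levels, levels))
    np.add.at(p, (src, dst), 1.0)
    p /= p.sum()
    i, j = np.indices(p.shape)
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    s_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    s_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    return {
        "homogeneity": (p ** 2).sum(),                 # angular second moment
        "contrast": ((i - j) ** 2 * p).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * p).sum() / (s_i * s_j),
        "entropy": -(p[p > 0] * np.log(p[p > 0])).sum(),
        "local_homogeneity": (p / (1 + (i - j) ** 2)).sum(),  # inverse difference moment
    }
```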

Local organization. The local structure organization within the images is evaluated through a principal component analysis (PCA) of the gray-level intensity gradients. The three obtained eigenvalues $\lambda_1$, $\lambda_2$, and $\lambda_3$ are representative of the gradient organization in the image (Hancock, Baddeley, and Smith 1992). They are synthesized into four attributes $A_{D1}$, $A_{D2}$, $A_{D3}$, and $A_{D4}$ representing the organizational strength in several directions.
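A possible reading of this step, as a sketch (our code, with assumed window handling; the article does not detail how $\lambda_1$–$\lambda_3$ are turned into $A_{D1}$–$A_{D4}$, so only the eigenvalue computation is shown):

```python
import numpy as np

def gradient_organization(volume, center, half_size):
    """Eigenvalues l1 >= l2 >= l3 of the PCA of intensity gradients in a window."""
    gx, gy, gz = np.gradient(volume.astype(float))
    cx, cy, cz = center
    hx, hy, hz = half_size
    sl = (slice(cx - hx, cx + hx + 1),
          slice(cy - hy, cy + hy + 1),
          slice(cz - hz, cz + hz + 1))
    # Collect the 3-D gradient vectors of all voxels of the window.
    g = np.stack([gx[sl].ravel(), gy[sl].ravel(), gz[sl].ravel()], axis=1)
    g = g - g.mean(axis=0)            # center the vectors before PCA
    cov = g.T @ g / len(g)            # 3x3 covariance of the gradient vectors
    lam = np.linalg.eigvalsh(cov)     # ascending eigenvalues
    return lam[::-1]                  # descending: l1 >= l2 >= l3
```

A strongly oriented texture yields one dominant eigenvalue, whereas a chaotic one spreads the variance over all three.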

The representation step consists of building similarity maps for each attribute and each sought-after region, so that the information is expressed in a common and commensurable space. Then, different Choquet integrals are applied to compute, for each voxel, a degree of belonging to each sought-after region. The main advantage of this aggregation tool is its capacity to take the interactions between attributes into account (Grabisch 1996). Finally, an interpretation step based on a maximum-belonging criterion is applied to build the complete mapping of the 3-D image.
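For illustration, a minimal discrete Choquet integral can be written as follows (a sketch under the assumption of a fuzzy measure given explicitly on all attribute subsets; the measure values below are purely illustrative, not those identified for the studied system):

```python
import numpy as np

def choquet(x, mu):
    """Choquet integral of attribute scores x w.r.t. fuzzy measure mu
    (mu maps frozensets of attribute indices to [0, 1], monotone, mu(all)=1)."""
    order = np.argsort(x)                 # process scores in ascending order
    total, prev = 0.0, 0.0
    remaining = set(range(len(x)))        # attributes whose score >= current level
    for idx in order:
        total += (x[idx] - prev) * mu[frozenset(remaining)]
        prev = x[idx]
        remaining.discard(idx)
    return total

# Illustrative 3-attribute measure with interactions (hypothetical values).
mu = {frozenset({0, 1, 2}): 1.0, frozenset({0, 1}): 0.7, frozenset({0, 2}): 0.8,
      frozenset({1, 2}): 0.5, frozenset({0}): 0.4, frozenset({1}): 0.3,
      frozenset({2}): 0.45, frozenset(): 0.0}
print(choquet(np.array([0.2, 0.9, 0.6]), mu))  # 0.49
```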

The fusion system requires the adjustment of many parameters to adapt to the image resolution, the image range, or the sought-after regions. Most of these parameters are concentrated in the extraction subpart; Table 1 lists all the parameters of the system. A non-specialist in image processing and data fusion will have difficulties adjusting them. Moreover, all processing is performed in 3-D space, which requires significant computation time. The algorithmic complexity is also high, so computing times depend strongly on the different window sizes. Under these conditions, interactivity with the user becomes difficult without a good understanding of the fusion system's performance.

Table 1. Fusion system parameters

2.2. Traditional Approaches to Evaluate Image Fusion Systems

The recent proliferation of systems employing image fusion algorithms has prompted the need for reliable ways of evaluating and comparing their performance for any given application or scenario. In many applications, the results of image fusion are evaluated by subjective criteria (Toet and Franken 2003). However, assessing image fusion performance has proved hard in practice, particularly when the intended use is to produce a visual display. The evaluation is usually performed through robust yet impractical subjective trials that can take days to complete. Objective fusion metrics provide a computer-based alternative that needs no display equipment or complex organization of an audience.

In the literature, many quantitative measures have been proposed to quantify the performance of the results (Cvejic, Loza, and Bull 2005; Wang and Bovik 2002; Piella and Heijmans 2003). Some objective image quality assessment methods (Wang, Bovik, and Sheikh 2004) are not always applicable because they are full-reference (FR) methods that require full access to the original images as references. It is therefore necessary to develop quality assessment algorithms that do not require such access. Unfortunately, no-reference (NR), or blind, image quality assessment is an extremely difficult task (Li 2002). Most proposed NR quality metrics are designed for a set of predefined distortion types and may not generalize to other images. One interesting recent development in image quality assessment research is the design of reduced-reference (RR) methods (Wang and Simoncelli 2005). These methods do not require full access to the reference images but only partial information, such as positive examples. Conceptually, RR methods make the quality assessment task easier than NR methods at the additional cost of transmitting some information to the users.

In the case of the studied system, the result is given to the end-users as a segmentation into several regions of interest, and relevance feedback is expected. This is an online learning strategy, which adapts the response of a system by exploiting the user's interactions. It improves the performance of the system since (a) it reduces the gap between the high-level semantics that humans use to perceive rich visual information and low-level features, and (b) it confronts problems arising from the subjectivity of human perception, which often interprets the same visual content in different ways and under different conditions (Rui et al. 1998). Finally, the quality of a complex system mainly depends on the way its tasks are performed, and not only on the global results.

3. TOWARD A LOCAL EVALUATION APPROACH

3.1. Formulation of Subpart Mission

A useful innovation for the end-user is a system that not only offers a decision but also provides an explanation of why a specific decision is made (Dasarathy 2000). The global evaluation of the fused image does not provide such an explanation, especially in the context of complex systems, so the effect of parameter adjustments cannot be known. Two adjustments may even have conflicting impacts on the global result without any further information being available. For example, considering a criterion that gives information on the compactness of a region, it is difficult to guide users in adjusting the gradient window size according to the compactness. It is very difficult to explain the influence and dependency of a parameter directly from the global result. A local measure is therefore needed to adjust the subpart parameters. Another difficulty is that the global result (the 3-D segmented image) is not comparable to the extracted attributes or to the input image, because the representation spaces are different. As a consequence, measures defined for global result evaluation are not applicable to subpart outputs. Moreover, users are not able to define and express what the local evaluation should be, because the local working spaces are not understandable to them.

A means of better adjusting the fusion system is to focus on the subparts that do not completely achieve the functionality they were designed for. To this end, the main mission of each subpart must be well formulated; a mission achievement measurement then allows the performance of the subpart to be quantified according to its objective, independently of the method used inside the subpart.

The missions of the different subparts composing the fusion system devoted to 3-D image analysis can be expressed as follows.

Extraction subpart. It extracts information from the original data. The output may contain a smaller quantity of information than the original image, but it must bring better separability between the sought-after regions.

Representation subpart. It consists in representing the extracted information in a common, commensurable space. The objective is to preserve separability through the transformation.

Aggregation subpart. This step aggregates the different pieces of information to build new, more interesting information, reducing the information dimensionality while increasing its robustness.

Interpretation subpart. It expresses the fused result in a space understandable to the end-users.

This article focuses on the extraction subpart, which contains numerous parameters, and proposes a process to evaluate its performance according to the mission it was designed for. The proposed indicator measures the separability between the regions at the output of the extraction subpart.

3.2. Region Separability Measure

The reference examples of the sought-after regions provided by the experts are used to compute the proposed region separability index. Indeed, even though a reference region expresses information in the output space of the fusion system, it contains both the type of the region and its localization, information that is independent of the space in which the calculation is made. The idea is to project the mask of the reference regions into the attribute space. The evaluation process then consists in comparing the distributions of attribute values between voxels of different sought-after regions. Figure 3 illustrates the approach. The experts define examples of the sought-after regions, which are later used as reference regions in the fusion process. These regions are noted $R_1$, $R_2$, and $R_3$ on the input image of Figure 3. For a given parameter adjustment of the jth attribute $A_j$, the histogram of the voxels contained in region $R_1$ is compared to the histogram of the complementary region set $\bar{R}_1$ (Figure 4).

Figure 3 Application of the reference region mask into the attribute space.

Figure 4 Illustration of two normalized histograms (from the industrial application).

The histogram $H_{A_j}^{R_i}$ of the jth attribute $A_j$, computed over the voxels $v$ belonging to a region $R_i$, is formulated as

$$H_{A_j}^{R_i}(index) = \operatorname{Card}\{\, v \in R_i \mid A_j(v) = index \,\}$$

with

$A_j(v)$ the value of the jth attribute $A_j$ for the voxel $v$;

$R_i$ a sought-after region of the region set $\{R_1, \ldots, R_n\}$;

$j \in \{H, C, Cor, E, LH, D1, D2, D3, D4\}$ the set of attributes;

$index$ representing all the possible values that attribute $A_j$ can take; attributes are encoded as 8-bit images, so $index \in [0, 255]$.

The obtained histograms are then normalized by the total number of voxels belonging to the region they were computed for:

$$\bar{H}_{A_j}^{R_i}(index) = \frac{H_{A_j}^{R_i}(index)}{\operatorname{Card}(R_i)}$$

where $\operatorname{Card}(R_i)$ is the cardinality of $R_i$ (i.e., the number of voxels of region $R_i$).

The region separability measure is built by comparing the two histograms $\bar{H}_{A_j}^{R_i}$ and $\bar{H}_{A_j}^{\bar{R}_i}$. Many measures between histograms exist (Cha and Srihari 2002); the choice was guided by the following main objective: the separation between the two histograms, independently of their forms and spreads. In this case, an intersection surface evaluation such as the Manhattan distance is interesting. The region separability of attribute $A_j$ for region $R_i$, noted $S_{A_j}^{R_i}$, is given by

$$S_{A_j}^{R_i} = \frac{1}{2}\sum_{index=0}^{255} \left| \bar{H}_{A_j}^{R_i}(index) - \bar{H}_{A_j}^{\bar{R}_i}(index) \right|$$

(the 1/2 factor normalizes the distance between two unit-sum histograms to [0, 1]). The obtained distance is equal to 1 when the two histograms have an empty intersection and 0 when they completely overlap. The Manhattan distance is symmetric and respects the triangle inequality. In the context of the cartography of a 3-D image, an attribute provides useful information when it can discriminate at least one sought-after region well. The proposed separability measure quantifies this discriminating power. The next section illustrates two different uses of the proposed separability index, for fusion system design and for parameter adjustment.
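A direct implementation of the index is straightforward. The sketch below (our code, not the authors') assumes 8-bit attribute images and boolean reference masks, with the 1/2 normalization discussed above:

```python
import numpy as np

def normalized_histogram(values):
    """Normalized 256-bin histogram of 8-bit attribute values of a region."""
    h = np.bincount(values.astype(np.uint8).ravel(), minlength=256)
    return h / h.sum()

def separability(attr, mask_ri, mask_others):
    """Separability S of attribute `attr` for region R_i vs. the other regions.
    attr: 3-D attribute image; mask_ri / mask_others: boolean voxel masks."""
    h_ri = normalized_histogram(attr[mask_ri])
    h_rest = normalized_histogram(attr[mask_others])
    # Halved Manhattan distance: 1 for disjoint histograms, 0 for identical ones.
    return 0.5 * np.abs(h_ri - h_rest).sum()
```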

4. EXPERIMENTAL RESULTS

4.1. Application to 3-D Synthetic Images

The local evaluation is illustrated on a synthetic 3-D image (shown in Figure 5) of size 245 × 200 × 250 voxels. Three textured regions are sought: $R_1$ is a region with low intensity variance, $R_2$ is a region with high intensity variance compared to $R_1$, and $R_3$ is composed of a succession of two textures that forms a kind of oriented region. The advantage of using a synthetic image is that a full reference (FR) is available at the output.

Figure 5 A 3-D synthetic image.

Two attributes are computed: the first one, $A_{LH}$, is the local homogeneity computed from the co-occurrence matrix; the second one, $A_{D2}$, is based on local organization. The initial parameters of the attributes (shown in Table 2) were set approximately according to the structure resolution of the sought-after regions. The results of the extraction subpart evaluation and the detection rates are given in Table 3. The detection rate of each sought-after region and the global detection rate $T_{Global}$ are obtained by computing the confusion matrix on the voxels of the reference regions. The obtained global detection rate is $T_{Global} = 86.42\%$. Regions $R_1$ and $R_3$ are detected less well, even though all computed attributes present an interesting separability. A way to increase the detection rate is therefore to give the system more information about the sought-after regions.
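The bookkeeping behind these rates can be sketched as follows (an assumed implementation consistent with the description, with label 0 used for voxels outside the reference masks and for the reject class):

```python
import numpy as np

def detection_rates(predicted, reference, n_regions):
    """Per-region rates T_Ri and global rate T_Global over the reference voxels.
    predicted / reference: integer label volumes with labels 1..n_regions."""
    rates = {}
    correct, total = 0, 0
    for r in range(1, n_regions + 1):
        mask = reference == r                      # reference voxels of region r
        hits = np.count_nonzero(predicted[mask] == r)
        rates[r] = hits / np.count_nonzero(mask)   # T_Ri
        correct += hits
        total += np.count_nonzero(mask)
    return rates, correct / total                  # ({r: T_Ri}, T_Global)
```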

Table 2. First stage: The initial parameters of the attributes

Table 3. First stage: Initial fusion process

Thus, a third attribute, $A_{D3}$, based on local organization, is computed using default parameters. Table 4 summarizes the results when $A_{D3}$ is added to the fusion system: it slightly increases the global detection rate $T_{Global}$. The separability indicator shows that the new attribute does not yet have an interesting separability for $R_1$ and $R_3$. A better separability is obtained by adjusting the parameters of $A_{D3}$: by decreasing the PCA window sizes to adapt the computation to the image resolution and by increasing the dynamics of this attribute, we obtained a strong separability for regions $R_1$ and $R_3$ (Table 5). With this new adjustment, guided by the separability measure, the global detection rate becomes interesting ($T_{Global} = 93.86\%$), and the three regions are well detected. Figure 6 presents a 2-D section of the classified images obtained at each stage. Dark gray voxels represent region $R_1$, light gray voxels region $R_2$, and white voxels region $R_3$; black voxels are non-classified voxels (reject class), due to the weak degree of belonging of these voxels to all the sought-after regions. These images clearly illustrate the increasing quality of the classification, with fewer rejected voxels and less fragmentation through the successive stages.

Figure 6 Obtained 3-D cartographies. (a) Original image, (b) first stage, (c) second stage, and (d) third stage.

Table 4. Second stage: Adding attribute $A_{D3}$

Table 5. Third stage: Parameter setting of attribute $A_{D3}$

4.2. Application to 3-D Tomographic Image Interpretation

This real application concerns the analysis of electro-technical parts manufactured by Schneider Electric. The studied parts are mainly composed of glass fibers mixed with an organic matrix, and the quality of the parts is directly correlated with the fiber organization. The experts (geophysicists, part designers, etc.) try to understand the internal organization of the parts to find the best fabrication process (fiber length, injection point, baking time, etc.). Their main goal is to obtain sound elements with excellent mechanical and dielectric performance, to be used in low- and high-voltage environments. The method chosen by Schneider Electric to analyze the parts is based on x-ray computed tomography (CT), a reliable nondestructive evaluation technique. The CT results are 3-D gray-scale images that provide data about the internal morphology. Figure 7 presents a 3-D tomographic image corresponding to a studied part. In this image, the glass fibers appear as bright voxels, whereas the organic matrix appears in gray. It is thus possible to observe the fiber organization within the part without destroying it.

Figure 7 A 3-D tomographic image sample.

The characterization of such an image consists in detecting three typical regions of interest. The first sought-after region is the oriented region (noted $R_1$, Figure 8a), which has a regular and organized texture with a single preferential orientation of the glass fibers; it is made up of long white fibers giving the impression of a flow. The disordered regions (noted $R_2$, Figure 8b) do not appear organized in the images; they are locally chaotic, i.e., without a clearly defined principal orientation. The regions called lack of reinforcement (noted $R_3$, Figure 8c) contain only resin (or paste) and no glass fibers; they appear as light, homogeneous gray levels in the images.

Figure 8 Samples of the three sought-after regions. (a) Oriented region noted $R_1$, (b) disordered region noted $R_2$, and (c) lack of reinforcement noted $R_3$.

In this real case, in which the regions are more complex (presence of noise, shape, organization, resolution, etc.), the choice of pertinent attributes is also difficult. In this situation, the expert computes several attributes with different parameters offline and needs to keep the relevant ones to fuse, in order to obtain a better cartography of the 3-D tomographic image. Several attribute selection strategies based on the separability criterion are possible. Eleven attributes were computed; their parameters are shown in Table 6, and the separability of each attribute for the three sought-after regions is shown in Table 7. Two approaches to attribute selection using the separability criterion are discussed hereafter.

Table 6. Parameters of attributes

Table 7. Attribute set: Separability between sought-after regions

The first one consists in choosing attributes that individually separate the maximum number of regions. This criterion (labeled $C_1$) leads to choosing three attributes, including $A_E$ and $A_{D3}$, which have an interesting separability for all the sought-after regions. The results of the fusion according to this criterion are shown in Table 8.

Table 8. Selection according to criterion $C_1$

The global detection rate obtained after fusion is $T_{Global} = 73.55\%$, which is relatively weak. This is due to the strong redundancy between attributes $A_E$ and $A_{D3}$ and to the lack of complementarity between them. This criterion favors attributes that provide information about every sought-after region, even when that information is redundant or insufficient, which explains the large quantity of non-classified voxels.

The second approach is based on a collaborative view of the attributes. This criterion (labeled $C_2$) consists in choosing attributes that have a maximum separability for at least one region. The three selected attributes include $A_{D2}$ and $A_H$. The results of the fusion according to this criterion are shown in Table 9.

Table 9. Selection according to criterion $C_2$

The global detection rate obtained is $T_{Global} = 81.3\%$, an interesting rate for this kind of application. This approach performs better than the previous one: each attribute characterizes one sought-after region (the attributes provide complementary information) and, thanks to the fusion process, a better mapping of the image is obtained. The rate of non-classified voxels is consequently decreased.
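Both criteria can be expressed compactly from a separability table. The sketch below (our code, with an assumed threshold standing in for the "interesting separability" judgment made from Table 7) contrasts them:

```python
import numpy as np

def select_c1(S, threshold=0.5):
    """C1: keep attributes whose separability exceeds `threshold` for every region.
    S: array of shape (n_attributes, n_regions) of separability indices."""
    return [a for a in range(S.shape[0]) if np.all(S[a] > threshold)]

def select_c2(S):
    """C2: keep, for each region, the attribute with maximum separability for it."""
    return sorted({int(np.argmax(S[:, r])) for r in range(S.shape[1])})
```

C1 rewards all-round attributes even when they are redundant, whereas C2 builds a complementary set, one specialist attribute per region, which matches the behavior observed above.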

The reference regions defined by the expert were drawn on different slices of the studied 3-D tomographic image. Figure 9 shows the cartographies obtained for these sections. Figures 9b and 9c show the cartographies obtained in region $R_1$, and Figures 9e and 9f those obtained in region $R_2$. For both regions ($R_1$ and $R_2$), the cartographies obtained with criterion $C_2$ are more interesting because few voxels are non-classified; the regions are less fragmented and the detection rates are better. In the case of Figures 9h and 9i, which show the cartographies in region $R_3$, the detection rate obtained under criterion $C_1$ is higher than that obtained under criterion $C_2$, because the three attributes selected under $C_1$ present an interesting separability for region $R_3$. Globally, however, the second approach remains more interesting than the first one.

Figure 9 Obtained cartographies (white voxels represent the lack-of-reinforcement region, light gray voxels the disordered region, dark gray voxels the oriented regions, and black voxels the non-classified voxels). (a) Reference of $R_1$ given by the expert; (b) obtained cartography (criterion $C_1$); (c) obtained cartography (criterion $C_2$); (d) reference of $R_2$ given by the expert; (e) obtained cartography (criterion $C_1$); (f) obtained cartography (criterion $C_2$, $T_{R_2} = 97.2\%$); (g) reference of $R_3$ given by the expert; (h) obtained cartography (criterion $C_1$); and (i) obtained cartography (criterion $C_2$).

5. CONCLUSION

Information fusion is increasingly used in many domains, and the need to evaluate the performance of fusion systems has become evident. Fusion systems are complex systems composed of different subparts with nonlinear and non-continuous behaviors. A performance evaluation is needed both to help design such systems, by validating the different methods used, and to assist in the numerous parameter adjustments. Generally, a global evaluation is not sufficient to interact locally with the system.

This article has addressed the evaluation of fusion systems through a local evaluation of each subpart of the system. As there is no reference in the output space of the subparts, a mission is expressed to describe the objective of each subpart; a mission achievement measure is then proposed to quantify its local performance. The approach is illustrated on the extraction subpart, which contains numerous parameters. In the context of 3-D image interpretation, the defined mission concerns the separability between sought-after regions provided by an attribute, and a separability measure based on histogram comparison has been proposed.

In the studied application, two different uses of the separability measure have been shown. The first one points out how to progressively add input information to the fusion system, or how to adjust the parameters of a given attribute that are not optimized for the sought-after regions. The second use concerns feature selection: the local evaluation can help select the most pertinent information.

Illustrated on synthetic images (full-reference case) and tomographic images (reduced-reference case), the benefits of local evaluation have been shown. Future work consists in propagating this approach to the following subparts of the fusion system. Even if the local evaluation gives a good indication of where to intervene in the system, it does not indicate how to do so; it remains difficult to suggest image processing parameter adjustment guidelines to end-users who are not specialists in computer science. Automatic optimization tools using the proposed separability measure as a cost function could be an interesting perspective for this work.

ACKNOWLEDGMENT

The authors would like to thank Schneider Electric. The fusion system studied here was fully developed in collaboration with the materials research laboratory of Grenoble.

Notes

Values in bold correspond to the least-separated regions.

Modified parameters are in bold.

Values in bold correspond to the well-separated regions.

Values in bold correspond to the well-separated regions.

Values in bold correspond to the well-separated regions.

REFERENCES

  • Appriou, A., A. Ayoun, S. Benferhat, P. Besnard, L. Cholvy, R. Cooke, F. Cuppens, D. Dubois, H. Fargier, M. Grabisch, R. Kruse, J. Lang, S. Moral, H. Prade, A. Saffiotti, P. Smets, and C. Sossai. 2001. Fusion: General concepts and characteristics. International Journal of Intelligent Systems 16(10):1107–1134.
  • Cha, S.-H., and S. N. Srihari. 2002. On measuring the distance between histograms. Pattern Recognition 35(6):1355–1370.
  • Cvejic, N., A. Loza, and D. Bull. 2005. A similarity metric for assessment of image fusion algorithms. Journal of Signal Processing 2(3):178–182.
  • Dasarathy, B. 2000. Elucidative fusion systems—An exposition. International Journal on Information Fusion 1(1):5–15.
  • Grabisch, M. 1996. The application of fuzzy integrals in multicriteria decision making. European Journal of Operational Research 89(3):445–456.
  • Gunes, V., M. Ménard, P. Loonis, and S. Petit-Renaud. 2003. Combination, cooperation and selection of classifiers: A state of the art. International Journal of Pattern Recognition and Artificial Intelligence 17(8):1303–1324.
  • Hall, D., and J. Llinas. 2001. Handbook of Multisensor Data Fusion. Boca Raton, FL: CRC Press.
  • Hancock, P. J. B., R. J. Baddeley, and L. S. Smith. 1992. The principal components of natural images. Network: Computation in Neural Systems 3(1):61–70.
  • Haralick, R. M., and L. G. Shapiro. 1992. Computer and Robot Vision. Boston: Addison-Wesley Longman Publishing Co., Inc.
  • Huang, L.-L., A. Shimizu, Y. Hagihara, and H. Kobatake. 2003. Gradient feature extraction for classification-based face detection. Pattern Recognition 36(11):2501–2511.
  • Jullien, S., L. Valet, G. Mauris, P. Bolon, and S. Teyssier. 2008. An attribute fusion system based on the Choquet integral to evaluate the quality of composite parts. IEEE Transactions on Instrumentation and Measurement 57(4):755–762.
  • Kokar, M. M., J. A. Tomasik, and J. Weyman. 2004. Formalizing classes of information fusion systems. Information Fusion 5(3):189–202.
  • Levin, M. S. 2001. System synthesis with morphological clique problem: Fusion of subsystem evaluation decisions. Information Fusion 2(3):225–237.
  • Li, X. 2002. Blind image quality assessment. In Proc. IEEE International Conference on Image Processing, 449–452.
  • Piella, G., and H. Heijmans. 2003. A new quality metric for image fusion. In Proc. International Conference on Image Processing (ICIP 2003), Sept. 14–17, vol. 3, III-173–176.
  • Rui, Y., T. Huang, M. Ortega, and S. Mehrotra. 1998. Relevance feedback: A power tool for interactive content-based image retrieval. IEEE Transactions on Circuits and Systems for Video Technology 8(5):644–655.
  • Toet, A., and E. M. Franken. 2003. Perceptual evaluation of different image fusion schemes. Displays 24(1):25–37.
  • Wang, Z., A. Bovik, and H. Sheikh. 2004. Image quality assessment: From error measurement to structural similarity. IEEE Transactions on Image Processing 13(4):600–612.
  • Wang, Z., and A. Bovik. 2002. A universal image quality index. IEEE Signal Processing Letters 9(3):81–84.
  • Wang, Z., and E. P. Simoncelli. 2005. Reduced-reference image quality assessment using a wavelet-domain natural image statistic model. In Proc. SPIE Human Vision and Electronic Imaging, vol. 5666, 149–159.
  • Zhang, Y. 1996. A survey on evaluation methods for image segmentation. Pattern Recognition 29(8):1335–1346.
