Classification of California agriculture using quad polarization radar data and Landsat Thematic Mapper data

Pages 50-63 | Published online: 14 Apr 2013

Abstract

This study evaluated the accuracy of classifying California agriculture using spaceborne quad polarization radar from the Japanese ALOS PALSAR system and optical Landsat Thematic Mapper (TM) data. In addition, the study analyzed the utility of radar texture and sensor fusion techniques. The original radar had an overall accuracy of 74% but with individual crop producer's accuracies ranging from 100% for almonds to 49% for alfalfa. Landsat provided a much higher overall classification accuracy of 91%. The merger of Landsat with radar texture increased overall accuracy to 97%, indicating the advantages of sensor integration.

Introduction

With the launch of recent spaceborne radar systems, including RADARSAT-2, ALOS PALSAR, and TerraSAR-X, the availability of quad polarization radar data may prove beneficial for multiple environmental applications. However, the potential of quad polarization radar data for land cover/use classification and other applications remains in its early stages as a result of previously limited data availability and analysis. Only with more examination of the relatively new spaceborne quad polarization data can the scientific and application community maximize the potential benefits of these technological innovations.

Reliable land cover/use information has traditionally been provided primarily by multispectral systems such as Landsat Thematic Mapper (TM). These systems, in part because of their limited functionality in cloud-covered areas, cannot fully meet the demand for land cover/use information in some regions. With the success of RADARSAT-1 and other spaceborne radar systems, however, this limitation has been addressed to a certain extent.

Radar systems differ in distinct ways from traditional multispectral (visible and infrared) sensor systems, which has made the application of radar data of increasing interest to the scientific community. The longer wavelengths of radar are capable of penetrating atmospheric conditions that limit traditional spaceborne multispectral systems (Henderson et al. 2002). These microwave wavelengths are potentially useful for geographic areas that are often obscured by cloud cover (Almeida-Filho et al. 2007). Because radar is an active sensor that illuminates the surface with its own energy, rather than depending on daylight as optical sensors do, nighttime acquisition is also possible. These capabilities make it possible to obtain useful spatial information from radar alone that optical sensors cannot provide.

The surface interaction of radar is very different from that of optical sensors, thus providing different information about landscapes. The response of radar is a function of surface roughness, geometry, and internal structure, as opposed to surface reflection at optical wavelengths. The variation in radar backscatter from a feature may be a result of incident angle, acquisition date, look direction, moisture on the surface, or the physical composition (dielectric constant) of the feature itself. Backscatter is also strongly influenced by the orientation of the feature relative to the incoming radar signal (Haack 2007).

One of the difficulties with the analysis of radar data is that, until relatively recently, most spaceborne radar systems collected data using only a single wavelength with a fixed polarization. Hence, only one component of the total surface scattering is measured, while any additional information contained within the returned radar signal is lost (Dell'Acqua, Gamba, and Lisini 2003; Toyra, Pietroniro, and Martz 2001). More recent systems, such as the Japanese PALSAR and the Canadian RADARSAT-2, include an increased number of polarizations. Imagery acquired under different polarizations obtains different backscatter responses, providing more information for analysis (Banner and Ahern 1995; Gauthier, Bernier, and Fortin 1998; Hegarat-Mascle et al. 1997).

Generally, visible and infrared wavelength systems are recognized as superior to radar data because of their multispectral information content (Brisco and Brown 1999). That same reasoning, however, is a strong argument for the utility of quad polarization radar and sensor fusion, as more bands provide more information. Moreover, radar responds to terrain and dielectric factors, such as plant canopy roughness and structure, plant moisture content, and subcanopy conditions, differently than optical sensors do. As such, a combined analysis of optical and radar data could contribute to improved surface information (Amarsaikhan et al. 2012; Santos and Messina 2008).

Research objectives

The primary objective of this study was to examine the potential of independently using quad polarization radar and optical data for land cover/use classification. The study also evaluated improvements that can be made to the original data sets (radar and optical) to increase the overall classification accuracy by fusing these data sets together.

The effectiveness of applying different texture measures to the radar imagery was also addressed. The intent was to determine whether radar texture yields better classification for the various land covers/uses as compared to using original radar. Textural information may be as important as spectral information in radar, as the information content of an image resides in both the intensity (spectral) of individual pixels and the spatial arrangement of the pixels (Anys and He 1995; Champion et al. 2008; Kurosu et al. 1999). Standard image classification procedures used to extract information from remotely sensed images usually ignore this spatial information and are based purely on spectral characteristics. Such classifiers may be ineffective when applied to land classes such as residential and urban areas that are largely distinguished by their spatial rather than their spectral characteristics (Solberg and Anil 1997; Townsend 2002).

The advantages of using derived radar measures, such as texture measures at different window sizes, in comparison to original radar data have been demonstrated by Haack et al. (2002) and Herold, Haack, and Solomon (2005). Textural information may also be used in combination with the backscatter measurements of radar for analysis (Huang, Legarsky, and Othman 2007; Nyoungui, Tonye, and Alono 2002; Sawaya et al. 2010; Zhang et al. 2008).

Over the years, the fusing of data from multiple sensors has been a common technique, and this trend is bound to continue as geospatial technologies improve (Roberts, Van Aardt, and Ahmed 2011). According to Chavez, Sides, and Anderson (1991), one of the reasons for this increase in fusing multiple data sets is the complementary information of the different data sets. It is important for the scientific community to harness this potentially useful technique, as it may improve the geographic knowledge of surface features. Fusing data from multiple sensors, e.g., Landsat and radar, has proved to be an effective technique for improving the overall accuracy of classification (Leckie 1990; Pal, Majumdar, and Bhattachrya 2007).

Sensor fusion may reduce the uncertainty associated with data from a single source (Saraf 1999; Schistad, Jain, and Taxt 1994; Simone et al. 2002). Image fusion has become a useful technique in the remote sensing field, not only making the interpretation process faster and more reliable but also providing unique and accurate information for the extracted features (Wen and Chen 2004). There are distinct advantages to fusing radar with optical data, as the end product combines spatial information (radar image texture) with spectral information from the visible and infrared bands. Therefore, in addition to examining the PALSAR and TM data independently, this study examined their integration.

Data sets and study area

The radar data used in this study were acquired from the Japanese ALOS PALSAR L-band Synthetic Aperture Radar (SAR) sensor at 12.5 m spatial resolution in four polarizations (HH, HV, VH, and VV). The radar image covered an area of approximately 35 km × 65 km. To process the radar imagery, the PALSAR image files were converted from 32-bit floating point to unsigned 8 bit in part because of software limitations and also to be consistent with the Landsat imagery. All PALSAR images were accurately georeferenced to the Universal Transverse Mercator (UTM) coordinate system.
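The radiometric conversion described above can be sketched as a simple linear rescale. This is a minimal illustration only; the percentile stretch below is an assumption, as the paper does not state which stretch was used:

```python
import numpy as np

def to_uint8(band, p_low=1.0, p_high=99.0):
    """Rescale a 32-bit float backscatter band to unsigned 8-bit.

    Percentile clipping (an assumption here, not stated in the paper)
    keeps a few extreme returns from compressing the dynamic range.
    """
    lo, hi = np.percentile(band, [p_low, p_high])
    scaled = (band - lo) / (hi - lo)               # roughly 0..1 after clipping
    return (np.clip(scaled, 0.0, 1.0) * 255.0).astype(np.uint8)

# Synthetic HH band standing in for a PALSAR polarization
hh = np.random.gamma(shape=1.0, scale=0.05, size=(512, 512)).astype(np.float32)
hh8 = to_uint8(hh)
```

The same rescale would be applied to each of the four polarizations so that all bands share the 8-bit range of the Landsat data.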

Landsat imagery for the study area was acquired from the United States Geological Survey (USGS). The multispectral Landsat TM images had a spatial resolution of 28.5 m for the three visible (blue, green, and red) and three infrared (near infrared, mid-infrared, and mid-infrared) bands. The Landsat images have a footprint of approximately 183 km by 170 km.

The Landsat images were resampled to 12.5 m using a nearest-neighbor procedure to maintain consistency across all images in the analysis. To preserve the texture values of the data, resampling should be to the smallest pixel size among the original data sets. All six reflective bands, three visible and three infrared, were included in the resampling; the TM panchromatic and thermal bands were not.
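Nearest-neighbor resampling can be expressed as a pure index lookup, which is why it preserves the original DN values (and hence texture) instead of interpolating new ones. A minimal sketch, ignoring georeferencing details:

```python
import numpy as np

def resample_nearest(band, src_res, dst_res):
    """Nearest-neighbor resampling by index lookup: every output cell
    copies the value of the nearest input cell, so no new DN values
    are created."""
    rows, cols = band.shape
    scale = src_res / dst_res                      # e.g. 28.5 / 12.5 = 2.28
    out_rows, out_cols = int(rows * scale), int(cols * scale)
    r_idx = np.minimum((np.arange(out_rows) / scale).astype(int), rows - 1)
    c_idx = np.minimum((np.arange(out_cols) / scale).astype(int), cols - 1)
    return band[np.ix_(r_idx, c_idx)]

tm_band = np.arange(100, dtype=np.uint8).reshape(10, 10)
resampled = resample_nearest(tm_band, 28.5, 12.5)  # 10 x 10 -> 22 x 22
```

A bilinear or cubic resampling would instead average neighboring pixels, altering the DN statistics on which the texture measures depend.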

The study site was located in the Central Valley of California, approximately 180 km southeast of San Francisco, between the Coast Range and the Sierra Nevada (Figure 1). This region of California is known for its high agricultural productivity, particularly for fruit, and is commonly referred to as the "Fruit Basket of the World." The PALSAR imagery (Figure 2) for this study site was acquired on 1 May 2007, and the Landsat data (Figure 3) were acquired on 11 May 2007.

Figure 1. California study site location.

Figure 2. PALSAR image (35 km by 65 km) for California. Acquired 1 May 2007 (polarizations VV, VH, and HV; RGB).

Figure 3. Subset mosaic of TM images (50 km by 75 km) for California (Bands 234-BGR).

It was critical that the California images share the same seasonality because various crops were examined and those crops vary over the growing season. Rather than generic land covers, this study focused on distinguishing individual agricultural crops. The four crops identified for California were almonds, cotton, fallow/idle cropland, and alfalfa. Because these classes are quite specific, small changes in seasonality (temperature and moisture) could cause discrepancies during classification. Hence, it was essential to use imagery acquired in the same year and at the same time in the growing season.

On the radar image, a variety of different tones reflect the diversity of crop types grown in this part of California. Each crop has a distinct backscatter response, resulting in the variation of tones. According to the United States Department of Agriculture (USDA), there are well over 100 different types of land covers/uses, mostly crops, in this area (USDA 2007). There is very little residential and urban land cover in this region, as the majority of the flat valley is used for agricultural purposes.

The imagery for this study site was acquired during late spring, when there is very little fallow and bare ground surface because of the intensive agricultural practices in the area. On the radar image there are a few dark areas that, without ancillary information, could be confused with fallow or bare ground. However, these dark areas are early-stage cotton fields, which have low radar backscatter because the crop had not fully matured on this date. A fully matured cotton crop would have higher radar returns: had the same imagery been acquired in September, the cotton fields would have had a stronger radar signal and thus would have appeared brighter.

Almonds and alfalfa can be clearly distinguished on the Landsat image (Figure 3). Both classes have a vibrant red tone, which separates them from the surrounding land covers/uses. Cotton and fallow look very similar to each other, with light gray tones in the center of the image. Only after examining DN values and ancillary information (field patterns in Google Earth) for these two classes could they be accurately identified.

California was a useful study site because of the ancillary information available for it. Detailed 2007 crop maps based on the classification of Landsat and AWiFS imagery were obtained from the USDA for calibration and validation purposes.

Methodology and results

The basic approach for this study was to obtain spectral signatures for the crops of interest using supervised signature extraction from calibration areas of interest (AOIs). The AOIs were based upon the available USDA crop information.

To obtain appropriate radar-derived measures, four different types of texture were examined at four different window sizes. Classification of the various data sets was accomplished using a maximum likelihood decision rule, and an error matrix of individual crop user's, producer's, and overall accuracies was then obtained using multiple validation sites, different from the calibration sites. Error matrices were obtained for the original radar, radar texture, combined original radar and texture, Landsat TM, and sensor fusions of the TM and original radar and the TM and radar texture. These results are presented in the following sections.
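The maximum likelihood decision rule fits one Gaussian (mean vector and covariance matrix) per class from the calibration AOIs and assigns each pixel to the class with the highest likelihood. The sketch below uses synthetic four-band signatures with hypothetical class means; it illustrates the decision rule and the accuracy bookkeeping, not the study's actual data:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
classes = ["almonds", "cotton", "fallow", "alfalfa"]
means = [np.full(4, m) for m in (180.0, 60.0, 65.0, 140.0)]  # hypothetical DN means
train = {c: rng.normal(mu, 8.0, size=(200, 4)) for c, mu in zip(classes, means)}

# One Gaussian model per class, estimated from the calibration pixels
models = {c: multivariate_normal(X.mean(axis=0), np.cov(X.T)) for c, X in train.items()}

def classify(pixels):
    """Maximum likelihood: pick the class with the highest log-density."""
    ll = np.column_stack([models[c].logpdf(pixels) for c in classes])
    return np.array(classes)[ll.argmax(axis=1)]

# Error matrix from independent validation pixels (50 per class)
val_true = np.repeat(classes, 50)
val_pix = np.vstack([rng.normal(mu, 8.0, size=(50, 4)) for mu in means])
pred = classify(val_pix)
matrix = {(t, p): int(((val_true == t) & (pred == p)).sum())
          for t in classes for p in classes}
producers = {c: matrix[(c, c)] / 50 for c in classes}   # correct / reference total
overall = sum(matrix[(c, c)] for c in classes) / len(val_true)
```

Producer's accuracy divides the correctly classified pixels by the reference (column) total, so it measures omission error; user's accuracy divides by the classified (row) total instead.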

Original radar

PALSAR statistical values for representative crop AOIs were extracted and are presented in Table 1. Almond groves have higher DN values than the other crops for all four polarizations. The DN values for cross-polarization are higher than for like polarization. The high DN values for almonds are caused by the strong return signal from the almond trees, which are larger than the other crops. Almond trees are also in close proximity to each other, which strengthens the radar backscatter, resulting in the higher DN values.

Table 1. AOI class statistics (DN values) from the PALSAR scene. Mean and standard deviation

Cotton and fallow/idle cropland both have very similar DN values. Cotton planting in California starts in March or April, and the crop is harvested in September. Considering the time of year when the radar imagery was acquired (May), the crop had not fully matured and hence the radar returns are low. Similarly, for fallow/idle cropland the radar backscatter is weak because there is little on the ground to return the signal; the surface acts almost as a specular reflector. This low backscatter from the ground surface gives cotton and fallow/idle cropland similar DN values, which may result in misclassification.

Alfalfa has high DN values across all four polarizations. Given the time of the year, the alfalfa crop had matured, providing the high backscatter DN values.

Classification accuracies for the original four radar polarizations are summarized in Table 2. Interestingly, the combined four-polarization results are virtually identical to the single HH polarization. Both like polarizations were 10–15% better than the cross-polarization bands.

Table 2. Classification accuracies for stacked radar bands (HH, HV, VH, and VV)

While the overall classification accuracies appear satisfactory, the producer's accuracies for fallow/idle cropland and alfalfa are relatively poor. Almonds had excellent classification accuracies (above 90%) across all individual bands and the four stacked bands. These high classification accuracies for almonds are consistent with the high DN values in Table 1, which make almonds highly separable from the other classes.

Cotton was only accurately classified in the HH and VV bands, where the producer's accuracy was above 78%. For all other individual and the four stacked bands, the producer's accuracy for cotton was low. This poor classification of cotton relates back to the similarity in the DN values for cotton and fallow/idle. The majority of the misclassified pixels for cotton were fallow/idle cropland.

Fallow/idle cropland was also confused with other classes – primarily cotton and alfalfa. The producer's accuracy for fallow/idle cropland is extremely low across all bands, particularly HH (20%) and VH (21%). The producer's accuracy for all four bands stacked was comparatively better than for the individual bands, but it was still below the level of acceptability (70%).

Alfalfa was also very poorly classified across all band combinations. The producer's accuracy ranged from 43% to 54% for the individual bands, and only 49% when all four bands are stacked together. The majority of the misclassification for alfalfa was with fallow/idle cropland pixels. This was quite surprising because the statistical values for the two classes are quite different.

Radar texture

Using Transformed Divergence (TD) separability analysis, the 13 × 13 window size yielded the best variance texture results compared to the three other window sizes examined (5 × 5, 7 × 7, and 11 × 11). The use of texture did not help in improving the accuracies (Table 3). Rather, the overall classification accuracy dropped to 72% from the original 73%, and accuracies for certain classes such as alfalfa dropped drastically when texture was applied.
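Variance texture is a moving-window statistic: for each pixel, the variance of the DNs in a surrounding window (here 13 × 13) replaces the pixel value. A sketch using a box filter, with a synthetic field boundary to show why edge pixels are the problem:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def variance_texture(band, window=13):
    """Local variance via Var(x) = E[x^2] - E[x]^2 over a moving window."""
    band = band.astype(np.float64)
    mean = uniform_filter(band, size=window)
    mean_sq = uniform_filter(band * band, size=window)
    return mean_sq - mean * mean

# Two homogeneous "fields" with a sharp boundary at column 32
img = np.zeros((64, 64))
img[:, 32:] = 200.0
tex = variance_texture(img, window=13)
# Variance is zero inside each field and high only where the window
# straddles the boundary -- those edge pixels take on texture values
# unlike either field, one way misclassification can arise.
```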

Table 3. Classification accuracies for stacked variance texture bands (13 × 13 window)

Cotton and almonds were the two crops for which there was an increase in producer's accuracy when texture was applied to the original radar data set. Both had producer's accuracies close to 100% when all four texture bands were stacked together.

The producer's accuracy for fallow/idle cropland was exceptionally low. This is possibly due to edge pixels being misclassified after texture was applied to the radar image. The majority of the misclassification for fallow/idle cropland was with cotton: nearly all pixels that should have been classified as fallow/idle cropland were identified as cotton, resulting in the very low producer's accuracy for this land cover/use.

For classification purposes, radar texture alone did not yield positive results; rather surprisingly, it reduced the overall accuracy and the producer's accuracies for several classes. As stated previously, one reason for this decrease may be the misclassification of edge pixels.

Merged radar with radar texture

Table 4 summarizes the classification accuracies for original radar combined with radar variance texture. Unfortunately, there is only a marginal improvement in overall classification accuracy, which increases to 76% from 73% (original radar) and 72% (variance texture).

Table 4. Classification accuracies for stacked original radar with radar texture (variance 13 × 13)

Almonds and cotton both have good producer's accuracies in the original and radar texture bands. Hence, it is no surprise that when both data sets are fused, the producer's accuracies are quite good: 100% for almonds and 76% for cotton. A pattern emerges when looking at the producer's accuracies for fallow/idle cropland and alfalfa: the fused accuracy for these two land covers/uses falls between those of the two input data sets. The original radar yielded higher producer's accuracies for fallow/idle (64%) and alfalfa (49%); when fused with radar texture, the producer's accuracies drop to 59% for fallow/idle and 44% for alfalfa. One reason for this is the low producer's accuracy within the individual radar texture bands for fallow/idle and alfalfa.

The fusing of original radar data sets with radar variance texture did show minor improvements. However, certain classes, such as fallow/idle cropland and alfalfa, have low classification accuracies (below 70%) regardless of the number and combination of original and radar texture bands. As stated earlier, a liability in this study is that the imagery was acquired too early in the growing season, when some crops were in their early stages of growth and similar in response to radar sensors.

Landsat Thematic Mapper

The average DN values and standard deviations for the Landsat data are given in Table 5, extracted from the same representative AOIs as the radar. Similar to the original radar data sets, these DN values provide validating information that is useful when examining the classification results.

Table 5. AOI class statistics (DN values), TM image

Almonds have high DN values for the NIR and MIR bands. Both the NIR and MIR bands are very responsive to the chlorophyll level in the tree leaves. Considering the time of the year, almond trees were in full leaf and therefore have high reflectance in the NIR bands. The visible bands – blue (B), green (G), and red (R) – have moderate DN values.

Cotton has a relatively high DN value for the first MIR band, but the DN values for the remaining bands are low. One reason for the low DN values in the other bands, particularly the NIR, may be that the cotton crop was not fully matured, resulting in low reflectance.

Fallow/idle cropland has DN values ranging from 58 to 107 for the visible bands and 71 to 129 for the infrared bands. Based on USDA metadata, this land cover class comprises bare ground with leftover material after crops have been harvested. Given that there is no healthy vegetation with high chlorophyll present on these fields, the DN values for the infrared bands should have been lower. However, this is not the case: the infrared DN values are the highest of the land covers/uses examined, suggesting that some type of grass/vegetation is probably present on the ground.

Alfalfa, as expected, has high DN values for the first MIR band. The first MIR band is most sensitive to green vegetation, and considering the time of the year, alfalfa was in full growth, hence the high DN values for this band. Similarly, the NIR band also has high DN values for alfalfa.

Table 6 summarizes the Landsat classification accuracies for all four crops. The overall accuracy for the combined Landsat bands is 91%, a large improvement over the original radar (73%) and radar texture bands (72%).

Table 6. Classification accuracies for Landsat TM

Alfalfa and fallow/idle cropland both have producer's accuracies well over 90%. This is a large improvement, considering that alfalfa had an accuracy of only 49% in the original radar data set and 27% when texture was applied. Similarly, fallow/idle cropland had an accuracy of 67% in the original radar data set and 4% when texture was applied. The producer's accuracy for almonds, however, dropped to 78%, compared to 100% in the original radar and radar texture bands.

Sensor fusion

Based on the previous accuracies, it is evident that the Landsat sensor is better suited for this land cover/use classification, at least in the time frame of these data. In past research (Huang, Legarsky, and Othman 2007), fusing radar with optical data has yielded higher land cover/use classification accuracies than evaluating the sensors individually. Even though the classification accuracies attained using the Landsat images are quite good, the next study component evaluated whether any improvements could be made by fusing the radar and optical data.

The first data set evaluated was the original radar layer stacked with Landsat: all four original radar bands (HH, HV, VH, and VV) were layer stacked with the six Landsat bands. The classification accuracies for the combined data sets are summarized in Table 7. There is an increase in the overall classification accuracy (94%) when the two data sets are stacked, as compared to analyzing them independently.
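Once the radar and Landsat images share the same 12.5 m UTM grid, layer stacking is simply band concatenation: the four polarizations and six TM bands become one ten-band image fed to the same maximum likelihood classifier. A minimal sketch with synthetic bands (shapes and band order are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
# (bands, rows, cols): four radar polarizations and six TM reflective bands
radar = rng.integers(0, 256, size=(4, 400, 400), dtype=np.uint8)   # HH, HV, VH, VV
tm = rng.integers(0, 256, size=(6, 400, 400), dtype=np.uint8)      # B, G, R, NIR, MIR1, MIR2
stacked = np.concatenate([radar, tm], axis=0)                      # ten-band fused image
```

This is why the earlier 8-bit conversion and nearest-neighbor resampling matter: concatenation requires every layer to have the same grid, and a common radiometric range keeps any one sensor from dominating the classifier.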

Table 7. Classification accuracies for original radar and Landsat

The producer's accuracies for all four classes are excellent, all being over 90%, with only a few misclassifications for fallow/idle cropland and alfalfa. This is a vast improvement over the original radar data sets, where these two crops were regularly misclassified. The producer's accuracy for almonds increased to 100% from 78% (Landsat alone). Similarly, the user's accuracy also increased to 92% from the original 68% (Landsat alone) when the original radar and optical data were fused.

The second data set to be fused with Landsat was the radar variance texture (13 × 13): all four radar variance texture bands (HH, HV, VH, and VV) were layer stacked with the six Landsat bands. Table 8 summarizes the classification accuracies for the two data sets. The overall classification accuracy is 97%, an increase of almost 3% over the previous combination of Landsat and original radar.

Table 8. Classification accuracies for radar texture (variance, 13 × 13) and Landsat

Similar to previous combinations of Landsat and original radar, all classes that had low producer's accuracies are now above 90%. The combination of the Landsat data with radar and radar texture yields exceptionally good classification accuracies. These high accuracies can be directly attributed to the Landsat imagery. None of the radar data sets were able to successfully classify all land covers/uses independently; however, when fused with the Landsat data, the results were significantly improved.

Conclusions

The overall pattern of classification accuracies for California showed that radar alone yielded reasonable accuracies, with considerable variation by crop. The use of texture for classification purposes did not prove beneficial: the classification accuracies for the crops were lower after texture was applied than with the original radar image. Landsat TM proved very useful, providing good classification for all four crops. Fusing the data sets from the two different sensors resulted in the highest classification accuracies for all classes. The value of sensor fusion with these data sets suggests that more research, and perhaps applied science, should use this method.

An issue for this study was the early date of the imagery relative to the crop calendar for the region. Imagery at a later date might have been much more useful for both sensor types, at least for some crops, and multidate analysis as often used for crop classification might be excellent for sensor fusion classification. Clearly the radar provides useful additional information. It is also possible that a different decision rule, such as Classification And Regression Tree (CART), might be more useful than maximum likelihood for this type of sensor merger. Other extensions of this study would be primarily to other locations and other radar and optical sensors.

Acknowledgment

The authors thank the Alaska Satellite Facility for providing the PALSAR imagery used in this study under sponsorship from NASA.

References

  • Almeida-Filho, R., Rosenqvist, A., Shimabukuro, Y., and Silva-Gomez, R. 2007. Detecting Deforestation with Multitemporal L-Band SAR Imagery: A Case Study in Western Brazilian Amazônia. International Journal of Remote Sensing 28 (6): 1383–1390.
  • Amarsaikhan, A., Ganzorig, S. M., Blotevogel, H. H., Egshiglen, E., Gantuyal, R., Nergui, B., and Enkhjargal, D. 2012. Comparison of Multisource Image Fusion Methods and Land Cover Classification. International Journal of Remote Sensing 33 (8): 2532–2550.
  • Anys, H., and He, D. 1995. Evaluation of Textural and Multipolarization Radar Features for Crop Classification. IEEE Transactions on Geoscience and Remote Sensing 33: 1170–1181.
  • Banner, A. V., and Ahern, F. J. 1995. Incident Angle Effects on the Interpretability of Forest Clearcuts Using Airborne C-HH SAR Imagery. Canadian Journal of Remote Sensing 2: 64–66.
  • Brisco, B., and Brown, R. J. 1999. Multidate SAR/TM Synergism for Crop Classification in Western Canada. Photogrammetric Engineering and Remote Sensing 61: 1009–1014.
  • Champion, I., Dubois-Fernandez, P., Guyon, D., and Cottrel, M. 2008. Radar Image Texture as a Function of Forest Stand Age. International Journal of Remote Sensing 29 (6): 1795–1800.
  • Chavez, P. S., Sides, S. C., and Anderson, J. A. 1991. Comparison of Three Different Methods to Merge Multiresolution and Multispectral Data: Landsat TM and SPOT Panchromatic. Photogrammetric Engineering and Remote Sensing 57: 295–303.
  • Dell'Acqua, F., Gamba, P., and Lisini, G. 2003. Improvements to Urban Area Characterization Using Multitemporal and Multiangle SAR Images. IEEE Transactions on Geoscience and Remote Sensing 41: 1996–2004.
  • Gauthier, Y., Bernier, M., and Fortin, J. P. 1998. Aspect and Incident Angle Sensitivity in ERS-1 SAR Data. International Journal of Remote Sensing 19: 2001–2006.
  • Haack, B. N. 2007. A Comparison of Land Use/Cover Mapping with Varied Radar Incident Angles and Seasons. GIScience & Remote Sensing 44: 1–15.
  • Haack, B. N., Solomon, E., Bechdol, M., and Herold, N. 2002. Radar and Optical Data Comparison/Integration for Urban Delineation: A Case Study. Photogrammetric Engineering and Remote Sensing 68: 1289–1296.
  • Hegarat-Mascle, S., Vidal-Madjar, D., Taconet, O., and Zribi, M. 1997. Application of Shannon Information Theory to a Comparison between L- and C-Band SIR Polarimetric Data versus Incident Angle. Remote Sensing of Environment 60: 121–130.
  • Henderson, F., Chasan, R., Portolese, J., and Hart, J. 2002. Evaluation of SAR-Optical Imagery Synthesis Techniques in a Complex Coastal Ecosystem. Photogrammetric Engineering and Remote Sensing 68: 839–846.
  • Herold, N., Haack, B., and Solomon, E. 2005. Radar Spatial Considerations for Land Cover Extraction. International Journal of Remote Sensing 26: 1383–1401.
  • Huang, H., Legarsky, J., and Othman, J. 2007. Land-Cover Classification Using Radarsat and Landsat Imagery for St. Louis, Missouri. Photogrammetric Engineering and Remote Sensing 73: 37–43.
  • Kurosu, T., Uratsuka, S., Maeno, H., and Kozo, T. 1999. Texture Statistics for Classification of Land Use with Multitemporal JERS-1 SAR Single-Look Imagery. IEEE Transactions on Geoscience and Remote Sensing 37: 227–235.
  • Leckie, D. G. 1990. Synergism of Synthetic Aperture Radar and Visible/Infrared Data for Forest Type Discrimination. Photogrammetric Engineering and Remote Sensing 56: 1237–1246.
  • Nyoungui, A., Tonye, E., and Alono, A. 2002. Evaluation of Speckle Filtering and Texture Analysis Methods for Land Cover Classification from SAR Images. International Journal of Remote Sensing 23: 1895–1925.
  • Pal, S. K., Majumdar, T. J., and Bhattachrya, A. K. 2007. ERS-2 SAR and IRS-1C LISS III Data Fusion: A PCA Approach to Improve Remote Sensing Based Geological Interpretation. Journal of Photogrammetry and Remote Sensing 60: 281–297.
  • Roberts, R., Van Aardt, J., and Ahmed, F. 2011. Image Fusion for Enhanced Forest Structural Assessment. International Journal of Remote Sensing 32 (1): 243–266.
  • Santos, C., and Messina, J. 2008. Multi-Sensor Data Fusion for Modeling African Palm in the Ecuadorian Amazon. Photogrammetric Engineering and Remote Sensing 74 (6): 711–724.
  • Saraf, A. K. 1999. IRS-1C-LISS-III and PAN Data Fusion: An Approach to Improve Remote Sensing Based Mapping Techniques. International Journal of Remote Sensing 20: 1929–1934.
  • Sawaya, S., Haack, B., Idol, T., and Sheoran, A. 2010. Land Use/Cover Mapping with Quadpolarization Radar and Derived Texture Measures Near Wad Madani, Sudan. GIScience & Remote Sensing 47 (3): 398–411.
  • Schistad, A., Jain, A., and Taxt, T. 1994. Multisource Classification of Remotely Sensed Data: Fusion of Landsat TM and SAR Images. IEEE Transactions on Geoscience and Remote Sensing 32: 768–778.
  • Simone, G., Farina, A., Morabito, F. C., Serpico, S. B., and Bruzzone, L. 2002. Image Fusion Techniques for Remote Sensing Application. Information Fusion 3: 3–15.
  • Solberg, A. H. S., and Anil, K. J. 1997. Texture Fusion and Feature Selection Applied to SAR Imagery. IEEE Transactions on Geoscience and Remote Sensing 10: 989–1003.
  • Townsend, P. A. 2002. Estimating Forest Structure in Wetlands Using Multitemporal SAR. Remote Sensing of Environment 79: 288–304.
  • Toyra, J., Pietroniro, A., and Martz, J. 2001. Multisensor Hydrologic Assessment of a Freshwater Wetland. Remote Sensing of Environment 75: 162–173.
  • USDA (United States Department of Agriculture). 2007. National Agricultural Statistics Service, California Cropland Data Layer (CDL). New York: USDA.
  • Wen, C. Y., and Chen, J. K. 2004. Multi-Resolution Image Fusion Technique and Its Application to Forensic Science. Forensic Science International 140: 217–232.
  • Zhang, L., Zhao, Y., Huang, B., and Li, P. 2008. Texture Feature Fusion with Neighborhood Oscillating Tabular Search for High Resolution Image Classification. Photogrammetric Engineering and Remote Sensing 74: 323–331.
