
Fusing lidar and digital aerial photography for object-based forest mapping in the Florida Everglades

Pages 562-573 | Received 04 Feb 2013, Accepted 15 Aug 2013, Published online: 25 Sep 2013

Abstract

The Florida Everglades contains a diverse forest community that needs to be accurately mapped to support the ongoing Comprehensive Everglades Restoration Plan (CERP). In this study, we examined whether a combination of light detection and ranging (lidar) data and digital aerial photography can improve the accuracy of forest mapping in the Everglades, compared with using fine spatial resolution digital aerial photographs alone. We extracted lidar elevation and intensity features from the original point cloud data at the object level to avoid the errors and uncertainties of raster-based lidar methods. These features were combined with lidar-derived topographic information and aerial photograph-derived texture measures to map seven forest communities in a portion of the Everglades, yielding an overall accuracy of 71% and a Kappa value of 0.64. We found that low-posting-density lidar data (i.e., <4 pts/m2) can significantly increase forest classification accuracy by providing important elevation, intensity, and topography information. It is anticipated that modern lidar remote-sensing techniques can benefit Everglades mapping and reduce costs in CERP.

1. Introduction

The Florida Everglades is the largest subtropical wetland in the United States. It has been designated a World Heritage Site, International Biosphere Reserve, and Wetland of International Importance as a result of its unique combination of hydrology and water-based ecology, which supports many threatened and endangered species (Davis et al. 1994). In the past century, human activities have severely modified the Everglades ecosystem, resulting in a variety of environmental issues in South Florida. To protect this valuable resource, the US Congress authorized the Comprehensive Everglades Restoration Plan (CERP) in 2000 to restore the Everglades ecosystem (CERP 2012). CERP contains a variety of pilot environmental engineering projects, many of which require accurate and informative vegetation maps because the restoration will cause dramatic modification of plant communities (Doren, Rutchey, and Welch 1999). As one of the key plant communities in the Everglades, forests play a critical role in this system. Monitoring changes in upland and inland forests in the Everglades can provide a measure of the progress and effects of restoration on environmental health.

Current forest information in the Everglades comes mainly from field studies and manual interpretation of large-scale aerial photographs (Rutchey, Schall, and Sklar 2008). Both procedures are time-consuming, labor-intensive, and costly. Several efforts have been made to automate forest mapping through digital analysis of remotely sensed multispectral or hyperspectral imagery, with varying degrees of success (Rutchey and Vilcheck 1994, 1999; Jensen et al. 1995; Hirano, Madden, and Welch 2003; Zhang and Xie 2012, 2013a). Multispectral or hyperspectral sensors with a relatively coarse spatial resolution (i.e., 20–30 m or larger) cannot characterize forests that occur as small patches or linear/narrow shapes in the Everglades (Zhang and Xie 2012). Fine spatial resolution hyperspectral imagery has proved useful for vegetation mapping over this region (Zhang and Xie 2012, 2013a), but collection of this type of data is costly. In CERP, fine spatial resolution digital aerial photographs have been collected frequently, but automated classification of these data alone has produced poor accuracy because of their coarse spectral resolution and the diversity of vegetation types in the Everglades (Zhang and Xie 2013b).

Research has demonstrated that combining fine spatial resolution imagery with airborne light detection and ranging (lidar) shows promise for detailed forest mapping. Studies in this context fall into two categories. The first is the integration of fine spatial resolution hyperspectral imagery with lidar data (e.g., Hill and Thomson 2005; Voss and Sugumaran 2008; Dalponte, Bruzzone, and Gianelle 2008; Asner et al. 2008; Jones, Coops, and Sharma 2010; Zhang and Qiu 2012; Cho et al. 2012). This is likely the best data combination for forest mapping, but again, it is impractical to use fine spatial resolution hyperspectral data for broad-area mapping in the Everglades because of the higher cost of data collection. The second is the synergy of fine spatial resolution multispectral imagery and lidar data, a combination recognized as effective and efficient for forest mapping (e.g., Leckie et al. 2003; Holmgren, Persson, and Soderman 2008; Ke, Quackenbush, and Im 2010). Recent research on forest characterization using lidar is dominated by high-posting-density (i.e., >4 pts/m2) lidar data because such data sets can characterize individual tree-based vertical structure to complement the spectral information of optical imagery (Ke, Quackenbush, and Im 2010). Application of low-posting-density lidar has focused on terrestrial topographic mapping (Hodgson and Bresnahan 2004), and its use for forest mapping has been limited. Ke, Quackenbush, and Im (2010) found that synergistic use of low-posting-density lidar data and fine spatial resolution multispectral imagery is useful for classifying canopy-level forest types in central New York State, but the application of these two data sources in the Everglades has not been explored.

Previous studies have shown that object-based image analysis (OBIA) techniques are desirable for Everglades mapping because pixel-based methods may lead to the "salt-and-pepper" effect in heterogeneous landscapes (Zhang and Xie 2012, 2013a). In addition, several studies have found that OBIA methods can generate higher accuracy than pixel-based methods in mapping wetlands (e.g., Harken and Sugumaran 2005; Kamal and Phinn 2011). In this study, we explored the integration of low-posting-density lidar data and fine spatial resolution digital aerial photographs for forest classification in the Everglades using OBIA techniques, and examined the potential benefits of modern lidar systems to the CERP.

2. Methods

2.1. Study areas

Our study area, shown as a color infrared aerial photograph in Figure 1, is a portion of the Lake Okeechobee watershed in the central Everglades. Lake Okeechobee is the largest freshwater lake in Florida and the heart of the Everglades ecosystem, providing water to the surrounding communities and serving as a source of water for navigation, recreation, and estuaries. The lake's health has been threatened in recent decades by excessive nutrients from agricultural and urban activities, harmful high and low water levels, and the spread of exotic vegetation. Restoration of the Lake Okeechobee watershed is one of the key components of CERP. The study site covers about 5176 acres, with seven common Everglades forest communities present: mixed wetland hardwood, upland mixed coniferous/hardwood, cabbage palm, oak-cabbage palm, live oak, upland hardwood, and palmetto prairies.

2.2. Data acquisition

Data sources used in this study include digital aerial photographs, airborne lidar data, and reference data. Fine spatial resolution aerial photographs were collected on 18 January 2005 by the National Aerial Photography Program (NAPP). The US Geological Survey (USGS) ortho-rectified these aerial photos into data products known as digital ortho-photo quarter quads (DOQQs). The accuracy and quality of the DOQQs meet the National Map Accuracy Standards (NMAS). DOQQs with four spectral channels (red, green, blue, and near-infrared (NIR)) at a spatial resolution of 1 m were downloaded from the Land Boundary Information System (LABINS, http://data.labins.org/2003/) for use in this study.

Lidar data were collected by Merrick & Company (Greenwood Village, CO, USA) using a Leica ALS-50 system from June to December 2007 to support the Florida Division of Emergency Management. The Leica ALS-50 lidar system collects small-footprint multiple returns and intensity at a wavelength of 1060 nm. The vendor reported a positional accuracy of 0.05 feet horizontally and 0.2 feet vertically at the 95% confidence level. The average point density for our study area is 1.1 pts/m2. The original lidar point cloud data were processed by the vendor to generate a digital terrain model (DTM) using the Merrick Advanced Remote Sensing (MARS) processing software. The vertical accuracy of the DTM is 0.6 feet at the 95% confidence level, which meets the National Standard for Spatial Data Accuracy (NSSDA). All the lidar point cloud data and the DTM are publicly available at the International Hurricane Research Center (http://mapping.ihrc.fiu.edu/).

The South Florida Water Management District (SFWMD) provided the reference data for this study. The reference data were photo-interpreted from the 2004 to 2005 NAPP aerial photographs and classified using the SFWMD-modified Florida land use and land cover classification system. Features were stereoscopically interpreted using a stereo plotter and calibrated from field surveys through a project known as the "Land Cover/Land Use Mapping Project" conducted at the SFWMD. The reference data were compiled on screen over the DOQQs, the same digital aerial photographs as those used in this study. The positional accuracy of the data meets the NMAS. The SFWMD reports that the reference data set has a minimum classification accuracy of 90%.

We randomly selected 496 image objects as the reference data for our study area. We followed a spatially stratified sampling strategy in which a fixed percentage of samples was selected for each class. The number of samples for each community was estimated based on the results of image segmentation and the reference data. The segmentation process used to generate image objects is detailed in the next subsection. The collected reference data were split into two halves, one for calibration and the other for validation. Non-forest objects were masked out using the reference data since the main concern of this study was forests.
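The sampling scheme above can be sketched as follows; a minimal illustration assuming the reference objects are already grouped by class (the function and container names are hypothetical, not part of the study's actual workflow):

```python
import random

def stratified_split(objects_by_class, fraction=0.5, seed=42):
    """Stratified sampling sketch: draw a fixed fraction of the reference
    objects from each class for calibration, and keep the rest for
    validation. `objects_by_class` maps a class name to a list of
    image-object IDs (a hypothetical container, for illustration only)."""
    rng = random.Random(seed)
    calibration, validation = [], []
    for cls, objs in objects_by_class.items():
        objs = list(objs)
        rng.shuffle(objs)
        n_cal = round(len(objs) * fraction)  # fixed percentage per class
        calibration += [(cls, o) for o in objs[:n_cal]]
        validation += [(cls, o) for o in objs[n_cal:]]
    return calibration, validation
```

Because the fraction is applied per class, each community keeps its share in both halves regardless of how unbalanced the class sizes are.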

2.3. Image segmentation

Multiple steps are required to fuse the two data sources, as summarized in Figure 2. Among these steps, image segmentation is a major procedure for an effective object-based classification. We used the multi-resolution segmentation algorithm in eCognition Developer 8.64.1 (Trimble 2011) to generate image objects from the DOQQs. The segmentation algorithm starts with one-pixel image segments and merges neighboring segments together until a heterogeneity threshold is reached (Benz et al. 2004). The heterogeneity threshold is determined by a user-defined scale parameter, as well as color/shape and smoothness/compactness weights. Image segmentation is scale-dependent, and the quality of the segmentation and of the overall classification depends on the segmentation scale (Liu and Xia 2010). To find an optimal scale for image segmentation, an unsupervised segmentation evaluation approach (Johnson and Xie 2011) was used. This approach begins with a series of segmentations using different scale parameters and then identifies the optimal segmentation using an unsupervised evaluation method that accounts for global intrasegment and intersegment heterogeneity. A global score (GS) is calculated as GS = Vnorm + MInorm, where Vnorm (normalized weighted variance) measures global intrasegment goodness and MInorm (normalized Moran's I) measures global intersegment goodness. More details on computing Vnorm and MInorm can be found in Johnson and Xie (2011). For our study area, a series of segmentations was carried out, and the best segmentation scale is the one with the lowest GS. A scale of 60 was found to be optimal for our study site and was used to segment the DOQQ data. All four bands of the DOQQs were given equal weights. Color/shape weights were set to 0.9/0.1 so that spectral information would be weighted most heavily in segmentation, and smoothness/compactness weights were set to 0.5/0.5 so as to favor neither compact nor non-compact segments.
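The scale-selection step can be sketched as below, assuming the weighted variance and Moran's I have already been computed for each candidate segmentation scale (the function names are illustrative):

```python
def global_scores(weighted_vars, morans_is):
    """Johnson and Xie (2011)-style global score GS = Vnorm + MInorm:
    each per-scale measure is min-max normalized across the candidate
    scales, and the two normalized values are summed. Inputs are the
    precomputed weighted variance and Moran's I, one value per scale."""
    def norm(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]
    return [v + m for v, m in zip(norm(weighted_vars), norm(morans_is))]

def optimal_scale(scales, weighted_vars, morans_is):
    """Pick the scale whose segmentation has the lowest global score."""
    gs = global_scores(weighted_vars, morans_is)
    return scales[gs.index(min(gs))]
```

A low GS means segments are internally homogeneous (low weighted variance) and spectrally distinct from their neighbors (low Moran's I) at the same time.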

Figure 1. Map of the Everglades and study area shown as a color infrared (CIR) aerial photograph.


Figure 2. Flowchart for forest mapping using lidar and digital aerial photograph.


Following segmentation, object-based features were extracted. Object-based texture measures from fine spatial resolution imagery have proved valuable for vegetation classification in the Everglades (Zhang and Xie 2012, 2013a). We extracted first- and second-order metrics for each band of the DOQQ data in eCognition, including the mean, standard deviation, contrast, dissimilarity, homogeneity, entropy, and angular second moment. The gray-level co-occurrence matrix (GLCM) algorithm was used to extract the second-order texture measures. Details on the calculation of these metrics at the object level can be found in Trimble (2011).
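The listed metrics can be reproduced with a small amount of NumPy; the sketch below (an illustrative simplification using a single horizontal offset, not eCognition's exact implementation) computes the first-order statistics and GLCM measures for one quantized band of an image object:

```python
import numpy as np

def glcm_features(obj, levels=8):
    """Object-level texture sketch for one band: first-order mean and
    standard deviation, plus second-order GLCM metrics computed from
    horizontally adjacent pixel pairs. `obj` is a 2-D integer array of
    gray levels in [0, levels)."""
    left, right = obj[:, :-1].ravel(), obj[:, 1:].ravel()
    P = np.zeros((levels, levels))
    np.add.at(P, (left, right), 1.0)   # co-occurrence counts
    P = P + P.T                        # make the matrix symmetric
    P /= P.sum()                       # normalize to joint probabilities
    i, j = np.indices(P.shape)
    nz = P > 0
    return {
        "mean": float(obj.mean()),
        "std": float(obj.std()),
        "contrast": float((P * (i - j) ** 2).sum()),
        "dissimilarity": float((P * np.abs(i - j)).sum()),
        "homogeneity": float((P / (1.0 + (i - j) ** 2)).sum()),
        "entropy": float(-(P[nz] * np.log2(P[nz])).sum()),
        "asm": float((P ** 2).sum()),  # angular second moment
    }
```

A perfectly uniform object yields zero contrast, dissimilarity, and entropy, with homogeneity and angular second moment equal to one; heterogeneous canopy texture moves each metric away from those extremes.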

2.4. Lidar feature extraction

Three types of features can be extracted from lidar data: elevation, intensity, and topography. Most lidar studies have examined only the contribution of elevation information to forest classification (e.g., Jones, Coops, and Sharma 2010; Cho et al. 2012); evaluation of intensity and topography has been limited. Fusion of lidar data and optical imagery can occur at two levels: pixel and feature level (J. Zhang 2010). Pixel-level fusion combines raw data from multiple sources into single-resolution data to improve the performance of image-processing tasks. Feature-level fusion extracts features (e.g., edges, corners, lines, and textures) from each individual data source and merges these features into one or more feature maps for further processing.

Previous studies have primarily adopted the pixel-level fusion strategy to combine lidar data and optical imagery for forest classification (e.g., Voss and Sugumaran 2008; Dalponte, Bruzzone, and Gianelle 2008; Jones, Coops, and Sharma 2010; Ke, Quackenbush, and Im 2010; Cho et al. 2012). Pixel-level fusion methods commonly begin by generating raster layers (e.g., a digital canopy model) from the lidar point cloud data using interpolation techniques, and then combine these raster layers with the optical imagery pixel by pixel. This is referred to as the raster-based lidar approach. A major problem with using lidar in this way is the introduction of errors and uncertainties in the raster-layer generation step (Smith, Holland, and Longley 2004), which ultimately affects the subsequent classification (Zhang and Qiu 2012). To overcome this problem, we extracted lidar elevation and intensity information from the original point cloud data rather than from lidar-derived raster layers. This is referred to as the vector-based lidar approach. Previous studies have shown that working directly on lidar point cloud data can produce higher accuracy by preserving the original lidar values (C. Zhang 2010).

To use the elevation information effectively, the topographic effect was first eliminated by subtracting the DTM value beneath each point from its elevation. This is known as data normalization in lidar remote sensing. Points with a normalized elevation of less than 1 foot were considered ground points and dropped from further analysis. The non-ground lidar points within an image object were used to derive the descriptive statistics (maximum, mean, and standard deviation) of elevation and intensity for that object. Similarly, descriptive statistics of terrain elevation and slope for each image object were derived from the DTM using the pixels within the object. A feature-level fusion strategy was then employed to merge the lidar-derived features with the DOQQ-derived texture measures for classification.
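The normalization and per-object statistics described above can be sketched as follows; `points` and `dtm_at` are assumed interfaces for illustration, not an actual lidar library API:

```python
import numpy as np

def object_lidar_features(points, dtm_at, min_height_ft=1.0):
    """Vector-based lidar features for one image object. `points` is an
    (N, 4) array of [x, y, elevation, intensity]; `dtm_at(x, y)` returns
    the terrain elevation beneath a point. Elevations are normalized by
    subtracting the DTM, near-ground returns (< 1 ft) are discarded, and
    descriptive statistics of height and intensity are computed from the
    remaining non-ground points."""
    ground = np.array([dtm_at(x, y) for x, y in points[:, :2]])
    height = points[:, 2] - ground            # data normalization
    keep = height >= min_height_ft            # drop near-ground returns
    h, intensity = height[keep], points[keep, 3]
    stats = lambda v: (float(v.max()), float(v.mean()), float(v.std()))
    return {"height_max_mean_std": stats(h),
            "intensity_max_mean_std": stats(intensity)}
```

Because the statistics are taken over the raw points inside each object, no interpolated raster layer is ever created, which is the point of the vector-based approach.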

2.5. Classification

Previous studies have illustrated that a machine learning algorithm, Random Forest (RF), is effective and efficient for vegetation classification in the Everglades (Zhang and Xie 2013a, 2013b), while traditional classifiers, such as maximum likelihood and minimum distance, could not generate high accuracies (Zhang and Xie 2012). We thus selected RF for this study. RF is a decision tree based ensemble classifier. To understand this algorithm, it helps to first know the decision tree approach. A decision tree splits the training samples into smaller subdivisions at "nodes" using decision rules. At each node, tests are performed on the training data to find the most useful variables and variable values for the split. An RF consists of a combination of decision trees, where each decision tree contributes a single vote toward assigning the most frequent class to an input vector. RF increases the diversity of the decision trees by growing each tree on a changed training set generated through bootstrap aggregating (bagging) (Breiman 2001). Different algorithms can be used to generate the decision trees; RF often adopts the Gini index (Breiman 2001) to measure the best split. More descriptions of RF can be found in Breiman (2001) and, in a remote-sensing context, in Chan and Paelinckx (2008) and Rodriguez-Galiano et al. (2012). The RF classification was implemented using Weka 3.7, an open-source data mining program (Hall et al. 2009). Two parameters need to be defined: the number of decision trees to create (k) and the number of randomly selected variables (m) considered for splitting each node in a tree. RF is not sensitive to m, and it is often simply set to the square root of M, where M is the total number of variables (Gislason, Benediktsson, and Sveinsson 2006).
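A small stand-alone sketch of the two tunable quantities and the split criterion mentioned above (the helper names are illustrative; the study itself used Weka's RF implementation):

```python
import math

def rf_parameters(n_variables, n_trees=150):
    """The two RF parameters to define: k, the number of trees (150 gave
    the highest accuracy in this study), and m, the number of variables
    tried at each node split, conventionally the square root of the total
    variable count M."""
    return {"k": n_trees, "m": max(1, round(math.sqrt(n_variables)))}

def gini(labels):
    """Gini index used to score a candidate split: 1 minus the sum of
    squared class proportions; 0 means the node is pure."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))
```

Each split is chosen to minimize the Gini impurity of the child nodes, so a split that cleanly separates two forest classes scores near zero.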

2.6. Accuracy assessment

Considerable research has been conducted on accuracy assessment in remote sensing (Foody 2002). Among the various methods, the error matrix and Kappa statistic (Congalton and Mead 1983) are frequently adopted and serve as standard approaches. For this study, we constructed the error matrix and calculated the Kappa statistic for accuracy assessment. The error matrix can be summarized by an overall accuracy and a Kappa value. The overall accuracy is defined as the ratio of the number of validation samples that are classified correctly to the total number of validation samples, irrespective of class. The Kappa value describes the proportion of correctly classified validation samples after random agreement is removed. To evaluate the statistical significance of differences in accuracy between classifications, the nonparametric McNemar test (Foody 2004) was adopted. The difference in accuracy of a pair of classifications is considered statistically significant at the 95% confidence level if the z-score of the McNemar test is larger than 1.96.
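All three measures can be computed directly from the error matrix; a minimal sketch (function names are illustrative):

```python
import math

def overall_and_kappa(matrix):
    """Overall accuracy and Kappa from a square error matrix (rows =
    reference classes, columns = classified classes). Overall accuracy is
    the diagonal total over all samples; Kappa removes the agreement
    expected by chance from the row and column marginals."""
    n = sum(sum(row) for row in matrix)
    observed = sum(matrix[i][i] for i in range(len(matrix))) / n
    chance = sum(sum(row) * sum(col)
                 for row, col in zip(matrix, zip(*matrix))) / n ** 2
    return observed, (observed - chance) / (1.0 - chance)

def mcnemar_z(f12, f21):
    """McNemar z-score from the discordant counts: samples correct under
    classifier 1 only (f12) and under classifier 2 only (f21). |z| > 1.96
    marks a significant difference at the 95% confidence level."""
    return (f12 - f21) / math.sqrt(f12 + f21)
```

Note that only the discordant samples enter the McNemar test; samples both classifiers get right (or both get wrong) carry no information about which classifier is better.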

3. Results and discussion

3.1. Experimental analysis

To examine the contribution of the different lidar features to the classification, we designed five experiments. Experiment 1 used the DOQQ-derived texture measures alone; experiments 2–4 combined the DOQQ-derived texture measures with the lidar-derived elevation, intensity, and topography information, respectively; and experiment 5 integrated all the DOQQ- and lidar-derived features. For the RF classifier, the number of randomly selected variables for splitting a node (i.e., m) was set to 4 after several trials. Tests using different numbers of trees (50–300 at intervals of 50) revealed that k = 150 resulted in the highest accuracy. The overall accuracies and Kappa values produced by these experiments are shown in Table 1.

Table 1. Classification accuracies using different data sets

The DOQQ data alone (experiment 1) produced the lowest accuracy among the experiments. Combining the DOQQ data with the lidar-derived elevation (experiment 2), intensity (experiment 3), and topography information (experiment 4) increased the overall accuracy from 49% to 60%, 61%, and 61%, respectively. The Kappa values show corresponding improvements. McNemar tests showed that these improvements are statistically significant. Integration of the DOQQ-derived texture measures and all the lidar-derived features (experiment 5) generated the best result, with an overall accuracy of 71% and a Kappa value of 0.64. McNemar tests revealed that experiment 5 generated a significantly better outcome than the other experiments, whereas experiments 2–4 showed no significant differences from one another.

Lidar elevation has been considered the most useful lidar information for vegetation classification and thus has commonly been combined with optical imagery to improve classification accuracy (e.g., Jones, Coops, and Sharma 2010; Cho et al. 2012; Hantson, Kooistra, and Slim 2012). Little work has been published on the information content of lidar intensity returns for vegetation/forest analysis (Lim et al. 2003). Three major factors affect lidar intensity: the illuminated area, the bidirectional reflectance distribution function of the illuminated targets, and the incidence angle (Korpela et al. 2010). This means that radiometric lidar features exhibit substantial variation due to differences in the illuminated area (foliage density), the reflectance of the illuminated scatterers, and the geometry of leaf scatterers (leaf orientation) (Korpela et al. 2010). Therefore, forest classification using lidar intensity needs to be based on the analysis of distribution characteristics rather than on single pulses. Moffiet et al. (2005) found from exploratory analysis of their lidar data that intensity return statistics for the forest canopy, such as the average and standard deviation, may be useful variables for forest discrimination, but this had not been proved in practice. Our study has revealed the benefit of lidar intensity in forest classification through such statistical variables. Topographic information is also important in forest classification, which is consistent with the results reported by Ke, Quackenbush, and Im (2010). Topographic features are usually homogeneous within a canopy, which can help reduce the within-class variability among neighboring objects caused by shadows or gaps, thus increasing classification accuracy. In our case, the intensity and topographic features contributed as much as the elevation information.

3.2. Object-based forest mapping

Since experiment 5 produced the highest classification accuracy, we conducted the object-based classification using the fused data set that combined the DOQQ-derived texture measures and all the lidar-derived features. The resulting classification map is shown in Figure 3, and the error matrix with producer's and user's accuracies based on the validation data is displayed in Table 2. The object-based classification map is more informative and useful than a traditional pixel-based one, which may be noisy because of the high spatial and spectral heterogeneity of the Everglades. The producer's accuracies varied from 6.3% (oak-cabbage palm) to 96% (live oak), and the user's accuracies varied from 59% (live oak) to 100% (palmetto prairies) (Table 2). It is difficult to discriminate upland hardwood (class 6) from the other forest communities. Upland hardwood is a natural community known for its rich species diversity; it is a mixture of overstory trees with an understory of woody shrubs and herbaceous groundcover plants. This class was mainly confused with oak-cabbage palm (class 4) because upland hardwood forests are dominated by oak. Cabbage palm (class 3) was also incorrectly classified as oak-cabbage palm, which is not surprising because oak-cabbage palm is a mixture of oak and cabbage palm. The inclusion of upland hardwood and cabbage palm in oak-cabbage palm resulted in a larger commission error for the oak-cabbage palm class.

Figure 3. Classification map for the study area (color version available online).


Table 2. Error matrix for the classified map shown in Figure 3

Accurate and automated identification of forest communities in the Everglades is a difficult task because most communities are mixtures of trees, shrub/scrub, herbaceous ground plants, and water. The mixture of upland and inland forests makes this task even more challenging. Using fine spatial resolution digital aerial photographs alone could not produce adequate accuracy, as confirmed in this study. The inclusion of lidar-derived features significantly increased the classification accuracy, showing the potential of modern lidar systems for Everglades mapping in support of the current CERP. However, only a moderate accuracy (71%) was achieved even though the two data sources were effectively combined. Note that there is an approximately three-year gap between the acquisition of the aerial photographs (January 2005) and the lidar data (2007). The topographic features are unlikely to have changed much in three years, but the vegetation structure characterized by lidar elevation and intensity may have changed substantially. Simultaneous collection of the two data sources might produce higher accuracy. In addition, increasing the lidar point density may help improve the classification accuracy by better characterizing the forest structure.

4. Conclusions

In this paper, we examined whether low-posting-density lidar can contribute to forest mapping in the Florida Everglades. OBIA, data fusion, and machine learning classification techniques were integrated to produce an accurate and informative forest map. To avoid the errors and uncertainties of the raster-based lidar method, we extracted statistics of lidar elevation and intensity from the original point cloud data. We found that low-posting-density lidar data are useful and can significantly increase forest classification accuracy by providing important elevation, intensity, and topography information. Synergistic use of the two data sources produced an overall accuracy of 71% in classifying seven communities, showing promise for automated forest mapping in the Everglades. Overall, it is anticipated that emerging lidar systems can contribute to forest/vegetation mapping in the Everglades, especially as a supplement to the current manual interpretation procedure for collecting land cover information in the CERP. With the increasing availability of lidar data, we expect this study to benefit wetland mapping globally in general, and in the Everglades in particular.

References

  • Asner, G. P., D. E. Knapp, T. Kennedy-Bowdoin, M. O. Jones, R. E. Martin, J. Boardman, and R. F. Hughes. 2008. "Invasive Species Detection in Hawaiian Rainforests Using Airborne Imaging Spectroscopy and LiDAR." Remote Sensing of Environment 112: 1942–1955.
  • Benz, U., P. Hofmann, G. Willhauck, I. Lingenfelder, and M. Heynen. 2004. "Multiresolution, Object-Oriented Fuzzy Analysis of Remote Sensing Data for GIS-Ready Information." ISPRS Journal of Photogrammetry and Remote Sensing 58: 239–258.
  • Breiman, L. 2001. "Random Forests." Machine Learning 45: 5–32.
  • CERP (Comprehensive Everglades Restoration Plan). 2012. "About CERP: Brief Overview." http://www.evergladesplan.org/ (accessed 27 November).
  • Chan, J. C.-W., and D. Paelinckx. 2008. "Evaluation of Random Forest and Adaboost Tree Based Ensemble Classification and Spectral Band Selection for Ecotope Mapping Using Airborne Hyperspectral Imagery." Remote Sensing of Environment 112: 2999–3011.
  • Cho, M. A., R. Mathieu, G. P. Asner, L. Naidoo, J. van Aardt, A. Ramoelo, P. Debba, K. Wessels, R. Main, I. P. J. Smit, and B. Erasmus. 2012. "Mapping Tree Species Composition in South African Savannas Using an Integrated Airborne Spectral and LiDAR System." Remote Sensing of Environment 125: 214–226.
  • Congalton, R., and R. A. Mead. 1983. "A Quantitative Method to Test for Consistency and Correctness in Photointerpretation." Photogrammetric Engineering and Remote Sensing 49: 69–74.
  • Dalponte, M., L. Bruzzone, and D. Gianelle. 2008. "Fusion of Hyperspectral and LiDAR Remote Sensing Data for Classification of Complex Forest Areas." IEEE Transactions on Geoscience and Remote Sensing 46: 1416–1427.
  • Davis, S. M., L. H. Gunderson, W. A. Park, J. R. Richardson, and J. E. Mattson. 1994. "Landscape Dimension, Composition, and Function in a Changing Everglades Ecosystem." In Everglades: The Ecosystem and Its Restoration, edited by S. M. Davis and J. C. Ogden, 419–444. Delray Beach, FL: St Lucie Press.
  • Doren, R. F., K. Rutchey, and R. Welch. 1999. "The Everglades: A Perspective on the Requirements and Applications for Vegetation Map and Database Products." Photogrammetric Engineering and Remote Sensing 65: 155–161.
  • Foody, G. M. 2002. "Status of Land Cover Classification Accuracy Assessment." Remote Sensing of Environment 80: 185–201.
  • Foody, G. M. 2004. "Thematic Map Comparison: Evaluating the Statistical Significance of Differences in Classification Accuracy." Photogrammetric Engineering and Remote Sensing 70: 627–633.
  • Gislason, P. O., J. A. Benediktsson, and J. R. Sveinsson. 2006. "Random Forests for Land Cover Classification." Pattern Recognition Letters 27: 294–300.
  • Hall, M., E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. Witten. 2009. "The WEKA Data Mining Software: An Update." SIGKDD Explorations 11: 1–18.
  • Hantson, W., L. Kooistra, and P. A. Slim. 2012. "Mapping Invasive Woody Species in Coastal Dunes in the Netherlands: A Remote Sensing Approach Using LiDAR and High-Resolution Aerial Photographs." Applied Vegetation Science 15: 536–547.
  • Harken, J., and R. Sugumaran. 2005. "Classification of Iowa Wetlands Using an Airborne Hyperspectral Image: A Comparison of the Spectral Angle Mapper Classifier and an Object-Oriented Approach." Canadian Journal of Remote Sensing 31: 167–174.
  • Hill, R. A., and A. G. Thomson. 2005. "Mapping Woodland Species Composition and Structure Using Airborne Spectral and LIDAR Data." International Journal of Remote Sensing 26: 3763–3779.
  • Hirano, A., M. Madden, and R. Welch. 2003. "Hyperspectral Image Data for Mapping Wetland Vegetation." Wetlands 23: 436–448.
  • Hodgson, M. E., and P. Bresnahan. 2004. "Accuracy of Airborne LiDAR-Derived Elevation: Empirical Assessment and Error Budget." Photogrammetric Engineering and Remote Sensing 70: 331–339.
  • Holmgren, J., A. Persson, and U. Soderman. 2008. "Species Identification of Individual Trees by Combining High Resolution LiDAR Data with Multi-Spectral Images." International Journal of Remote Sensing 29: 1537–1552.
  • Jensen, J., K. Rutchey, M. Koch, and S. Narumalani. 1995. "Inland Wetland Change Detection in the Everglades Water Conservation Area 2A Using a Time Series of Normalized Remotely Sensed Data." Photogrammetric Engineering and Remote Sensing 61: 199–209.
  • Johnson, B., and Z. Xie. 2011. "Unsupervised Image Segmentation Evaluation and Refinement Using a Multi-Scale Approach." ISPRS Journal of Photogrammetry and Remote Sensing 66: 473–483.
  • Jones, T. G., N. C. Coops, and T. Sharma. 2010. "Assessing the Utility of Airborne Hyperspectral and LiDAR Data for Species Distribution Mapping in the Coastal Pacific Northwest, Canada." Remote Sensing of Environment 114: 2841–2852.
  • Kamal, M., and S. Phinn. 2011. "Hyperspectral Data for Mangrove Species Mapping: A Comparison of Pixel-Based and Object-Based Approach." Remote Sensing 3: 2222–2242.
  • Ke, Y., L. J. Quackenbush, and J. Im. 2010. "Synergistic Use of QuickBird Multispectral Imagery and LiDAR Data for Object-Based Forest Species Classification." Remote Sensing of Environment 114: 1141–1154.
  • Korpela, I., H. O. Ørka, M. Maltamo, T. Tokola, and J. Hyyppä. 2010. "Tree Species Classification Using Airborne LiDAR – Effects of Stand and Tree Parameters, Downsizing of Training Set, Intensity Normalization, and Sensor Type." Silva Fennica 44: 319–339.
  • Leckie, D., F. Gougeon, D. Hill, R. Quinn, L. Armstrong, and R. Shreenan. 2003. "Combined High Density LiDAR and Multispectral Imagery for Individual Tree Crown Analysis." Canadian Journal of Remote Sensing 29: 633–649.
  • Lim, K., P. Treitz, K. Baldwin, I. Morrison, and J. Green. 2003. "LiDAR Remote Sensing of Biophysical Properties of Tolerant Northern Hardwood Forests." Canadian Journal of Remote Sensing 29: 658–678.
  • Liu, D., and M. Xia. 2010. "Assessing Object-Based Classification: Advantages and Limitations." Remote Sensing Letters 1: 187–194.
  • Moffiet, T., K. Mengersen, C. Witte, R. King, and R. Denham. 2005. "Airborne Laser Scanning: Exploratory Data Analysis Indicates Potential Variables for Classification of Individual Trees or Forest Stands According to Species." ISPRS Journal of Photogrammetry and Remote Sensing 59: 289–309.
  • Rodriguez-Galiano, V. F., B. Ghimire, J. Rogan, M. Chica-Olmo, and J. P. Rigol-Sanchez. 2012. "An Assessment of the Effectiveness of a Random Forest Classifier for Land-Cover Classification." ISPRS Journal of Photogrammetry and Remote Sensing 67: 93–104.
  • Rutchey , K. , Schall , T. and Sklar , F. 2008 . Development of Vegetation Maps for Assessing Everglades Restoration Progress . Wetlands , 28 : 806 – 816 .
  • Rutchey , K. and Vilchek , L. 1994 . Development of an Everglades Vegetation Map Using a SPOT Image and the Global Positioning System . Photogrammetric Engineering and Remote Sensing , 60 : 767 – 775 .
  • Rutchey , K. and Vilchek , L. 1999 . Air Photointerpretation and Satellite Imagery Analysis Techniques for Mapping Cattail Coverage in a Northern Everglades Impoundment . Photogrammetric Engineering and Remote Sensing , 65 : 185 – 191 .
  • Smith, S. L., D. A. Holland, and P. A. Longley. 2004. “The Importance of Understanding Error in LiDAR Elevation Models.” In Proceedings of the ISPRS Congress, Istanbul, July 12–23.
  • Trimble. 2011. eCognition Developer 8.64.1 Reference Book. Westminster, CO: Trimble Geospatial Imaging.
  • Voss , M. and Sugumaran , R. 2008 . Seasonal Effect on Tree Species Classification in an Urban Environment Using Hyperspectral Data, LiDAR, and an Object-Oriented Approach . Sensors , 8 : 3020 – 3036 .
  • Zhang, C. 2010. “Urban Forest Inventory Using Airborne LiDAR Data and Hyperspectral Imagery.” PhD diss., University of Texas at Dallas, Dallas, TX.
  • Zhang , C. and Qiu , F. 2012 . Mapping Individual Tree Species in an Urban Forest Using Airborne LiDAR Data and Hyperspectral Imagery . Photogrammetric Engineering and Remote Sensing , 78 : 1079 – 1087 .
  • Zhang , C. and Xie , Z. 2012 . Combining Object-Based Texture Measures with a Neural Network for Vegetation Mapping in the Everglades From Hyperspectral Imagery . Remote Sensing of Environment , 124 : 310 – 320 .
  • Zhang , C. and Xie , Z. 2013a . Object-Based Vegetation Mapping in the Kissimmee River Watershed Using HyMAP Data and Machine Learning Techniques . Wetlands , 33 : 233 – 244 .
  • Zhang , C. and Xie , Z. 2013b . Data Fusion and Classifier Ensemble Techniques for Vegetation Mapping in the Everglades . Geocarto International , doi: 10.1080/10106049.2012.756940
  • Zhang , J. 2010 . Multi-Source Remote Sensing Data Fusion: Status and Trends . International Journal of Image and Data Fusion , 1 : 5 – 24 .
