
Building reconstruction from airborne laser scanning data

Pages 35-44 | Received 01 Jan 2013, Accepted 06 Feb 2013, Published online: 27 Mar 2013

Abstract

Building model reconstruction has long been a hot topic in both photogrammetry and computer vision, and airborne laser scanning provides new opportunities for it. Despite many investigations into building reconstruction from point clouds, many problems remain unresolved, especially fully automatic methods and intelligent, user-friendly operation. This article surveys the methods, tools and problems of building model reconstruction from point cloud data. Drawing on our previous experience, it also points out some important but often overlooked problems in building reconstruction. We hope this review helps researchers understand where their work stands and gives new researchers a general overview of the field.

1. Introduction

Building models are key elements of the digital city concept, which relates to a large number of applications such as urban planning, crime prevention, disaster mitigation, transportation optimization and sustainable development. Building model reconstruction from various types of data has been, and still is, an active research topic (1, 2). The most frequently used data sources are image based: stereo images, videos and even single images collected from diverse platforms. In recent years, however, the emergence of laser scanning technology, which measures dense point clouds with decimetre-level accuracy, has greatly stimulated research on 3D building model reconstruction (3, 4). Though point clouds can be acquired in several ways, laser scanning is the most direct, efficient and reliable way of collecting high-quality surface geometry, and it is often paired with simultaneously collected optical images (5).

The goal of building model reconstruction is to understand and extract the building components from raw data and to represent the building by a meaningful wireframe data structure, with or without texture. Generating wireframe building models with topology information requires many data processing steps: separating closely adjacent buildings, extracting roofs, identifying roof types (e.g. distinguishing flat from gable roofs), regularizing the shape of building boundaries, and removing unwanted objects such as skylights, chimneys, pipes and air-conditioning installations.

In general terms, the core problem of automatic building reconstruction is how to infer the correct structure of a complex building from extracted elements that may contain noise, missing features, non-existing artefacts and insufficient relations.

The main workflow of building model reconstruction can be separated into three stages: building region detection, feature extraction and model generation, as shown in Figure 1. As building region detection can be achieved by data classification methods in traditional tools, most research focuses on the latter two steps. To provide additional information for model reconstruction, the input data can be of diverse types, including RGB imagery (6), hyperspectral imagery (7, 8), and existing building boundaries and GIS data (9).

Figure 1 General workflow of building model reconstruction. It includes three main steps: building region detection, feature extraction and model generation.


In this article, we survey research activities on building model reconstruction, with a special focus on building detection, feature extraction and model generation. Section 2 describes the workflow of building region detection. Section 3 introduces roof plane extraction algorithms and analyses the technical issues of line extraction from point cloud data. In Section 4, we discuss the two approaches to model generation: data driven and model driven. Section 5 addresses the quality evaluation of reconstructed building models. Conclusions are given in Section 6.

2 Building region detection

Building region detection is the first step of building model reconstruction. Its goal is to find the regions of building blocks in the raw point cloud data, which are the input to model reconstruction. The common workflow includes terrain extraction, classification of non-terrain points, grouping of regions and clustering of building blocks.

In recent years, terrain extraction algorithms have become quite robust, especially the widely used triangular network interpolation methods implemented, for example, in the commercial software Terrasolid (10, 11); as a result, it is much easier to classify non-terrain points in raw point cloud data. The non-terrain points comprise objects higher than the ground by a certain distance, such as 0.3 m, which could be buildings, road lamps, power lines, power towers, trees, bushes, humans, cars, etc. They can be clustered into groups by post-processing. As buildings are larger than objects of any other class, building regions can be detected by removing small groups with a morphological algorithm (12, 13).
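As a concrete illustration, the removal of small groups can be sketched as a connected-component filter on a rasterized occupancy grid. This is a minimal sketch, not the exact algorithm of (12, 13); the function name, cell size and area threshold below are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def detect_building_regions(points, cell=1.0, min_area=50.0):
    """Cluster non-terrain points on a 2D grid and drop small groups.

    points  : (N, 3) array of non-terrain points (x, y, z)
    cell    : grid cell size in metres
    min_area: minimum footprint area (m^2) to keep a group as a building
    """
    xy = points[:, :2]
    origin = xy.min(axis=0)
    ij = np.floor((xy - origin) / cell).astype(int)
    grid = np.zeros(tuple(ij.max(axis=0) + 1), dtype=bool)
    grid[ij[:, 0], ij[:, 1]] = True

    labels, n = ndimage.label(grid)          # 4-connected components
    sizes = ndimage.sum(grid, labels, range(1, n + 1))
    keep = {k + 1 for k, s in enumerate(sizes) if s * cell**2 >= min_area}

    point_labels = labels[ij[:, 0], ij[:, 1]]
    mask = np.isin(point_labels, list(keep))
    return points[mask], mask
```

The area threshold plays the role of the "buildings are larger than other objects" assumption: groups whose footprint is below `min_area` are discarded as lamps, cars or bushes.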

In areas with much vegetation, the most difficult problem in building detection is separating building blocks from trees, especially large trees with dense leaves. Vu et al. (14) address this problem by adding optical images to the classification process. Huang et al. (15) proposed a classification method for urban areas that fuses point clouds with high-resolution RGB images. As vegetation is highly reflective in the infrared wavelengths, infrared images are often used to assist building region detection (6, 8, 16). Chen et al. (6) and Sohn and Dowman (16) first extract a digital terrain model from the point clouds, then calculate the normalized difference vegetation index (NDVI) from infrared image data; building regions are detected by jointly analysing the non-terrain points and the NDVI. Besides image data, existing map information is also a very useful source: Vosselman (17) uses maps and images together to help detect buildings in LiDAR data. Naturally, with more supporting data sources, building detection yields better results. In many cities, the local agencies already hold databases of precise 2D building boundaries, which greatly simplify the detection problem. Normally, the building detection rate in dense urban areas is better than 90% using point cloud data alone; it can be even higher with the help of infrared images, which allow a better distinction between vegetation and man-made objects.
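The NDVI-based fusion step can be sketched in a few lines. The band arrays, threshold value and function names below are illustrative assumptions, not the exact procedure of (6, 16):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index, per pixel."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

def building_mask(above_ground, nir, red, ndvi_thresh=0.3):
    """Above-ground AND low NDVI -> likely man-made (building) pixels.

    above_ground: boolean raster of non-terrain cells (from the point cloud)
    nir, red    : co-registered infrared and red image bands
    """
    return above_ground & (ndvi(nir, red) < ndvi_thresh)
```

Vegetation pixels have a high NDVI (strong near-infrared reflectance) and are rejected even where they rise above the terrain, which is exactly the tree/building ambiguity described above.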

3 Feature extraction

Feature extraction is fundamental to further building structure analysis. Although there are standard patterns in building design, real buildings have very diverse shapes, complex structures and various styles that cannot be represented by a few simple standard components. Some researchers work on building models with simple curved surfaces, e.g. Teo (18); most focus on regular-shaped buildings with planar roofs or façades (19, 20). For regular-shaped buildings, straight lines and planes are the main features for structure inference and model reconstruction: line features represent the ridges and edges of buildings, while planar features represent the roofs and walls.

In point cloud data, planar roofs are easier to detect than edges. The footprint of a laser beam is very small compared to the spacing between points, so beams seldom hit the edges of buildings exactly. Building edges in the data are therefore often unclear and may contain much noise.

3.1 Planar roof extraction

Normally, a plane P can be represented by four parameters (a, b, c, d) in a three-dimensional Cartesian coordinate system, with the plane equation ax + by + cz + d = 0. Alternatively, the plane can be written in polar form through its normal n, which is determined by two angles (θ, φ), and the distance s from the origin O to the plane P, as shown in Figure 2: s = cos(φ)cos(θ)x + cos(φ)sin(θ)y + sin(φ)z. This form reduces the number of plane parameters from four to three and is the basic formula for clustering planar roofs in parameter space, for example with the Hough transformation.
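The conversion between the two parameterizations can be sketched as follows. The function names are illustrative; the code only assumes the (θ, φ, s) convention stated above, with θ the azimuth and φ the elevation of the unit normal:

```python
import numpy as np

def cartesian_to_polar_plane(a, b, c, d):
    """Convert ax + by + cz + d = 0 to (theta, phi, s) with
    s = cos(phi)cos(theta)x + cos(phi)sin(theta)y + sin(phi)z."""
    n = np.array([a, b, c], dtype=float)
    norm = np.linalg.norm(n)
    n /= norm
    s = -d / norm                  # signed distance from the origin to the plane
    if s < 0:                      # orient the normal so that s >= 0
        n, s = -n, -s
    phi = np.arcsin(np.clip(n[2], -1.0, 1.0))
    theta = np.arctan2(n[1], n[0])
    return theta, phi, s

def polar_plane_value(theta, phi, p):
    """Evaluate cos(phi)cos(theta)x + cos(phi)sin(theta)y + sin(phi)z at p."""
    x, y, z = p
    return (np.cos(phi) * np.cos(theta) * x
            + np.cos(phi) * np.sin(theta) * y
            + np.sin(phi) * z)
```

For a point lying on the plane, `polar_plane_value` returns s, which is why (θ, φ, s) cells can accumulate votes in a 3D Hough space.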

Figure 2 The parameters of a plane for 3D Hough transform. Figure courtesy of Overby et al. Citation(21).


Overby et al. (21) employ the Hough transformation to detect planar roofs in irregular point clouds. Tarsha-Kurdi et al. (22) compared the Hough transformation and the RANdom SAmple Consensus (RANSAC) algorithm for roof plane detection, finding RANSAC more efficient and robust. Huang and Brenner (23) proposed a rule-based roof plane detection algorithm, an extension of the Hough transformation, that finds multiple planes in the point cloud at the same time. It assumes that roof planes sharing a horizontal ridge have the same azimuth and that planes sharing a diagonal ridge have perpendicular azimuths, so they can be extracted together in Hough space, as shown in Figure 3. These constraints reduce the inaccuracy in roof detection caused by noisy points.
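A minimal RANSAC plane detector of the kind compared in (22) can be sketched as below; the iteration count and inlier tolerance are illustrative assumptions:

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.05, rng=None):
    """Fit one dominant plane to a point cloud with RANSAC.

    Returns (n, d, inlier_mask) with n . p + d = 0 for points on the plane.
    """
    rng = np.random.default_rng(rng)
    best_mask, best_n, best_d = None, None, None
    for _ in range(n_iter):
        # hypothesize a plane from three random points
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:               # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p0
        dist = np.abs(points @ n + d)  # point-to-plane distances
        mask = dist < tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_n, best_d = mask, n, d
    return best_n, best_d, best_mask
```

In a full roof extraction pipeline the detected inliers would be removed and the procedure repeated to find further planes.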

Figure 3 Rule-based roof detection to extract multiple roofs simultaneously. The graph shows roof planes and point clusters in Hough space. Figure courtesy of Huang and Brenner (23).


Besides roof detection using the 3D Hough transformation, Vosselman (24) provides a voting-based roof plane extraction algorithm. For each point p belonging to a roof, p and its n neighbouring points form a small group G. If p belongs to a plane P, the parameters of the plane fitted to G, i.e. its normal, should be the same as the parameters of P (25). Sometimes the normal estimation is not robust and is vulnerable to noise, especially in the overlap area of two strips. To increase the reliability of the normal estimation, the data can be smoothed, for example by Gaussian filtering, to reduce the influence of noise on the fitting results.
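The plane fit for a point and its neighbours is commonly done by principal component analysis of the local covariance, with the normal taken as the direction of smallest variance. A minimal sketch (the function name and neighbourhood size are illustrative assumptions, not the exact estimator of (24, 25)):

```python
import numpy as np

def estimate_normal(points, i, k=8):
    """Estimate the surface normal at point i from its k nearest neighbours
    by PCA: the eigenvector of the smallest eigenvalue of the covariance."""
    d = np.linalg.norm(points - points[i], axis=1)
    nb = points[np.argsort(d)[:k + 1]]      # point i itself plus k neighbours
    c = nb - nb.mean(axis=0)
    cov = c.T @ c / len(nb)
    w, v = np.linalg.eigh(cov)              # eigenvalues in ascending order
    n = v[:, 0]                             # smallest-variance direction
    return n if n[2] >= 0 else -n           # orient upwards for roofs
```

The smallest eigenvalue itself is a useful planarity score: near zero for points on a roof plane, large near edges and in vegetation.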

Moreover, the detection sequence can influence the sizes of the detected roofs, a point that is often neglected in roof extraction. Huang (25) explained this size deviation as an "early get, get more" effect and employed a competition algorithm to adjust the points at the boundary between each pair of adjacent roofs. The algorithm evaluates each point at the boundary of the detected roofs according to its distance to the candidate planes and re-assigns it to the closest roof. Figure 4 shows the size problem of the detected roofs and the result after competition.
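The competition step can be sketched as a nearest-plane re-assignment of boundary points. This is a simplified illustration of the idea in (25); the data structures and names are hypothetical:

```python
import numpy as np

def compete(points, labels, planes, boundary_idx):
    """Re-assign boundary points to the closest candidate roof plane.

    points      : (N, 3) array of roof points
    labels      : (N,) current roof label per point
    planes      : {label: (n, d)} with unit normal n and n . p + d = 0
    boundary_idx: indices of points lying on roof boundaries
    """
    new_labels = labels.copy()
    for i in boundary_idx:
        # distance from point i to every candidate plane
        dists = {lab: abs(n @ points[i] + d) for lab, (n, d) in planes.items()}
        new_labels[i] = min(dists, key=dists.get)
    return new_labels
```

Because the re-assignment depends only on geometric distance, the result no longer depends on which roof happened to be detected first.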

Figure 4 The sizes of the roofs are influenced by the detection sequence. (a) Model with four planar surfaces of equal size, (b) sizes of the detected results and (c) final results after competition.


Sampath and Shan (26) proposed a roof detection method based on k-means clustering. First, an eigenvalue analysis of the Voronoi neighbourhood of each roof point classifies the LiDAR points into planar and non-planar. Then the surface normals of all planar points are clustered with the fuzzy k-means method. To optimize the final result, parallel and coplanar segments are separated based on their distances and connectivity, respectively.
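The normal-clustering step can be sketched with a hard k-means, a simplification of the fuzzy k-means used in (26); the deterministic farthest-point initialization is an illustrative choice:

```python
import numpy as np

def kmeans_normals(normals, k, n_iter=20):
    """Hard k-means on unit surface normals (a simplification of the
    fuzzy k-means clustering of Sampath and Shan)."""
    # deterministic farthest-point initialization of the centres
    centres = [normals[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(normals - c, axis=1) for c in centres],
                   axis=0)
        centres.append(normals[d.argmax()])
    centres = np.array(centres, dtype=float)

    for _ in range(n_iter):
        # assign each normal to its nearest centre
        d = np.linalg.norm(normals[:, None, :] - centres[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # move each centre to the (re-normalized) mean of its cluster
        for j in range(k):
            if (assign == j).any():
                c = normals[assign == j].mean(axis=0)
                centres[j] = c / np.linalg.norm(c)
    return assign, centres
```

Each resulting cluster groups points with similar normals; the subsequent separation of parallel and coplanar segments then works on these clusters.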

Despite the various methods of roof detection (21–25), many difficult problems remain to be solved:

1.

Small roof problem: A small roof contains only a few points and may not be identified as a roof because its area and point count fall below the given thresholds. Even with multiple levels of thresholds, choosing proper values for identifying such roofs remains a dilemma.

2.

Curved roofs: There are no proper equations to describe a general curved roof surface, apart from standard geometric primitives such as the cylinder, cone, sphere and ellipsoid. The difficulty of parametric description makes detection of the surface model inconvenient. To date, there is no good automatic solution for roofs composed of multiple curved surfaces.

3.

Facilities on modern building roofs: Modern buildings are often tall buildings with regular boundaries but very complex roofs, carrying various objects such as air-conditioners, stairs, water pipes, fences, sun shades, signal stations and lightning rods. Whether or not to model such facilities as part of the building is a dilemma, and they cause great problems for the generation of reality-based models.

3.2 Feature line extraction

The line features of buildings can be grouped into two types: jump edges and ridge edges. Jump edges are located at the boundaries of roofs with height jumps, while ridge edges exist only at the intersection of two planes.

Jump edges can be detected with line detection algorithms similar to those used in images, such as the Hough transformation or boundary tracing, whereas ridge edges can be extracted after plane detection by intersecting two adjacent planes. Beyond this, feature detection in point clouds has several characteristics that differ from image analysis:

1.

Regularity of lines. Most buildings have regular shapes. LiDAR point clouds record the shape of a building by points, without shape deformation from, for example, lens distortion or perspective projection. The regularity of lines in the real world can therefore be used as a constraint in line detection; several investigations calculate the main direction of the building and regularize the detected lines with parallel or perpendicular constraints (24).

2.

Exact position of edges. As point clouds are measured by the time of flight of laser beams at certain angles, there are always gaps in the data where locally large height changes occur, as shown in Figure 5. An edge detection algorithm must, however, identify the correct position of the edge carefully. Most investigations assume the edge lies midway between two points with a large height difference; for airborne point cloud data, this assumption needs careful discussion.

3.

Neighbourhood system. LiDAR data are irregular point clouds with full 3D attributes. Feature processing requires selecting neighbouring points for parameter estimation or feature computation. In 3D point clouds, defining the neighbourhood is more complex than in image space, and the definition should be adapted to the purpose and the data; a proper definition of neighbourhood is very useful and effective for clustering. Figure 6 shows three types of distances for finding neighbouring points, originally analysed for planar roof detection by Filin and Pfeifer (27).
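The first two neighbourhood definitions of Figure 6 can be sketched as alternative distance functions in a k-nearest-neighbour query; the function name and arguments are illustrative:

```python
import numpy as np

def knn(points, i, k, mode="3d"):
    """k nearest neighbours of point i under two of the distance
    definitions compared by Filin and Pfeifer: full 3D distance or
    planimetric (x, y only) distance."""
    q = points[i]
    if mode == "planimetric":
        d = np.linalg.norm(points[:, :2] - q[:2], axis=1)
    else:
        d = np.linalg.norm(points - q, axis=1)
    order = np.argsort(d)
    return order[order != i][:k]      # drop the query point itself
```

The two definitions disagree exactly where it matters for buildings: at a wall, a planimetric neighbourhood mixes roof and ground points that are far apart in 3D.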

Figure 5 Real edge positions of a building. (a) The laser beam hits the edge of a building. (b) The correct position of the edge should be close to p, not midway between p and q 2.


Figure 6 Neighbourhood defined by (a) planimetric distance, (b) 3D distance, (c) distance measured by local surface. Figure courtesy of Filin and Pfeifer Citation(27), © ASPRS.


Actually, line features extracted from point clouds are considered unstable and imprecise. Instead, some researchers use feature lines from geo-referenced images to compensate for this disadvantage of LiDAR data.

4 Building model generation

The techniques for building model reconstruction from airborne point cloud data have much in common with the methods used for images. One major difference is that the features mostly used with point clouds are planar roofs, while in images they are lines. Model generation needs to combine the edge lines and roof surface features to infer and create reasonable structures.

Figure 7 shows the typical input information for building model reconstruction, from point clouds and images to extracted features and finally to the reconstructed model. As no feature detection and extraction algorithm is perfect, the input features for model generation always contain over-detection, omissions and other errors. Model generation therefore needs strategies that amplify the effect of correct features while limiting the influence of errors. Two major ideas guide the design of model generation algorithms: data-driven (bottom-up) and model-driven (top-down). Data-driven methods organize the model from the given information (data) under geometrical and topological constraints (19, 26). Model-driven methods fit predefined typical sub-models to the raw data (28, 29). Both have pros and cons. Model-driven methods are robust to noisy data but are not suitable for buildings with complex shapes, because the predefined models lack the required complexity. Data-driven algorithms, by contrast, can describe buildings of miscellaneous shapes but cannot deal with strongly erroneous input data; moreover, the bottom-up approach requires refined scene understanding capabilities that are not yet available.

4.1 Constraints

As man-made objects, most buildings have boundaries with regular shapes and strict geometric relations, which can be used as clues and constraints for model generation. Typical constraints are:

1.

Main direction. For a large percentage of buildings, the edges run parallel or perpendicular to the main direction of the roofs. For building blocks with many sub-blocks, a main direction of the whole building may not be found, but one can be detected for each sub-block.

2.

Topological consistency. Topological consistency requires adjacent roof boundaries to intersect strictly: for example, the connected ridge edges of two adjacent oblique roofs should match exactly, and the points of one roof boundary should be coplanar. The requirement is simple, but not many output results fulfil it.

3.

Regularity. Regularity is a prime geometric relation in many natural and man-made objects, including building models. Pauly et al. (30) give a list of geometric regularity relations, such as symmetry, translation, scaling, rotation and combinations thereof, as shown in Figure 8. These constraints can be used for both roof and façade reconstruction. Such regularization functions have also been used for post-editing of building models generated by the semi-automated image analysis technique of CyberCity Modeler (31).
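The main-direction constraint (item 1 above) can be sketched as snapping boundary segment orientations to the nearest parallel or perpendicular of a dominant direction estimated modulo 90°. The circular-mean estimator below is an illustrative choice, not a method from the cited works:

```python
import numpy as np

def snap_to_main_direction(angles, weights=None):
    """Regularize boundary segment orientations: estimate the dominant
    direction modulo 90 degrees, then snap every segment to the nearest
    parallel or perpendicular orientation.

    angles : segment orientations in degrees
    weights: optional per-segment weights (e.g. segment lengths)
    """
    a = np.asarray(angles, dtype=float)
    folded = a % 90.0                   # parallel/perpendicular collapse
    if weights is None:
        weights = np.ones_like(folded)
    # circular mean on the 90-degree cycle gives the main direction
    ang = np.deg2rad(folded * 4.0)      # map the 90-degree cycle to 360
    main = np.rad2deg(np.arctan2((weights * np.sin(ang)).sum(),
                                 (weights * np.cos(ang)).sum())) / 4.0 % 90.0
    snapped = main + np.round((a - main) / 90.0) * 90.0
    return main, snapped
```

Weighting by segment length keeps long, reliable edges from being dragged by short, noisy ones.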

Figure 7 Typical information for building model generation. (a) Input: original image and point cloud data. (b) Detected 2D line features and 3D planar roof features. (c) Final model with correct topological relations.


Figure 8 Regularity definition. Figure courtesy of Pauly and colleagues Citation(30). © 2006 ACM.


4.2 Data-driven methods

Data-driven methods trust the detected features and assume that buildings have such diverse shapes and complex structures that they cannot be decomposed into a group of typical sub-models. These methods generate the building geometry by analysing and grouping the input features.

In the early years, much research focused on buildings of simple shape. Zhang et al. (32) reconstructed building footprints from LiDAR data using orthogonal constraints; to obtain regular boundaries, they employed an effective curve regularization algorithm with three steps: split, intersection and merge. Sampath and Shan (33) extended the footprint-tracing algorithm by adding angle tolerances in a boundary-shrinking process. To remove the thresholds in boundary tracing, Huang et al. (34) used the edge length ratio of triangles as the criterion for boundary shrinking; their algorithm is robust across point clouds of various densities because it does not need a threshold tied to the specific data.

Rottensteiner (35) generated building models from the boundaries of roof planes extracted by fusion of images and point clouds. The author sought a method between boundary representation (B-Rep) and a system of "GESTALT" observations, defining rules that constrain the relation of two adjacent roofs to improve the shape of the boundaries and the topological relations: horizontal ridge edges, orthogonal adjacent walls and horizontal eaves, as shown in Figure 9. Such simple rules on local roof relations cannot globally optimize a whole building with many roofs in complex relations; in fact, the test data used in (35) are more complex than the algorithm can handle.

Figure 9 Geometric constraints used to improve the quality of connecting edges of buildings. Figure courtesy of Rottensteiner (35).


Partition and merge is the most effective data-driven approach and has been investigated in many publications on building model generation (16, 19, 36–38). Rau and Chen (36) partitioned detected roof regions with lines extracted from image data, with promising results: through partitioning, short lines are extended and form closed regions. Sohn and Dowman (16) studied a binary space partitioning (BSP) strategy to reconstruct building footprints from images and LiDAR point cloud data. Sohn and colleagues (19) extended the BSP-and-merge idea to buildings with complex roof topology. The input to the BSP algorithm is the detected line features together with the points belonging to the different roof planes, each point carrying the ID of its roof. The algorithm selects lines according to their stability to partition the points into two sub-regions, iterating until the points in each sub-region belong to only one roof. A merging process then generates the final roof topology based on the attribution of the points in the regions. The algorithm produced promising results on buildings with complex structure.
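The core of the BSP strategy, recursively partitioning labelled roof points with detected lines until each region is pure, can be sketched as below. This is a strong simplification of (16, 19); the line-selection score and all names are illustrative assumptions:

```python
import numpy as np

def bsp_partition(points, labels, lines):
    """Recursively split labelled roof points with candidate lines until
    each region holds points of a single roof, then report the regions.

    points: (N, 2) planimetric coordinates
    labels: (N,) roof id per point
    lines : list of (n, d) with n . p + d = 0, from detected edge lines
    """
    regions = []

    def split(idx, remaining):
        if len(np.unique(labels[idx])) <= 1 or not remaining:
            regions.append(idx)            # pure (or unsplittable) region
            return
        best = None
        for li, (n, d) in enumerate(remaining):
            side = points[idx] @ n + d >= 0
            if side.all() or not side.any():
                continue                   # line does not split this region
            score = abs(side.mean() - 0.5) # prefer balanced splits
            if best is None or score < best[0]:
                best = (score, li, side)
        if best is None:
            regions.append(idx)
            return
        _, li, side = best
        rest = remaining[:li] + remaining[li + 1:]
        split(idx[side], rest)
        split(idx[~side], rest)

    split(np.arange(len(points)), list(lines))
    return regions
```

In the full algorithm the pure regions would then be merged back into roof polygons along the shared partition lines.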

Figure 10 shows the performance of this algorithm on a complex building. The advantage of using BSP is that the line features and plane features compensate for each other's shortcomings and add information even in the presence of errors. This BSP approach was adopted by Cheng and colleagues (38) to reconstruct building models by combining features from stereo images and LiDAR data.

4.3 Model-driven methods

The strategy of using predefined models to reduce mathematical complexity and uncertainty is widely used in investigations of building model reconstruction from images. As the number of predefined models is limited, model-driven methods only perform well on buildings that can be composed from those models. Maas and Vosselman (28) is early work on the model-driven approach; it employs the invariant moments of the point cloud to solve for the parameters of parametric gable roofs. The building models in (28) are gable roofs with simple topological relations.

Lafarge and colleagues (29) proposed a method to reconstruct building models from digital surface model (DSM) data using a set of carefully designed parametric 3D building blocks; buildings of various types can be represented by combinations of these blocks. First, supported by feature extraction methods, the 2D structure lines of buildings are extracted either interactively or automatically. Then 3D blocks are placed on the 2D supports using a Gibbs model that controls both the block assemblage and the fit to the data. Bayesian decision making finds the optimal configuration of 3D blocks using a Markov chain Monte Carlo sampler with original proposition kernels. The algorithm is an important breakthrough that greatly extends the robustness of generic modelling for complex buildings. Even though the input data in (29) is a noisy DSM from stereo image matching, the method shows very good reconstruction quality, and it can be applied to common LiDAR data as well. Inspired by (29), the idea of generic models and the reversible jump Markov chain Monte Carlo method has been adopted by many related researchers (39, 40).

5 Building model quality evaluation

After reconstruction of the geometric model, the quality of the model needs to be evaluated, though this is very difficult to implement. Usually, the percentage of detected roofs, the deviation of boundaries, and the over-detection and omission rates are used as indicators of the quality of reconstructed models (19, 38). In the ISPRS building reconstruction test project, results are compared based on the detection rate of planes (41). Oude Elberink and Vosselman (42) published a study on how the quality of the original point cloud data influences the final 3D model: it starts from the quality of the input point clouds, proceeds through the errors of the detected features, and ends at the final model. They conducted the analysis simply by comparing, at point level, the distances between the original data and the reconstructed models; analysing these distances alone, however, is not enough for a comprehensive evaluation of modelling quality. Akca et al. (43) proposed co-registering the input model to the verification data with the least squares 3D surface matching method and evaluating the Euclidean distances between the verification and input data-sets. The strength of this method is that it is independent of data sources and coordinate systems, because the co-registration transforms the verification and input data-sets into the same coordinate system. Like (42), however, (43) has difficulties in evaluating the completeness of the model and in analysing needed corrections of the roof topology. To date, it is not easy to define standard methods and quantified parameters that satisfy the industrial requirements of 3D building model quality evaluation.
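The point-level distance check discussed above can be sketched as an RMS point-to-plane distance between the input points and the model planes they were assigned to; the data structures are illustrative, not those of (42):

```python
import numpy as np

def model_rmse(points, planes, labels):
    """Point-level quality check: RMS distance between input points and
    the reconstructed roof planes they are assigned to.

    points: (N, 3) input LiDAR points
    planes: {label: (n, d)} with unit normal n and n . p + d = 0
    labels: per-point label of the assigned model plane
    """
    d = np.empty(len(points))
    for i, p in enumerate(points):
        n, off = planes[labels[i]]
        d[i] = n @ p + off             # signed point-to-plane distance
    rmse = np.sqrt((d ** 2).mean())
    return rmse, d
```

The signed residuals are worth keeping alongside the RMSE: a systematic sign bias hints at a registration or plane-fitting error rather than random noise. As noted above, such a distance check says nothing about completeness or roof topology.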

6 Conclusion

A fully automatic system that fulfils the detailed model reconstruction task is not yet visible on the horizon. Many systems and approaches can run fully automatic building model reconstruction, but they succeed only for polyhedral models with simple roofs. There is still growing demand for highly detailed, reality-based 3D models, which will drive efforts to improve techniques and methodology.

On the one hand, with the increase in pointing accuracy and image quality brought by new sensors, the reliability of detected features improves, as does the degree of feature detail. These advances provide the potential for high-quality model reconstruction in the future. On the other hand, the rapid development of mobile mapping systems and location-based services raises requirements for finely detailed street models, which need to combine data from top and side views to reconstruct building models as completely as possible. That demands the integration of multiple challenging techniques, such as seamless texture processing, 3D façade reconstruction and multi-scale data fusion.

Notes on contributor

Huang Xianfeng is a senior researcher at the Future Cities Lab of ETH Zurich in Singapore and an associate professor at LIESMARS, Wuhan University. He works on LiDAR data processing, digital cultural heritage and urban computing.

Acknowledgements

This study was performed at the Singapore-ETH Centre for Global Environmental Sustainability (SEC), co-funded by the Singapore National Research Foundation (NRF) and ETH Zurich. The author thanks Prof. Armin Gruen for many valuable comments, suggestions and discussions on this article.

References

  • Förstner, W. 3D-City Models: Automatic and Semiautomatic Acquisition Methods. In Photogrammetric Week ‘99; Fritsch, D.; Spiller, R., Eds.; Wichmann Verlag: Heidelberg, 1999; pp. 291–303.
  • Gruen, A. 2000. Semi-automated Approaches to Site Recording and Modeling. Invited Pap. Int. Arch. Photogramm. Remote Sens., 33 (5/1): 309–318.
  • Brenner, C. 2005. Building Reconstruction from Images and Laser Scanning. Int. J. Appl. Earth Obs. Geoinf., 6 (3): 187–198.
  • Haala, N. and Kada, M. 2010. An Update on Automatic 3D Building Reconstruction. ISPRS J. Photogramm. Remote Sens., 65 (6): 570–580.
  • Hu, Y. and Tao, C.V. 2005. Hierarchical Recovery of Digital Terrain Models from Single and Multiple Return Lidar Data. Photogramm. Eng. Remote Sens., 71 (4): 425–433.
  • Chen, L., Zhao, S., Han, W. and Li, Y. 2012. Building Detection in an Urban Area Using Lidar Data and QuickBird Imagery. Int. J. Remote Sens., 33 (16): 5135–5148.
  • Lemp, D. and Weidner, U. 2005. Improvements of Roof Surface Classification Using Hyperspectral and Laser Scanning Data. In Proceedings of the ISPRS Joint Conference: 3rd International Symposium Remote Sensing Data Fusion Over Urban Areas (URBAN), 5th International Symposium Remote Sensing Urban Areas (URS); pp. 14–16.
  • Haala, N. and Brenner, C. 1999. Extraction of Buildings and Trees in Urban Environments. ISPRS J. Photogramm. Remote Sens., 54 (2): 130–137.
  • Suveg, I. and Vosselman, G. 2004. Reconstruction of 3D Building Models from Aerial Images and Maps. ISPRS J. Photogramm. Remote Sens., 58 (3): 202–224.
  • Sithole, G. and Vosselman, G. 2004. Experimental Comparison of Filter Algorithms for Bare-Earth Extraction from Airborne Laser Scanning Point Clouds. ISPRS J. Photogramm. Remote Sens., 59 (1): 85–101.
  • Axelsson, P. 1999. Processing of Laser Scanner Data – Algorithms and Applications. ISPRS J. Photogramm. Remote Sens., 54 (2): 138–147.
  • Rottensteiner, F. and Briese, C. 2002. A New Method for Building Extraction in Urban Areas from High-resolution LIDAR Data. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., 34 (3/A): 295–301.
  • Meng, X., Wang, L. and Currit, N. 2009. Morphology-based Building Detection from Airborne LIDAR Data. Photogramm. Eng. Remote Sens., 75 (4): 437–442.
  • Vu, T.T., Yamazaki, F. and Matsuoka, M. 2009. Multi-scale Solution for Building Extraction from LiDAR and Image Data. Int. J. Appl. Earth Obs. Geoinf., 11 (4): 281–289.
  • Huang, X., Zhang, L. and Gong, W. 2011. Information Fusion of Aerial Images and LIDAR Data in Urban Areas: Vector-stacking, Re-classification and Post-processing Approaches. Int. J. Remote Sens., 32 (1): 69–84.
  • Sohn, G. and Dowman, I. 2007. Data Fusion of High-resolution Satellite Imagery and LiDAR Data for Automatic Building Extraction. ISPRS J. Photogramm. Remote Sens., 62 (1): 43–63.
  • Vosselman, G. 2002. Fusion of Laser Scanning Data, Maps, and Aerial Photographs for Building Reconstruction. In IEEE International Geoscience and Remote Sensing Symposium, Toronto; pp. 85–88.
  • Teo, T. 2008. Parametric Reconstruction for Complex Building from Lidar and Vector Maps Using a Divide-and-Conquer Strategy. Int. Arch. Photogramm., Remote Sens. Spatial Inf. Sci., 37: 133–138.
  • Sohn, G., Huang, X. and Tao, V. 2008. Using a Binary Space Partitioning Tree for Reconstructing Polyhedral Building Models from Airborne Lidar Data. Photogramm. Eng. Remote Sens., 74 (11): 1425–1440.
  • Habib, A.F., Zhai, R. and Kim, C. 2010. Generation of Complex Polyhedral Building Models by Integrating Stereo-aerial Imagery and Lidar Data. Photogramm. Eng. Remote Sens., 76 (5): 609–623.
  • Overby, J., Bodum, L., Kjems, E. and Iisoe, P. 2004. Automatic 3D Building Reconstruction from Airborne Laser Scanning and Cadastral Data Using Hough Transform. In ISPRS Congress, Istanbul, July 2004.
  • Tarsha-Kurdi, F., Landes, T. and Grussenmeyer, P. 2007. Hough-transform and Extended RANSAC Algorithms for Automatic Detection of 3D Building Roof Planes from Lidar Data. Int. Arch. Photogramm., Remote Sens. Spatial Inf. Sci., 36: 407–412.
  • Huang, H. and Brenner, C. 2011. Rule-based Roof Plane Detection and Segmentation from Laser Point Clouds. In Joint Urban Remote Sensing Event (JURSE), April 2011; pp. 293–296.
  • Vosselman, G. 1999. Building Reconstruction Using Planar Faces in Very High Density Height Data. Int. Arch. Photogramm. Remote Sens., 32 (3-2W5): 87–94.
  • Huang, X. 2008. A Competition Based Roof Detection Algorithm from Airborne LiDAR Data. Int. Arch. Photogramm., Remote Sens. Spatial Inf. Sci., 37 (4): 319–324.
  • Sampath, A. and Shan, J. 2010. Segmentation and Reconstruction of Polyhedral Building Roofs from Aerial Lidar Point Clouds. IEEE Trans. Geosci. Remote Sens., 48 (3): 1554–1567.
  • Filin, S. and Pfeifer, N. 2005. Neighborhood Systems for Airborne Laser Data. Photogramm. Eng. Remote Sens., 71 (6): 743–755.
  • Maas, H.G. and Vosselman, G. 1999. Two Algorithms for Extracting Building Models from Raw Laser Altimetry Data. ISPRS J. Photogramm. Remote Sens., 54 (2): 153–163.
  • Lafarge, F., Descombes, X., Zerubia, J. and Pierrot-Deseilligny, M. 2010. Structural Approach for Building Reconstruction from a Single DSM. IEEE Trans. Pattern Anal. Mach. Intell., 32 (1): 135–147.
  • Pauly, M., Mitra, N.J., Wallner, J., Pottmann, H. and Guibas, L.J. 2008. Discovering Structural Regularity in 3D Geometry. ACM Trans. Graphics (Proc. SIGGRAPH), 27 (3): 43–54.
  • Gruen, A. and Wang, X. 2001. News from CyberCity-Modeler. In Automatic Extraction of Man-Made Objects from Aerial and Space Images (III), Proceedings of the Monte Verita Workshop, Ascona, June 10–15, 2001; Baltsavias, Gruen and van Gool, Eds.; A.A. Balkema Publishers; pp. 93–101.
  • Zhang, K., Yan, J. and Chen, S.C. 2006. Automatic Construction of Building Footprints from Airborne LIDAR Data. IEEE Trans. Geosci. Remote Sens., 44 (9): 2523–2533.
  • Sampath, A. and Shan, J. 2007. Building Boundary Tracing and Regularization from Airborne LiDAR Point Clouds. Photogramm. Eng. Remote Sens., 73 (7): 805–812.
  • Huang, X., Cheng, X., Zhang, F. and Gong, J. 2008. Side Ratio Constrain Based Precise Boundary Tracing Algorithm for Discrete Point Clouds. Int. Arch. Photogramm., Remote Sens. Spatial Inf. Sci., 37: 349–354.
  • Rottensteiner, F. 2003. Automatic Generation of High-quality Building Models from Lidar Data. IEEE Comput. Graphics Appl., 23 (6): 42–50.
  • Rau, J.Y. and Chen, L.C. 2003. Robust Reconstruction of Building Models from Three-dimensional Line Segments. Photogramm. Eng. Remote Sens., 69 (2): 181–188.
  • Chen, L.C., Teo, T.A., Kuo, C.Y. and Rau, J.Y. 2008. Shaping Polyhedral Buildings by the Fusion of Vector Maps and Lidar Point Clouds. Photogramm. Eng. Remote Sens., 74 (5): 1147–1157.
  • Cheng, L., Gong, J., Li, M. and Liu, Y. 2011. 3D Building Model Reconstruction from Multi-view Aerial Imagery and Lidar Data. Photogramm. Eng. Remote Sens., 77 (2): 125–139.
  • Huang, H., Brenner, C. and Sester, M. 2011. 3D Building Roof Reconstruction from Point Clouds via Generative Models. In Proceedings of the 19th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems; pp. 16–24.
  • Toshev, A., Mordohai, P. and Taskar, B. 2010. Detecting and Parsing Architecture at City Scale from Range Data. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR); pp. 398–405.
  • Rottensteiner, F., Sohn, G., Jung, J., Gerke, M., Baillard, C., Benitez, S. and Breitkopf, U. 2012. The ISPRS Benchmark on Urban Object Classification and 3D Building Reconstruction. ISPRS Commission III, WG III. http://www.isprs2012.org/abstract/1113.asp
  • Oude Elberink, S. and Vosselman, G. 2011. Quality Analysis on 3D Building Models Reconstructed from Airborne Laser Scanning Data. ISPRS J. Photogramm. Remote Sens., 66 (2): 157–165.
  • Akca, D., Freeman, M., Sargent, I. and Gruen, A. 2010. Quality Assessment of 3D Building Data. Photogramm. Rec., 25 (132): 339–355.
