Graphical Abstract
![](/cms/asset/a7db17b3-395d-4030-be28-9b918dfdaaf9/tadr_a_1250675_uf0001_oc.jpg)
Abstract
Complex robotic tasks require knowledge that cannot be acquired with the sensor repertoire of a mobile, autonomous robot alone. For robots navigating in urban environments, geospatial open data repositories such as OpenStreetMap (OSM) provide a source of such knowledge. We propose integrating a 3D metric environment representation with the semantic knowledge from such a database. The application we describe uses street network information from OSM to improve street geometry estimates derived from laser data. This approach is evaluated on a challenging dataset of the Munich inner city.
Acknowledgements
The authors would like to thank Mustafa Sezer, who worked on registration of the individual laser scans to a joint point cloud, and Sheraz Khan for providing assistance with the handling of the point clouds and the RMAP library.
Notes
No potential conflict of interest was reported by the authors.
Part of the material covered in this article has been presented in workshop form in [Citation1]. This article extends the previously published material with a more thorough description of the model used to infer street segment geometries, which now also takes inter-segment dependencies into account. Furthermore, the dataset used for the evaluation is greatly extended in comparison to the prior version.