
Exploring the vertical dimension of street view image based on deep learning: a case study on lowest floor elevation estimation

Huan Ning, Zhenlong Li, Xinyue Ye, Shaohua Wang, Wenbo Wang & Xiao Huang
Pages 1317-1342 | Received 15 Mar 2021, Accepted 13 Sep 2021, Published online: 06 Oct 2021
 

ABSTRACT

Street view imagery such as Google Street View is widely used in people’s daily lives. Many studies have detected and mapped objects such as traffic signs and sidewalks for urban built-environment analysis. While mapping objects in the horizontal dimension is common in those studies, automatic vertical measurement over large areas remains underexploited. Vertical information from street view imagery can benefit a variety of studies. One notable application is estimating the lowest floor elevation, which is critical for building flood vulnerability assessment and insurance premium calculation. In this article, we explored vertical measurement in street view imagery using the principle of tacheometric surveying. In a case study of lowest floor elevation estimation using Google Street View images, we trained a neural network (YOLO-v5) for door detection and used the fixed height of doors to measure door elevations. The results suggest that the average error of the estimated elevation is 0.218 m. The depth maps of Google Street View were utilized to traverse the elevation from the roadway surface to target objects. The proposed pipeline provides a novel approach for automatic elevation estimation from street view imagery and is expected to benefit future terrain-related studies over large areas.
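The tacheometric idea in the abstract, using an object of known height (here, a standard door) to recover distance and then elevation, can be sketched with a pinhole camera model. The function below is a minimal illustrative sketch, not the authors' implementation; the door height constant, parameter names, and the assumption of a level camera are all ours.

```python
# Illustrative sketch of tacheometric elevation estimation from a street
# view image, assuming an ideal pinhole camera held level with the road.
# All names and constants here are assumptions for illustration only.

STANDARD_DOOR_HEIGHT_M = 2.03  # assumed fixed door height (~80 inches)

def estimate_door_bottom_elevation(
    bbox,                # (x_min, y_min, x_max, y_max) detected door box, pixels
    focal_length_px,     # camera focal length in pixels (assumed known)
    principal_point_y,   # image-centre row; the horizon line for a level camera
    camera_elevation_m,  # road-surface elevation plus camera mounting height
):
    """Return (distance_m, door_bottom_elevation_m) via similar triangles."""
    _, y_min, _, y_max = bbox
    door_height_px = y_max - y_min
    if door_height_px <= 0:
        raise ValueError("invalid bounding box")

    # Tacheometric principle: a known physical height and its apparent
    # pixel height yield the distance along the optical axis.
    distance_m = focal_length_px * STANDARD_DOOR_HEIGHT_M / door_height_px

    # Vertical drop of the door bottom below the horizon row, converted
    # from pixels to metres at that distance (image y grows downward).
    drop_m = (y_max - principal_point_y) * distance_m / focal_length_px

    return distance_m, camera_elevation_m - drop_m
```

In the article's pipeline the roadway-surface elevation comes from a DEM and the Google Street View depth maps replace this simple single-image geometry, but the known-height-to-distance step is the same principle.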

Acknowledgements

The research was supported by the National Science Foundation (NSF) under grant SMA-2122054, the University of South Carolina ASPIRE program under grant 135400-20-53382, USDOT/North Jersey Transportation Planning Authority, the Texas A&M University Harold Adams Interdisciplinary Professorship Research Fund, and the College of Architecture Faculty Startup Fund. The funders had no role in the study design, data collection, analysis, or preparation of this article.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data and codes availability statement

The data and code that support the findings of this study are openly available at github.com/gladcolor/lowest_floor_elevation.

Author contributions

Conceptualization: H.N., Z.L., and X.Y.; Investigation: H.N. and Z.L.; Funding acquisition: X.Y., S.W., and Z.L.; Validation: X.H. and W.W.; Writing: H.N., Z.L., and X.H.; Review: S.W. and W.W.

Notes

1. YOLOv5, 25 October 2020. https://github.com/ultralytics/yolov5.

2. Kentaro Wada, labelme: Image Polygonal Annotation with Python. https://github.com/wkentaro/labelme.

3. Elevation Certificates, data downloaded on 22 October 2020. https://www.hrgeo.org/datasets/elevation-certificates-building-footprints-navd88.

5. NOAA Sea Level Rise Viewer DEM, 24 October 2020. https://coast.noaa.gov/htdata/raster2/elevation/SLR_viewer_DEM_6230/.

6. This Python module shows how to obtain historical panoramas: Robolyst, streetview, https://github.com/robolyst/streetview.

Additional information

Notes on contributors

Huan Ning

Huan Ning is a Ph.D. student in the Department of Geography at the University of South Carolina. His research areas include GeoAI and big data analysis, with publications on image understanding and data mining for city management using advanced computing technology.

Zhenlong Li

Zhenlong Li is an Associate Professor in the Department of Geography at the University of South Carolina, where he established and leads the Geoinformation and Big Data Research Laboratory (GIBD). He received a B.S. degree (2006) in Geographic Information Science from Wuhan University and a Ph.D. (2015, with distinction) in Geography and Geoinformation Sciences from George Mason University. His primary research field is GIScience, with a focus on geospatial big data analytics, spatial computing, cyberGIS, and geospatial artificial intelligence, with applications to disaster management, public health, and human dynamics.

Xinyue Ye

Xinyue Ye is a Harold Adams Endowed Associate Professor in the Department of Landscape Architecture and Urban Planning at Texas A&M University. He holds a Ph.D. degree in Geographic Information Science from the joint program between the University of California, Santa Barbara and San Diego State University, an M.S. in Geographic Information Systems from Eastern Michigan University, and an M.A. in Human Geography from the University of Wisconsin–Milwaukee. His research focuses on geospatial artificial intelligence, smart cities, spatial econometrics, and urban computing.

Shaohua Wang

Shaohua Wang is an Assistant Professor in the Department of Informatics, College of Computing, New Jersey Institute of Technology. His research interests include software engineering, program analysis, and artificial intelligence. Dr. Wang has published research in top computer science conferences and journals, such as ICSE, FSE, ASE, OOPSLA, and TSC.

Wenbo Wang

Wenbo Wang is a Ph.D. student in the Department of Informatics, College of Computing, New Jersey Institute of Technology. His research interests include natural language processing, machine learning, and vulnerability detection; his work has been published in Transactions in GIS.

Xiao Huang

Xiao Huang is an Assistant Professor in the Department of Geosciences at the University of Arkansas. He holds a Ph.D. degree in Geography from the University of South Carolina and a Master’s degree in Geographic Information Science and Technology from the Georgia Institute of Technology. His research interests involve GeoAI, remote sensing, and social science.
