Research Articles

Deploying machine learning to assist digital humanitarians: making image annotation in OpenStreetMap more efficient

John E. Vargas Muñoz, Devis Tuia & Alexandre X. Falcão
Pages 1725-1745 | Received 03 Jun 2019, Accepted 20 Aug 2020, Published online: 28 Aug 2020
 

ABSTRACT

Locating populations in rural areas of developing countries has attracted the attention of humanitarian mapping projects, since this information is essential for planning actions that affect vulnerable areas. Recent efforts have tackled this problem as the detection of buildings in aerial images. However, the quality and quantity of annotated rural building data in open mapping services such as OpenStreetMap (OSM) are not sufficient for training accurate detection models. Although these methods have the potential to aid in updating rural building information, they are not accurate enough to update rural building maps automatically. In this paper, we explore a human-computer interaction approach and propose an interactive method to support and optimize the work of volunteers in OSM. Over several iterations, the user is asked to verify or correct the annotations of selected tiles, and the model is improved with the newly annotated data. The experimental results, with simulated and real user annotation corrections, show that the proposed method greatly reduces the amount of data that OSM volunteers need to verify or correct. The proposed methodology could benefit humanitarian mapping projects, not only by making the annotation process more efficient but also by improving volunteer engagement.
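The iterative verify/correct loop described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the uncertainty heuristic, the function names (`interactive_annotation`, `predict`, `user_verify`), and the tile representation are all hypothetical placeholders, assuming a binary building/no-building score per tile and a volunteer acting as the oracle.

```python
def interactive_annotation(tiles, predict, user_verify, rounds=3, batch=5):
    """Hypothetical sketch of a human-in-the-loop annotation workflow:
    in each iteration the model scores tiles, the volunteer verifies or
    corrects a small selected batch, and the corrected labels accumulate
    as new training data.

    predict(tile)       -> float in [0, 1], model's building probability
    user_verify(t, lbl) -> bool, the volunteer's verified/corrected label
    """
    labeled = {}  # tile -> verified label
    for _ in range(rounds):
        unlabeled = [t for t in tiles if t not in labeled]
        if not unlabeled:
            break
        # select the tiles the model is least certain about (score near 0.5)
        scored = sorted(unlabeled, key=lambda t: abs(predict(t) - 0.5))
        for tile in scored[:batch]:
            labeled[tile] = user_verify(tile, predict(tile) >= 0.5)
        # in the real system, the detector would be retrained here on
        # the accumulated verified labels before the next iteration
    return labeled
```

The design intuition is that asking volunteers to verify only the tiles where the model is uncertain concentrates their effort where it improves the model most, which is how the annotation workload can be reduced.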

Acknowledgments

The authors would like to thank Bing Maps and OpenStreetMap for access to the imagery and the geographical objects’ footprints, respectively, through their APIs.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data and codes availability statement

The data and codes that support the findings of this study are available with a DOI at https://doi.org/10.6084/m9.figshare.12367532. Bing imagery cannot be made publicly available because of Bing’s imagery data policies.

Notes

Additional information

Funding

This research was funded by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP, grant 2016/14760-5 and 2014/12236-1), Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq, grant 303808/2018-7), Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES, finance code 001) and by the Swiss National Science Foundation (grant PP00P2-150593).

Notes on contributors

John E. Vargas Muñoz

John E. Vargas Muñoz received the B.Sc. degree in informatics engineering from the National University of San Antonio Abad in Cusco, Peru, in 2010, and the master's degree in computer science from the University of Campinas, Campinas, Brazil, in 2015. During 2017-2018, he worked on applications of machine learning to open geographical data as a visiting Ph.D. student in the Laboratory of Geo-information Science and Remote Sensing at Wageningen University, the Netherlands. In 2019, he received a Ph.D. in computer science from the University of Campinas, Campinas, Brazil. His research interests include machine learning, image processing, remote sensing image classification, and crowdsourced geographic information analysis.

Devis Tuia

Devis Tuia (S'07, M’09, SM’15) received the Ph.D. in environmental sciences from the University of Lausanne, Switzerland, in 2009. He was a Postdoc at the University of Valencia, the University of Colorado, Boulder, CO, and EPFL Lausanne. Between 2014 and 2017, he was Assistant Professor at the University of Zurich. He is now Full Professor at the Geo-Information Science and Remote Sensing Laboratory at Wageningen University, the Netherlands. He is interested in algorithms for information extraction and data fusion of geospatial data (including remote sensing) using machine learning and computer vision. He serves as Associate Editor for IEEE TGRS and the Journal of the ISPRS. More info on http://devis.tuia.googlepages.com/

Alexandre X. Falcão

Alexandre X. Falcão is a full professor at the Institute of Computing, University of Campinas, Campinas, SP, Brazil. He received a B.Sc. in Electrical Engineering from the Federal University of Pernambuco, Recife, PE, Brazil, in 1988, and has worked in biomedical image processing, visualization, and analysis since 1991. In 1993, he received an M.Sc. in Electrical Engineering from the University of Campinas, Campinas, SP, Brazil. During 1994-1996, he worked with the Medical Image Processing Group at the Department of Radiology, University of Pennsylvania, PA, USA, on interactive image segmentation for his doctorate, which he received in Electrical Engineering from the University of Campinas in 1996. In 1997, he worked on a project for Globo TV at the research center CPqD-TELEBRAS in Campinas, developing methods for video quality assessment. His experience as a professor of Computer Science and Engineering started in 1998 at the University of Campinas. His main research interests include image/video processing, visualization, and analysis; graph algorithms and dynamic programming; image annotation, organization, and retrieval; machine learning and pattern recognition; and image analysis applications in Biology, Medicine, Biometrics, Geology, and Agriculture.

