Editorial

Where are we now? Re-visiting the Digital Earth through human-centered virtual and augmented reality geovisualization environments


The original Digital Earth concept, formulated by Al Gore (Citation1998), is essentially a virtual reality system. In this (imagined) system, users can freely explore all possible recorded knowledge or information about the Earth through an interactive interface. While we imagine such an interface primarily as visual for now, it can be expected that in the future other senses will be engaged, allowing for even more realistic virtual experiences. Even though ‘realism’ in the experience is desirable (i.e., it feels real), immersive experiences provided by visualization environments can go beyond reality, as they can be enhanced with queryable information. Of course, one can also create fictitious experiences and simulations in such environments, including information about possible pasts (e.g., ancient Rome) and futures (e.g., a planned neighborhood), or spaces that we cannot (easily) experience directly (e.g., the Moon, Mars, other far-away places, under the oceans, the Earth’s core, etc.).

At the time of this writing, there is a new wave of excitement across scientific communities and commercial markets about virtual, augmented, and mixed reality technologies. Various forms of virtual reality (VR), such as stereoscopic displays, have seen such waves of excitement in the past (see Gartner’s hype cycles, Linden and Fenn Citation2003). VR experiences appear to inspire people, but the excitement about VR displays tends not to last long, perhaps partly because of the cumbersome hardware and the amount of involvement their setup requires. Current VR technologies offer better visual quality, somewhat easier-to-set-up hardware, and more realistic virtual experiences, and are more affordable than previous generations. Nonetheless, only time will tell whether VR will remain a mainstream display technology this time around.

Augmented reality (AR), on the other hand, was largely experimental until recently and is new to mainstream attention. Dedicated headsets, such as Microsoft HoloLens (https://www.microsoft.com/en-us/hololens), Magic Leap (https://www.magicleap.com/), or similar, offer a full state-of-the-art AR experience with hand- and head-tracking, but importantly, there are already convincing and functioning AR applications for smartphones that do not require wearing a ‘helmet’ (e.g., see augmentedcreativity.ch, https://highlights.ikea.com/2017/ikea-place/). These AR applications for mobile phones make a strong case for the durability of AR, because this way the technology reaches a much wider audience. In other words, perhaps not many people will invest in dedicated head-mounted displays, whereas many people already have smartphones. This prediction will depend on how the next generation of smartphones develops, but we envision that, with the continued development of the ‘Internet of Things’ (IoT), such mobile AR applications will eventually allow us to point our device at any object containing a chip and a data connection and see visually enriched information.

Mixed reality (MR, or sometimes XR), also referred to as ‘hybrid reality’, stands between VR and AR on the spectrum (Milgram and Kishino Citation1994), where physical and virtual objects are ‘mixed’ in a scene (including ‘augmented virtuality’). The fate of MR therefore depends on developments in VR and AR technologies.

Because developments related to virtual, augmented, and mixed reality often come from technological progress, the resulting scientific inquiry in these areas is often characterized as technology-driven, as opposed to hypothesis- or curiosity-driven. However, as with many conceptual categorizations, the lines between technology-driven and hypothesis-driven science blur very quickly. Technology-driven efforts lead to interesting applied-science outcomes, but also to fundamental insights, and inspire hypothesis-driven studies. A similar synergy exists when viewed through the lens of fundamental science: findings from hypothesis-driven studies inform applied-science efforts and technological developments, completing the full circle. Therefore, in this special issue, we feature both hypothesis- and technology-driven studies on VR/AR/MR, with a special focus on geographic information science and related fields, linking the research in this area to the original Digital Earth concept.

The first paper in the issue, by Hruby, Ressl, and de la Borbolla del Valle (Citation2019), titled ‘Geovisualization with Immersive Virtual Environments in Theory and Practice’, provides, as its title implies, an overview of the current state of the art from both a theoretical and a practical perspective, with a focus on user-centric thinking. It illustrates a ‘geovisualization immersion pipeline’ and presents a case study in environmental science. Hruby et al. enhance their theoretical framework by reflecting on the criteria that must be met to facilitate the experience of spatial presence. Through their captivating case study on a coral reef ecosystem, the authors illustrate how virtual environments can overcome the costs and complexities of a ‘field trip’ that allows experts to study coral reefs. The paper thus offers insights from the perspectives of technology, cognition, and application, touching upon all the important aspects of VR research in connection with geovisualization.

An important dimension in the VR discourse is individual and group differences. Who benefits most from using VR? Why are there such differences, and can we design the content in a way that benefits everyone? Lokka and Çöltekin (Citation2019) take an interest in these questions, through an abstraction-and-realism lens, in their paper titled ‘Toward optimizing the design of virtual environments for route learning: empirically assessing the effects of changing levels of realism on memory’. In this comprehensive controlled experiment, Lokka and Çöltekin compare three levels of realism in the context of route learning, demonstrating that using photo-textures selectively in a virtual environment meant for route learning benefits both short- and long-term recall of routes. Besides their three visualization factors (three different levels of realism), they have three levels of tasks (visual, visuospatial, and spatial), two levels of user abilities based on two different measures (high and low spatial abilities; high and low memory capacity), and three temporal factors (immediate recall, slightly delayed recall, and one-week delayed recall). The findings clearly demonstrate that neither too much nor too little realism is ideal for most tasks; the purely spatial task stands out because it is designed not to rely on photographic information, and is thus understandably different from all the others, which benefit from photo-textures. The good news is that, by adjusting the levels of realism for a given context, recall accuracy can be improved for everyone. This interesting experiment thus offers robust evidence that visual content should be carefully considered when creating memorable VR experiences, especially if the goal is learning, and specifically route learning.

Another paper that takes an interest in individual and group differences is that by Kubíček et al. (Citation2019), titled ‘Identification of altitude profiles in 3D geovisualizations: The role of interaction and spatial abilities’. Kubíček et al. examine a virtual 3D space that does not require dedicated hardware: a desktop 3D geovisualization software environment. The authors investigate system factors for 3D geovisualization, including the use of stereoscopy and navigational interactivity, and their effects on spatial task performance. The paper provides the authors’ insights into issues of 3D display technology, considering whether more cost-effective means can reduce the gap in users’ spatial abilities compared to more expensive systems. Kubíček et al. provide a focused assessment of task performance through a systematic experiment. The merits of the study underscore the need for more empirical investigations of navigational interactivity within such 3D geovisualizations.

While there is strong interest in creating immersive geovisualization and analysis applications across many disciplines, it is widely acknowledged that creating them is challenging and requires carefully crafted research and technological progress. The article by Havenith, Cerfontaine, and Mreyen (Citation2019), titled ‘How Virtual Reality can help visualise and assess geohazards’, discusses existing work, current limitations, and challenges from the perspective of geohazards research and VR applications. Their arguments center on the notion of a 4D geospace with six essential qualities: being multidimensional, spatiotemporal, integrating, fully interactive, (tele)immersive, and collaborative. Based on their own experience with providing platforms for the analysis of landslides, earthquakes, and other seismic hazards, they emphasize real-time processing capabilities, uncertainty visualization, model integration, and collaboration support as areas in which many hurdles still need to be overcome. The authors’ insights apply to other domains and disciplines, as does their appeal for future studies that prove (or disprove) the benefits of employing immersive technologies in different application domains.

The next paper in this special issue is on a related subject (emergency management), but this time with a closer look at MR. The potential of mixed and augmented reality technologies for emergency management, in the form of situated experiences, lies in bridging the gap between simulated and real-world experiences. As this technology proliferates in emergency management, it becomes increasingly important to understand which features and levels of representation suit the study of human movement. In their article titled ‘Mixed reality emergency management: Bringing virtual evacuation simulations into real-world built environments’, Lochhead and Hedley (Citation2019) investigate the role of this technology in complex building evacuation processes using simulations, and propose novel mobile interface prototypes. The authors detail the workflow for developing these prototype interfaces for assessing human movement at different scales. While the authors caution that such interfaces cannot replace the current use of GIS and egress modeling, the merits of this investigation suggest they can supplement these analyses for better-informed emergency preparedness.

Studying urban environments with VR and AR allows accounting for 3D structures (e.g., building heights), and a virtual experience of such an environment might offer a stronger emotional connection to the represented phenomena. To achieve this (and similar tasks), however, proper technical infrastructure and high-fidelity data are needed (whereas high-fidelity representation can be considered a different question, as demonstrated by the results of the Lokka and Çöltekin study). Yang and Lee (Citation2019) present their approach in ‘Improving Accuracy of Automated 3-D Building Models for Smart Cities’, where they contextualize their work in a topic related to two of the preceding papers: disaster management in urban environments. The paper proposes an automated 3D city modeling application, combining data from aerial photographs, terrestrial LiDAR, and field-survey measurements for particularly dense urban areas. The resulting 3D city models are generated mainly from the aerial photographs, enriched by the other measurements. The authors demonstrate that their approach improves positional accuracy compared to a more standard approach, and they offer an interpretation of the suitability of the produced model for commercial and scientific applications.

While the previous papers cover various technical and human-centered concerns, no VR (or AR/MR) application is complete without a consideration of its interaction modalities. The standard mouse-and-keyboard interaction is fundamentally incompatible with applications of the VR family, and there are various efforts considering different modalities (e.g., Çöltekin et al. Citation2016). The development of practical, comfortable, and intuitive navigation approaches for large-scale VR environments is therefore a key topic that is currently receiving a lot of research attention. Huang and Chen (Citation2019) offer a contribution in this area in their paper titled ‘A Multi-Scale VR Navigation Method for VR Globes’. The authors propose (and computationally evaluate) a novel method for navigating a virtual globe based on movements within a limited area of real space. The presented algorithms offer solutions to the problems of mapping actual locomotion in the tracked area to movements and perspective changes in the vastly larger VR environment, and of efficiently preventing the viewpoint from passing through object boundaries. The feasibility and merits of the approach are analyzed in a set of experiments, which demonstrate that the method enables effective navigation across a large range of scales, from full-globe to street-level or indoor views. Our special issue thus closes with this methodological paper.
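The general principle behind such multi-scale navigation, mapping steps taken in a small tracked room onto a vastly larger virtual globe, can be illustrated with a toy sketch. This is not Huang and Chen’s algorithm; the linear altitude-dependent gain function and all parameter values below are hypothetical, chosen only to show the idea of a scale-dependent translation gain.

```python
# Illustrative sketch (hypothetical, not the authors' method): a physical
# step in the tracked area is multiplied by a translation gain that grows
# with the viewer's altitude over the virtual globe, so the same step
# covers metres at street level but kilometres at globe scale.

def scale_gain(altitude_m: float, min_gain: float = 1.0,
               max_gain: float = 1e5) -> float:
    """Translation gain as a function of viewing altitude (metres).
    The linear ramp and clamp bounds are illustrative assumptions."""
    gain = min_gain + altitude_m / 100.0  # assumed linear relationship
    return max(min_gain, min(gain, max_gain))

def map_step(real_step_m: float, altitude_m: float) -> float:
    """Convert a physical step (metres in the tracked area) into a
    virtual displacement (metres in the VR globe)."""
    return real_step_m * scale_gain(altitude_m)
```

For example, a one-metre step at street level (altitude 0) maps to one virtual metre, while the same step at high altitude maps to a much larger displacement, bounded by the clamp so the viewpoint remains controllable.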

With these papers, we believe we have captured the current state of the art through original contributions from scientists who work in domains related to the Digital Earth agenda. We hope the featured papers will inspire our readers to ask and answer more of the challenging (but equally interesting) questions in this exciting area. Virtual, augmented, and mixed reality visualization environments might become the next generation of displays for interacting with information: spatial computing paradigms work ever better, the processing power of ever-smaller devices keeps growing, and it seems the trend will continue.

References

  • Çöltekin, A., J. Hempel, A. Brychtova, I. Giannopoulos, S. Stellmach, and R. Dachselt. 2016. “Gaze and Feet as Additional Input Modalities for Interaction with Geospatial Interfaces.” ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-2: 113–120. http://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/III-2/113/2016/ doi:10.5194/isprsannals-III-2-113-2016.
  • Gore, Al. 1998. “The Digital Earth: Understanding Our Planet in the 21st Century.” Australian Surveyor 43 (2): 89–91. doi:10.1080/00050348.1998.10558728.
  • Havenith, Hans-Balder, Philippe Cerfontaine, and Anne-Sophie Mreyen. 2019. “How Virtual Reality Can Help Visualise and Assess Geohazards.” International Journal of Digital Earth 1–17. doi:10.1080/17538947.2017.1365960.
  • Hruby, Florian, Rainer Ressl, and Genghis de la Borbolla del Valle. 2019. “Geovisualization with Immersive Virtual Environments in Theory and Practice.” International Journal of Digital Earth 1–14. doi:10.1080/17538947.2018.1501106.
  • Huang, Wumeng, and Jing Chen. 2019. “A Multi-Scale VR Navigation Method for VR Globes.” International Journal of Digital Earth 1–22. doi:10.1080/17538947.2018.1426646.
  • Kubíček, P., Č. Šašinka, Z. Stachoň, L. Herman, V. Juřík, T. Urbánek, and J. Chmelík. 2019. “Identification of Altitude Profiles in 3D Geovisualizations: The Role of Interaction and Spatial Abilities.” International Journal of Digital Earth 1–17. doi:10.1080/17538947.2017.1382581.
  • Linden, A., and J. Fenn. 2003. “Understanding Gartner’s Hype Cycles.” Strategic Analysis Report N° R-20-1971. Gartner, Inc.
  • Lochhead, Ian, and Nick Hedley. 2019. “Mixed Reality Emergency Management: Bringing Virtual Evacuation Simulations into Real-World Built Environments.” International Journal of Digital Earth 1–19. doi:10.1080/17538947.2018.1425489.
  • Lokka, Ismini E., and Arzu Çöltekin. 2019. “Toward Optimizing the Design of Virtual Environments for Route Learning: Empirically Assessing the Effects of Changing Levels of Realism on Memory.” International Journal of Digital Earth 1–19. doi:10.1080/17538947.2017.1349842.
  • Milgram, P., and F. Kishino. 1994. “A Taxonomy of Mixed Reality Visual Displays.” IEICE Transactions on Information and Systems E77-D (12): 1321–1329.
  • Yang, Byungyun, and Jungil Lee. 2019. “Improving Accuracy of Automated 3-D Building Models for Smart Cities.” International Journal of Digital Earth 1–19. doi:10.1080/17538947.2017.1395089.
