Research Article

Virtual forests: a review on emerging questions in the use and application of 3D data in forestry

Pages 29-42 | Received 07 Feb 2023, Accepted 17 May 2023, Published online: 14 Jun 2023

ABSTRACT

Digital 3D technologies are emerging methods for recording and visualizing forests. Therefore, it is not surprising that these technologies have seen many applications and developments in recent years. In this study, we conducted a comprehensive review of existing 3D technologies within the context of forestry and how they interact with users and stakeholders. We present a summary of the requirements, visualization, and application of virtual forests. This includes an overview of state-of-the-art 3D reconstruction and visualization tools, which have seen a major increase in interest in the past few years, as evidenced by a preliminary analysis of research keywords. Based on the reviewed studies, we present current trends and emerging questions, as well as challenges, in the field of virtual forests. Further, we discuss the identified trends and challenges related to data acquisition, along with existing and potential future interactions between 3D data and more specific demands from the forestry sector. We conclude that the use of digital 3D data in forestry is on the rise and that such novel methods show great potential and merit further attention.

This article is part of the following collections:
Digitalization of Forest Operations

Introduction

The great importance of forest ecosystems is undeniable: they provide timber and valuable ecosystem services, such as clean air, clean water, and biodiversity, and they help mitigate climate change effects (Brockerhoff et al. Citation2017). Forest ecosystems are subject to change, be it from natural disturbances, management activities including harvest operations, or climate change (Overpeck et al. Citation1990). They are, however, slowly changing systems; the effects of decisions regarding their management are not always immediately visible, and alterations due to climate change are not directly experienced by humans (Lindner et al. Citation2014). The people managing forests today, including various stakeholders, are likely not the same people who will benefit from them decades from now. This creates a discrepancy in communication and complicates management. The advent of digital technologies has opened the door to the virtual representation of objects, such as trees and forests, as well as their development over time. Novel methods can be used to visualize ecosystems and represent them in immersive ways, and they hold great potential to support and enhance communication between stakeholders and decision makers in forestry.

Bringing new technologies into a rather traditional field, such as forestry, comes with many challenges and opportunities. While the use of new technologies for purposes such as game development, architecture, or construction has become more and more common, their use in forest and landscape management is still very limited (Hejtmánek et al. Citation2022). Additionally, forestry has unique characteristics that set it apart from the fields mentioned above, and standard methods applied in other fields might not work here, because the dynamic nature of forested ecosystems is crucial for an adequate representation of management options. This creates an interesting dynamic around the subject of virtual forests and how they are implemented and accepted.

The concept of virtual forest has a broad definition; indeed, any digitized data related to forests may be considered a virtual representation of the forest. In this paper, the term virtual forest includes both geometric and semantic factors. Geometry-wise, the emphasis in this paper is on the use of 3D data to create the virtual forest environment. This 3D data may be either reality-based or design-based (Remondino and Rizzi Citation2010; Yang et al. Citation2018). In reality-based methods, 3D measurement techniques such as lidar and photogrammetry are considered, while procedural modeling is an example of design-based 3D data. This geometric representation of the forest may thereafter be augmented by semantic information, e.g. textures and species information. The virtual forest concept, as discussed here, includes studies relating to the various interactions between these two factors and how they are used to fulfill the different needs of the forestry sector. Virtual forests are therefore 3D digital representations of forests that can be used for a variety of applications, such as research, training, gaming, and forest planning.

In order to develop a virtual forest based on real forest data, multiple steps are needed. First, the necessary data must be acquired and processed. The data can come from various sources, such as lidar (Calders et al. Citation2016), photogrammetry (Kalacska et al. Citation2021), digital elevation models (Uusitalo and Orland Citation2001), aerial photography (Bates-Brkljac and Dupras Citation2001), or traditional manual forest inventories (Huang et al. Citation2020). This data needs to be processed to be used in virtual forest applications (Szabó et al. Citation2012). Raw data from multiple sources must be combined and converted into commonly used formats for digital applications, e.g. 3D point clouds or 3D meshes. Finally, the application needs to be developed considering user-friendliness and functionality (Bozgeyikli et al. Citation2016).

With regard to data visualization, one novel technology is Extended Reality (XR), which refers to a virtual environment in which users may interact in an immersive manner (Milgram and Kishino Citation1994). This may be done by augmenting the real world with virtual content (Augmented Reality or AR), immersing the user completely in the virtual world with a certain degree of haptic capability (Virtual Reality or VR), or a combination of the two (Mixed Reality or MR).

To form a basis for the development and use of these technologies in the field of forestry and landscape management, we therefore aimed to identify questions of emerging interest. The purpose of this study was to fill knowledge gaps which hinder a wider application of such technologies in forestry research, practice, and communication between stakeholders. Mapping the various topics in virtual forests will help focus efforts on research and development and identify new opportunities.

Materials and methods

To ascertain recent trends in the field of virtual forests, we conducted a simple search of the Clarivate Web of Science for several keyword combinations in the topics of digital and virtual forests. Figure 1 shows the number of search results for different keywords and combinations thereof. No restriction was placed on the fields of the searches. We considered not only the interest in the topic, but also how it is applied at the cutting edge of forestry and the key challenges in creating a 3D representation of a forest.

Figure 1. Bar chart describing the number of certain keywords related to virtual forest found on Clarivate Web of Science.

While almost 13,000 articles matched our search for “digital forest,” the topic “virtual forest” yielded only about one-fifth as many results (3,004). Both keyword searches showed an upward trend in the number of related articles in recent years; however, “digital forest” showed a much steeper incline. The keyword search for “digital twin” yielded many articles in an unrestricted search, but the number dropped drastically, to only 13, when restricted to forestry (Figure 1). This indicates that while the concept of the virtual forest is potentially an important and emerging topic of interest, little has yet been published in relation to forestry.

From this first overview of the literature, several sub-topics emerged from the concept of virtual forests. First, the creation of a virtual forest environment has gained a lot of attention in recent years, concurrent with developments in 3D reconstruction technologies (Calders et al. Citation2016; Abegg et al. Citation2017; Mokroš et al. Citation2018). Lidar, or laser scanning, has been significantly democratized as more manufacturers have emerged, creating a competitive market for both terrestrial laser scanners (TLS) and aerial laser scanners (ALS). The latter term is also often used interchangeably with the term lidar itself. Passive remote sensing has likewise seen important growth, with developments in photogrammetry and multi-view stereopsis (MVS), as well as passive sensors such as multispectral and hyperspectral imagery. These novel data acquisition techniques are instrumental in promoting the notion of a virtual forest.

A second sub-topic emerging from the existing literature focuses on the exploitation of the data generated by these techniques for the purpose of creating a virtual forest. Novel visualization methods, such as VR and MR, have been used to represent virtual forests in an immersive manner. Other work in the literature has involved using 3D data to create information systems akin to GIS (Geographical Information Systems), albeit exploiting the information provided by the third dimension, as opposed to the 2.5D approach used in traditional GIS. This is most evident in the case of urban forests, where 3D trees interact with an existing 3D GIS of the urban environment in various analyses related to micro-climate management. The third sub-topic identified relates to how the created virtual forest is used. The use of immersive technologies opens the potential for various types of uses, ranging from technical applications, such as remote forest monitoring and modeling, to human-related analyses, such as the effect of a virtual forest on users.

Results and analysis

This section will describe the following topics based on the results of our review:

  • Requirements for virtual forests, where we discuss state-of-the-art methods for 3D data acquisition and processing, as well as other requirements for the use of 3D data within the context of forestry.

  • Visualization of 3D forests, where we look at the creation and design of virtual forest applications, including the different available visualization platforms.

  • Applications of virtual forests, where we then describe the most common applications of such digital forestry systems.

Requirements for virtual forests

3D data reconstruction techniques

Various 3D sensors and techniques are available today, aided by significant developments in both hardware and software. In general, these techniques can be divided into passive and active sensors, depending on the way they acquire spatial data (Granshaw Citation2020). For both passive and active sensors, a further distinction can be made based on the distance between the object and sensor: terrestrial, aerial (including drones and planes), and extraterrestrial (satellite-based).

One of the most common passive remote sensing methods is photogrammetry. In a wider sense, photogrammetry is a technique that can recover 3D information from 2D images using optics and projective geometry (Schenk Citation2005). Images for photogrammetry can be acquired by different means, including drones (Frey et al. Citation2018), planes, and satellites (Rupnik et al. Citation2018). Terrestrial close-range photogrammetry is also applied frequently in other domains, due to its potential to deliver very precise and photo-realistic results, but it is less often used in forestry. This is due to photogrammetry’s limiting factors, which include the need for multiple overlapping images from various points of view and for a convergent network of images (Schenk Citation2005). The heterogeneous nature of forest terrain makes terrestrial photogrammetry ill-suited for large-scale data acquisition, although it has been used for single-tree 3D reconstruction in some studies (Fol et al. Citation2022). In the near future, emerging low-cost sensors, such as fish-eye (Kükenbrink et al. Citation2022) and panoramic 360° (Hristova et al. Citation2022) cameras, may become competitors to TLS in forest applications.

Major developments in the computer vision domain in the past two decades have catapulted photogrammetry from a mainly aerial mapping technique to an important digital 3D reconstruction method. Advancements in image matching and camera pose estimation using the Structure from Motion (SfM) method (Wu et al. Citation2011) have enabled fast and automatic processing. Meanwhile, dense matching techniques such as Semi-Global Matching (SGM; Hirschmüller Citation2011) have made it possible for photogrammetry to generate dense 3D point clouds not unlike those created by lidar. More recent developments have increased photogrammetry’s 3D reconstruction capabilities further, such as the introduction of constraints derived from Artificial Intelligence (AI) in dense matching (Stathopoulou and Remondino Citation2020) and novel 3D structure generation paradigms, e.g. neural radiance fields (Mildenhall et al. Citation2021).
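At the heart of these pipelines sits triangulation: once SfM has estimated camera poses, each pair of matched image observations is converted into a 3D point. The sketch below shows the classic linear (DLT) formulation with numpy; the camera matrices and 3D point are invented for illustration, not taken from any cited study.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel coords."""
    # Each observation contributes two linear constraints u*P[2]X = P[0]X, etc.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # homogeneous solution = right null vector
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize

# Two illustrative cameras: identity pose and a 1-unit baseline along X.
K = np.diag([800.0, 800.0, 1.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])        # a point in front of both cameras
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
print(np.allclose(X_est, X_true, atol=1e-6))  # → True (noise-free case)
```

Dense matching such as SGM repeats this, in rectified form, for every pixel, which is what produces the lidar-like point clouds mentioned above.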

In terms of active sensing methods, lidar is the technology most encountered in the literature today. Lidar is a method for determining ranges (variable distances) by targeting an object or a surface using a laser beam and measuring the time required for the reflected light to return to the receiver. This laser sensing technology may further be divided into ground-based (TLS; Rehush et al. Citation2018) and aerial applications (ALS; Nex et al. Citation2015). In current parlance, the latter is often associated with the term “lidar” itself, while the former may also be known by the term “laser scanning.” Furthermore, technological developments in miniaturization have led to the invention of other types of TLS, such as mobile laser scanners (MLS; Kaartinen et al. Citation2012; Cabo et al. Citation2018) and even handheld solid-state lidar (SSL; Luetzenburg et al. Citation2021; Murtiyoso et al. Citation2021). Lidar is also characterized by the capability to acquire multiple returns. In aerial lidar, the first return is the highest object detected, while the last return, depending on the scenario, characterizes the ground surface. Full-waveform lidar records the complete waveform of each backscattered pulse (Mallet and Bretar Citation2009). It is thus possible to use this technique to generate both digital terrain models (DTMs) and digital surface models (DSMs), even in the presence of dense vegetation. This is particularly useful in forest scenes, as it generates a 3D representation of both the forest bed and the canopy (Dalponte et al. Citation2009). Lidar is therefore used more often than passive sensors in forestry applications. Table 1 shows a categorization of these 3D techniques.
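The first-return/last-return logic described above can be sketched in a few lines: gridding the highest first returns gives a DSM, the lowest last returns a DTM, and their difference a canopy height model (CHM). The pulse values and grid resolution below are invented for illustration.

```python
import numpy as np

# Illustrative pulses: (x, y, first_return_z, last_return_z) in metres.
pulses = np.array([
    [0.2, 0.3, 22.5, 1.1],   # canopy hit, ground last return
    [0.7, 0.4, 21.8, 0.9],
    [1.3, 0.2,  1.0, 1.0],   # open ground: single return
    [1.6, 0.8, 18.2, 1.2],
])

cell = 1.0                                  # grid resolution in metres
cols = (pulses[:, 0] // cell).astype(int)   # 1D grid along x for brevity
nx = cols.max() + 1

dsm = np.full(nx, -np.inf)   # highest first return per cell
dtm = np.full(nx, np.inf)    # lowest last return per cell
for c, first_z, last_z in zip(cols, pulses[:, 2], pulses[:, 3]):
    dsm[c] = max(dsm[c], first_z)
    dtm[c] = min(dtm[c], last_z)

chm = dsm - dtm              # canopy height model per cell
print(chm)                   # → [21.6 17.2]
```

Production workflows add return classification and interpolation, but the DSM − DTM = CHM relationship is exactly this.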

Table 1. Categories of state-of-the-art 3D techniques and example studies.

Notions on levels of detail and suitability of techniques

While various techniques exist for the creation of 3D data, the inevitable question is which method is preferable for forestry. The answer to this question, as is the case with most multi-modal geospatial applications, depends on the scale of the object in question. Earlier studies in forestry sub-divided this problem into area-based approaches (ABA; Næsset Citation2002) and individual tree detection (ITD; Hyyppa Citation1999; Kaartinen et al. Citation2012).

In the ITD-based approach, the 3D data is first segmented; this means that objects with similar properties are grouped together (Wang and Shan Citation2009b). Manual segmentation is possible but highly time consuming (Burt et al. Citation2019). Various methods have been presented to automate the segmentation process and derive metrics used for forest planning, such as stem location, tree height, stem diameter at breast height (DBH), stem density, and timber volume (Li et al. Citation2012; Vega et al. Citation2014; Hackenberg et al. Citation2015; Parkan and Tuia Citation2015; Burt et al. Citation2019). Recently, AI in the form of machine learning techniques has also been applied to help with this task (Mazzacca et al. Citation2022).
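Of the metrics listed above, DBH is commonly estimated by fitting a circle to a thin horizontal slice of the segmented stem point cloud at breast height (1.3 m). A minimal sketch using an algebraic (Kasa) least-squares fit; the slice points are synthetic and noise-free, purely for illustration.

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit: returns (cx, cy, r).
    Uses the linearization 2*cx*x + 2*cy*y + c = x^2 + y^2,
    where c = r^2 - cx^2 - cy^2."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

# Illustrative stem slice at 1.3 m: points on a 0.15 m radius stem
# centred at (2.0, 3.0), as a TLS scan of a clean stem might record.
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
slice_xy = np.column_stack([2.0 + 0.15 * np.cos(theta),
                            3.0 + 0.15 * np.sin(theta)])

cx, cy, r = fit_circle(slice_xy)
dbh = 2 * r
print(round(dbh, 3))  # → 0.3 (metres)
```

Real scans add noise, occlusion, and non-circular stems, which is why robust variants (RANSAC, cylinder fitting) appear throughout the cited segmentation literature.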

In light of the various sensors available on the market, a more detailed categorization of their use may be beneficial for future users. For example, close-range photogrammetry has been used to study specific features of trees (Mokroš et al. Citation2018), while the use of TLS, MLS, and fish-eye cameras has been discussed as tools to support forest inventories (Kükenbrink et al. Citation2021). Meanwhile, Gollob et al. (Citation2021) experimented with SSL sensors in the Apple iPad to derive tree parameters. Applications of unmanned aerial vehicles (UAV) and ALS generally involve ABA-based approaches (e.g. Sankey et al. Citation2017). Figure 2 summarizes the relationships between the identified 3D techniques and their uses in the literature.

Figure 2. Categorization of 3D forest mapping techniques based on scene size and complexity.

In Figure 2, the X-axis divides the 3D data requirements based on the needs of the forestry domain. This division is partly inspired by a preexisting classification of level of detail (LoD) in urban mapping, standardized using the CityGML paradigm (Biljecki et al. Citation2016). The LoD concept, as implemented in CityGML, is based on geometric LoDs, which we attempt to emulate in Figure 2, starting at the micro-scale and ending with very large scenes. Meanwhile, the Y-axis presents the complexity of the scenes in terms of geometric details.

3D data representation

In terms of 3D data generated by reconstruction techniques, several data formats may be encountered (Figure 3). The first format is the 3D point cloud, which is often, but not always (Nocerino et al. Citation2020), the direct result of the 3D reconstruction process, whether via passive or active sensors. The point cloud is therefore the most direct measurement of the scene and constitutes a high-quality, voluminous representation of real-world objects (Rusu and Cousins Citation2011). A 3D mesh, on the other hand, is a form of surface reconstruction most often generated by triangulating a point cloud. The mesh can also be textured, either with colors interpolated from its original point cloud or from the projection of image pixels when the projective relationship between the 2D image and 3D model has been established (such as in most cases of photogrammetric application). Meshes have advantages in data handling, as they usually require a smaller data volume compared with raw 3D point clouds (Zerman et al. Citation2020). However, in some cases using a mesh involves a loss of the original detail, due to simplifications or interpolations, although this loss is usually very small (Park and Lee Citation2019).
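The data-volume argument can be made concrete with a back-of-the-envelope comparison; the point and triangle counts below are illustrative round numbers, not measurements from any cited study.

```python
# A raw point cloud stores every sample, while a decimated indexed mesh
# stores far fewer vertices plus integer face indices.
FLOAT = 4   # bytes per float32 coordinate
INT = 4     # bytes per int32 vertex index

n_points = 10_000_000                   # e.g. a dense TLS scan of a plot
cloud_bytes = n_points * 3 * FLOAT      # x, y, z per point

n_vertices = 500_000                    # surface after meshing + decimation
n_faces = 1_000_000                     # triangles (~2x vertices is typical)
mesh_bytes = n_vertices * 3 * FLOAT + n_faces * 3 * INT

print(cloud_bytes // 2**20, mesh_bytes // 2**20)  # MiB: 114 vs 17
```

Per-point attributes (color, intensity, return number) widen the gap further, which is why meshes dominate in interactive applications.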

Figure 3. Different representations of 3D data for trees: (a) point cloud, (b) 3D mesh using the Poisson method, (c) parametric quantitative structure model (QSM), and (d) stylistic design-based 3D model drawn using Blender (https://www.blender.org/accessed 12 June 2023).

While meshes still retain a level of fidelity to the original form of the object, parametric models sacrifice this in favor of simplified primitives which are easier to integrate into mathematical models (Ochmann et al. Citation2016; Yang et al. Citation2018). Such primitives are often simple forms (e.g. cylinders, planes) that are efficient for parametric modeling and computation, whereas mathematical models may struggle to use point clouds and meshes as input. In tree modeling, the use of parametric quantitative structure modeling (QSM) is widespread (Hackenberg et al. Citation2015; Lau et al. Citation2018). Another option is design-based 3D modeling. These stylized 3D models emphasize aesthetic values to deliver realistic renders, and this form of modeling is therefore most commonly used in media such as films and video games.
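One reason QSMs integrate so easily into mathematical models is that forest metrics fall out of the primitives directly; woody volume, for instance, is simply the sum of the cylinder volumes. A minimal sketch with invented cylinder dimensions, not values from any cited QSM study:

```python
import math

# Illustrative QSM: each cylinder primitive as (radius_m, length_m),
# ordered from stem base to branch tips.
qsm_cylinders = [
    (0.150, 2.0),  # lower stem
    (0.120, 2.0),
    (0.080, 2.0),  # upper stem
    (0.030, 1.5),  # branches
    (0.020, 1.0),
]

# Total woody volume = sum of pi * r^2 * l over all primitives.
volume_m3 = sum(math.pi * r**2 * l for r, l in qsm_cylinders)
print(round(volume_m3, 4))  # → 0.2779 (cubic metres)
```

The same primitive list yields branch-order statistics, taper curves, or biomass (via wood density) with equally simple arithmetic, which point clouds and meshes cannot offer directly.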

Within the context of forestry, the choice of 3D data format is subjective and depends on the application. For example, point clouds and meshes are preferable for performing measurements and extracting tree species and other important forest parameters (e.g. DBH, clear wood), owing to their fidelity to real-world dimensions. On the other hand, parametric models may be more effective/appropriate for predictions about future forest conditions.

Regarding 3D data visualization for research and academic purposes, the development of XR applications is often done using a game engine, which provides tools and libraries to create and deploy the software. Popular game engines include Unreal (https://www.unrealengine.com/fr, accessed 10 March 2023) and Unity (https://unity.com/, accessed 10 March 2023). The file format required for 3D models in game engines is usually a 3D mesh or a point cloud, which presents advantages in terms of both rendering and visual aesthetics compared with parametric models. Interactive applications can be developed using these 3D model formats, and physical processes, such as collision, gravity, light, and wind, can be implemented in the virtual environment.

Several case studies regarding the conversion of point cloud data to meshes can be found in geology (Bonnaffe et al. Citation2007) and civil engineering (Wang and Chu Citation2009; Laksono and Aditya Citation2019). A method for processing point cloud data from natural environments was presented by Risse et al. (Citation2018). They developed a workflow that can generate meshes from lidar scans with very high accuracy (millimeter range) for the visualization of ant habitats.

Typically, 3D models of trees created by lidar scanning, photogrammetry, or artistic tools are static. However, there are significant advantages to using dynamic models that are generated procedurally. Procedural modeling means that models are created using a set of constructive and generative rules (Ullrich et al. Citation2010). Dynamic tree models that are affected by environmental parameters can be used to show the influence of various environmental models and interactions on a forest scene. A vast set of 3D models is needed for the creation of compelling 3D scenes. Their development can be highly time consuming when done manually (Talton et al. Citation2011). In contrast, once procedural rules have been developed, the subsequent generation of models can be scaled up at minimal cost.

Procedural modeling could also be used to generate trees and forests. Because trees are biological organisms, their growth and appearance are determined by biological rules, forming a favorable foundation for attempts to recreate these rules for 3D models (Ullrich et al. Citation2010). Notably, as most of the methods currently available for the procedural generation of forest scenes are primarily used in gaming, their main objective is to create visually appealing results rather than ecologically correct ones.

A method to recreate a single 3D tree model with procedural methods was discussed by Stava et al. (Citation2014). They used different types of static models as starting points, e.g. models from lidar scans, Xfrog (a procedural organic 3D modeler for plants; Lintermann and Deussen Citation1999), or SpeedTree (a vegetation modeling program; https://store.speedtree.com/, accessed 9 March 2023), as well as trees generated by Open L-Systems (a type of formal grammar to describe plant growth; Lintermann and Deussen Citation1999). Stava et al. (Citation2014) created a model controlled by 24 parameters. The parameters were found by maximizing the similarities between the input trees and the generated trees. Controlling the models with parameters made it possible to create trees that were sensitive to their environment; for example, a tree standing next to another tree would look different than a free-standing tree. Populating virtual forests with procedurally generated trees not only makes it possible to create diverse and compelling 3D scenes but also provides a means to change the generated models using parameters.

Visualization of 3D forests

History of forest visualization

The visualization of forests has been a research topic for more than 30 years. Orland (Citation1988) stated that video imaging (“cut-and-paste” or “paint” over digital photographic images) can be used to speculate about the visible consequences of different forest management practices. Since then, the goal of using computer simulations to visualize a forest stand, either as it currently exists or as it will look in the future depending on specific management activities, has been mentioned in many publications related to forest visualization. However, what has changed over time is the visualization quality and fidelity and the degree of interactivity and immersiveness, which have developed alongside technical advances in modern computer hardware and software.

In the 1990s, projects on forest visualization used manipulated photographs (e.g. Orland Citation1994) or individual computer-generated 3D images from fixed key viewpoints (e.g. Buckley et al. Citation1998; Bergen et al. Citation1998). These techniques were compared by McGaughey (Citation1998), Uusitalo and Orland (Citation2001), Tang and Bishop (Citation2002), and Karjalainen and Tyrväinen (Citation2002), while the effects of image quality were investigated by Daniel and Meitner (Citation2001).

In the 21st century, many studies have still involved the use of single images (Stoltman et al. Citation2004, Citation2007; Sheppard and Meitner Citation2005), sometimes with the possibility of changing viewpoints at “several frames per seconds” (Lim and Honjo Citation2003) or using animations based on predefined camera flight paths (Dunbar et al. Citation2004; Wang et al. Citation2006). A further approach involved a collection of omnidirectional photographs taken in a real forest, which then enabled the user to virtually (using a computer) walk through the forest on a predefined path (Abe et al. Citation2005). Interactivity with the forest visualizations improved in these years, leading to visualizations where flying through the forest was possible in real-time (Pretzsch et al. Citation2008; Cournède et al. Citation2009).

In subsequent years, the immersiveness of forest visualizations started to increase through the use of multiple screens to create a 135° (Stock and Bishop Citation2006) or 260° (Boukherroub et al. Citation2018) experience, or through the CAVE system (Fujisaki et al. Citation2008; Fabrika et al. Citation2018; Fabrika Citation2021). In the past five years, immersiveness was enhanced further by the introduction of head-mounted displays (HMDs; Astner Citation2018; Botev and Viegas Citation2020; Holopainen et al. Citation2020; Huang et al. Citation2021; Chandler et al. Citation2022).

In many of the most recent publications, a game engine was used for the visualization of the forest, a concept that was already proposed 20 years ago (e.g. Herwig and Paar Citation2002) but could not be implemented at that point, mainly for technical reasons. Besides the numerous recent publications about forest visualization, there is also at least one commercial virtual forest project, where VR can be used to evaluate forest treatment strategies in Finnish forests (https://www.metsagroup.com/, accessed 13 March 2023).

Traditional screen-based visualization

The visualization of 3D data is naturally conducted using digital screens, which are currently the most common conduit for digital data (Klippel et al. Citation2019). The earliest example of 3D data rendering using computers and a 2D screen display is the Sketchpad software (Sutherland Citation1964), which can be used to draw simple 3D primitives. In modern 3D rendering, the graphics processing unit (GPU) plays an important role. By using a dedicated GPU instead of a computer’s processor or central processing unit (CPU), the amount of rendered data can be increased significantly and rendering can be completed much faster (Palha et al. Citation2017). Two modern graphics APIs are OpenGL (Johansson et al. Citation2015) and DirectX (Baek and Yoo Citation2020). Operations within the 3D space also require interaction with the GPU, with the two most common interfaces being NVIDIA’s CUDA and the open standard OpenCL (Ghorpade et al. Citation2012).

While greater 3D rendering capability has been the most prominent development of the last decade, driven by demand from the entertainment industry, other 3D applications have also seen important developments, such as the rendering of point clouds. For example, the Point Cloud Library (PCL; Rusu and Cousins Citation2011) and the Open3D library (Zhou et al. Citation2018), both written in C++, enable users to benefit from GPU-aided 3D rendering of point clouds. In the Python language, Open3D and the Point Cloud Processing Toolkit (PPTK; https://github.com/heremaps/pptk, accessed 10 March 2023) are two notable open-source examples.
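Independent of these libraries, point clouds are commonly exchanged as plain ASCII XYZ text, one point per line, a format that CloudCompare and the libraries above can all read. A minimal stdlib-only reader as a sketch; the assumption that extra columns (e.g. RGB) may follow the coordinates is illustrative, as the format has no single strict specification.

```python
import io

def read_xyz(stream):
    """Parse ASCII XYZ point cloud data: 'x y z [extra columns]' per line.
    Lines with fewer than three fields are skipped."""
    points = []
    for line in stream:
        parts = line.split()
        if len(parts) >= 3:
            points.append(tuple(float(v) for v in parts[:3]))
    return points

# Illustrative two-point cloud; the second line carries RGB extras.
sample = io.StringIO("""1.0 2.0 3.0
4.5 5.5 6.5 255 0 0
""")
pts = read_xyz(sample)
print(len(pts), pts[0])  # → 2 (1.0, 2.0, 3.0)
```

Binary formats such as LAS/LAZ are preferred for large scans, but the ASCII form remains the lowest common denominator between tools.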

Several ready-to-use software packages with user interfaces also exist. CloudCompare (http://www.danielgm.net/cc/, accessed 10 March 2023) is one example of a free software that has seen numerous applications in the 3D domain in general. Potree (Schütz et al. Citation2020) is another 3D visualizer which uses a web-based approach for portability. While these two examples have always been general-use 3D visualizers, other solutions exist which are more geared toward the forestry sector. For example, 3DForest (https://www.3dforest.eu/, accessed 10 March 2023) and SimpleForest (https://simpleforest.org/, accessed 10 March 2023) render tree point clouds while providing specific functions for their analysis, e.g. computation of the tree diameter or volume.

Modern immersive visualization

Figure 4 shows the major distinctions in modern immersive visualization technologies. In this regard, VR may be considered a highly immersive way to experience digital forests. Users experience VR applications with a head-mounted display (headset) and can interact with the virtual environment via tracked controlling devices or hand tracking.

Figure 4. The differences between Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) within the context of virtual forests. AR adds multimedia content to the real world, usually with minimal interaction with the environment. VR immerses the user completely in a virtual environment. MR mixes AR and VR by enabling users to interact with the real world, the virtual world, and the multimedia augmentation simultaneously.

In contrast to conventional applications, VR absorbs the user directly into the visualization (Milgram and Kishino Citation1994). In VR, users are brought into a simulated reality with which they can interact. This mimics interactions with the real world more closely than conventional 2D or 3D approaches. The sense of immersion comes from the experience of being completely inside the virtual environment, where real-world movement is translated into movement in the virtual world. The commercial focus in VR is mainly on gaming platforms; however, there is great potential for VR applications in a vast variety of fields in academia, education, training, and communication, including in forestry (Mantovani et al. Citation2003; Dede Citation2009; Kavanagh et al. Citation2017; Huang et al. Citation2020).

Unlike VR, AR and MR do not try to immerse users or separate them from the real world (Figure 4). Instead, in the case of AR, the real world is enhanced by adding and overlaying virtual features. In MR, the real and virtual worlds are blended (Liu et al. Citation2017; Çöltekin et al. Citation2020). AR and MR could thus potentially be used in the forest to overlay data about individual trees, such as tree species, timber volume, and other metrics. Collectively, AR, VR, and MR are sometimes grouped under the umbrella term XR (extended reality).

Even though forms of XR have existed for about 40 years (Burdea Citation2003), recent technical advancements and a general decrease in cost for consumer headsets have made the technology accessible to the public. In particular, VR has been reported to enhance engagement and learning (Dede Citation2009), and the technology has been applied successfully to the treatment of mental health issues, such as combat-induced post-traumatic stress disorder and fear of flying (Lindner et al. Citation2017). Health improvements from forest visits have been explored extensively (Sonntag-Öström et al. Citation2014), and even virtual visits have been shown to improve wellbeing (Yu et al. Citation2018; Mostajeran et al. Citation2021). Finally, an immersive environment can provide a novel and intuitive way to look at complex 3D data (Matthews Citation2018), such as lidar point clouds from forest scans.

Developing an immersive 3D environment for XR brings challenges that differ from other types of digital applications. Entering a virtual world can lead to nausea and motion sickness when not executed properly, especially when movement is involved (Hettinger and Riccio Citation1992). Ergonomics and user wellbeing should therefore be considered in the development of VR applications.

Modern game engines often come with out-of-the-box VR development capabilities (Jerald et al. Citation2014), but adaptations are necessary due to the wide range of hardware available. The creation of large-scale realistic 3D environments, such as forests, requires a lot of time and skill from the developers, as well as computational power. There is always a trade-off between good graphics and performance. This calls for a careful examination as to how much realism and detail are needed for a given application.
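One common way to manage this realism-versus-performance trade-off is to reduce the point budget before rendering. The sketch below is a hypothetical, engine-agnostic Python example (the function name and parameters are our own, not taken from any specific tool): it downsamples a point cloud on a voxel grid, keeping one centroid per voxel, so larger voxels yield fewer points to render at the cost of geometric detail.

```python
# Voxel-grid downsampling: bin points into cubic voxels of a chosen size and
# keep one representative point (the centroid) per voxel. Illustrative only.
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """points: iterable of (x, y, z) tuples; returns one centroid per voxel."""
    bins = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        bins[key].append((x, y, z))
    centroids = []
    for pts in bins.values():
        n = len(pts)
        centroids.append((sum(p[0] for p in pts) / n,
                          sum(p[1] for p in pts) / n,
                          sum(p[2] for p in pts) / n))
    return centroids

# A cloud of 4 points collapses into 2 representatives at 1 m resolution:
cloud = [(0.1, 0.1, 0.1), (0.4, 0.2, 0.3), (2.5, 2.5, 2.5), (2.6, 2.4, 2.5)]
print(len(voxel_downsample(cloud, 1.0)))  # 2
```

In practice this is done with optimized libraries or on the GPU, and several levels of detail can be precomputed from different voxel sizes to suit the target application.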

While these game engines employ essentially the same rendering techniques as classical 2D screen rendering, the setup of the XR system enables an interactive environment for the user. The user may experience the rendered point cloud in real time, although computationally the 3D object remains static and thus does not require continuous re-rendering. This may change when the system is faced with larger point clouds, for example lidar data from entire forests; given the limited computing power of XR tools, this is a particular challenge. Several studies have addressed this problem with various strategies, for example by harnessing the GPU’s full capacity or by subdividing the data into quadtrees (Palha et al. Citation2017) or octrees (Kharroubi et al. Citation2019).
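The octree subdivision strategy can be sketched as follows. This is a minimal illustrative Python example, not the implementation used in the cited studies: each node holds at most a fixed number of points and splits its cubic extent into eight children when full, so a renderer can stream only the nodes that intersect the current view.

```python
# Minimal point cloud octree. Each node covers a cube (center, half_size);
# when it exceeds `capacity` points, it subdivides into eight octants and
# pushes its points down. Illustrative sketch, not production code.
class OctreeNode:
    def __init__(self, center, half_size, capacity=4):
        self.center, self.half_size, self.capacity = center, half_size, capacity
        self.points = []
        self.children = None  # lazily created list of 8 child nodes

    def insert(self, p):
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append(p)
                return
            self._subdivide()
        self._child_for(p).insert(p)

    def _subdivide(self):
        cx, cy, cz = self.center
        h = self.half_size / 2
        self.children = [OctreeNode((cx + dx * h, cy + dy * h, cz + dz * h),
                                    h, self.capacity)
                         for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)]
        for q in self.points:          # redistribute stored points
            self._child_for(q).insert(q)
        self.points = []

    def _child_for(self, p):
        # Octant index: one bit per axis, matching the order built above.
        idx = (4 if p[0] >= self.center[0] else 0) \
            + (2 if p[1] >= self.center[1] else 0) \
            + (1 if p[2] >= self.center[2] else 0)
        return self.children[idx]

# Three points with capacity 2 force the root of a 20 m cube to split:
root = OctreeNode((0.0, 0.0, 0.0), 10.0, capacity=2)
for p in [(1.0, 1.0, 1.0), (2.0, 2.0, 2.0), (-3.0, -3.0, -3.0)]:
    root.insert(p)
print(root.children is not None)  # True
```

Production viewers pair such a hierarchy with view-frustum culling and level-of-detail selection, so that only the nodes visible to the headset are loaded at any moment.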

Applications of virtual forests

Virtual forests as a tool for education and training

Virtual forests could be of high value for education and training purposes. Visualizing the decisions made in the field makes it easier to assess the consequences of alternative decisions. This technique is ready for use, for example in marteloscopes, which have been installed all over Europe in recent years. A marteloscope is a delimited forest stand in which all trees are numbered, mapped, and surveyed. Training sessions are conducted by letting people walk through the physical forest and mark trees for cutting, identify possible habitat trees, or collect other relevant information (Krumm et al. Citation2019; Thormann et al. Citation2019). Reports can then be generated about the types of trees selected, tree diameter distribution, and other characteristics of the selections made, and individual choices can be discussed in a group. For some marteloscopes, predictions of future development based on silvicultural models are available. However, so far the only way of presenting the resulting data has been via graphs and reports, and recently via 360° cameras. Connecting this approach to VR would allow people to experience the impacts of management interventions directly and in an immersive way, leading to a better understanding of forestry (Figure 5).

Figure 5. Illustration of a marteloscope represented in Virtual Reality (VR) using the Unity game engine (https://unity.com/, accessed 12 June 2023).

Another application of VR is in forest engineering, i.e. in simulators. For example, VR could be helpful for manual activities, such as pruning for value enhancement using chainsaws, or in the logging profession. It is important that young people entering the profession as machine operators have the skills needed to operate the equipment safely and efficiently. Harvesters and forwarders, for instance, are highly automated, and operators need to be familiar with them to avoid accidents and to cut trees properly without damaging soils or the remaining stand.

In Wang et al. (Citation2009a), 3D parametric data were used to model optimal bucking operations. Meanwhile, Wang and Shan (Citation2009b) generated 3D representations of trees to represent forest stands and used the result in a decision-support system. In both of these applications, the 3D data were parametric models generated from manually measured tree parameters. Lin and Wang (Citation2012) used a similar approach to optimize log processing, noting that TLS data were beginning to be used and that future integration of such 3D scanning methods was planned. More recently, Pichler et al. (Citation2017) employed drone- and TLS-based surveys to generate a virtual forest, which was then used to support tree marking for timber harvesting.

Simulators are highly beneficial in the training of forest machine operators (Ranta Citation2009). As harvesters and forwarders are expensive hardware, simulators offer a stress-free, controllable environment for gaining practice in machine control and proper handling techniques. A study by Ranta (Citation2009) showed that introducing simulator training clearly added value to harvester operator training. Moreover, continuous cover forest management is applied in many countries, which leads to diverse and structured forests that are much more complex for machine operators; training simulators are therefore of high value in this setting.

Virtual forests as a tool for forest management

Visiting forests virtually opens several possibilities in forest management and operational forestry. In typical forestry practice, forest owners, foresters, buyers or sellers of timber, logging operators, and other stakeholders all need to make site visits to the same stands, often multiple times within a short time window. Accurate virtual representations of forests could be shared among these stakeholders, reducing the need for frequent in situ visits and increasing the level of information and awareness. Furthermore, the digital twin of the forest is always easily accessible. In Figure 6, a point cloud of a forest is visualized in a web-based user interface. The left map shows a canopy height model overlaid with detected individual trees, visualized as points, where color indicates tree species and point size represents tree dimensions in the mapped forest. For further detail, a tile-based 3D representation of the forest can be viewed as a high-resolution 3D point cloud by clicking an area on the left map, allowing users to inspect the digital twin.

Figure 6. 3D point cloud visualized in web-based User Interface (courtesy of PreFor Oy).
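Such tile-based viewing can be sketched as a simple mapping from a clicked map coordinate to a tile index. The Python function below is purely illustrative and assumes a regular grid of square tiles in a projected (metric) coordinate system; the actual tiling scheme of the system shown in Figure 6 may differ.

```python
# Map a clicked map coordinate to the index of the point cloud tile to load,
# assuming square tiles on a regular grid. Names are hypothetical.
def tile_key(x, y, origin, tile_size):
    """Return the (column, row) index of the tile containing (x, y)."""
    ox, oy = origin
    return (int((x - ox) // tile_size), int((y - oy) // tile_size))

# A click at (340, 120) in a forest tiled into 100 m squares from (0, 0):
print(tile_key(340.0, 120.0, (0.0, 0.0), 100.0))  # (3, 1)
```

The returned key can then be used to fetch only that tile’s high-resolution point cloud, keeping the web viewer responsive for arbitrarily large forests.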

Virtual representations also make it possible to view forests from multiple perspectives and at different scales. This can help, for example, in planning logging operations where decision making occurs at several scales, from individual trees to the landscape. Different forest management and logging scenarios can also be compared virtually before any tree is cut, for example regarding how thinning intensity or thinning method affects the surrounding landscape.

Due to the geospatial nature of state-of-the-art 3D scanning technologies, another logical application for the data is in performing spatial analysis. Virtual forests may be used, for example, in planning timber hauling operations (Picchi et al. Citation2022) or determining the range of forest fire hazard (Jaiswal et al. Citation2002).
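A minimal example of this kind of spatial analysis is a buffer query over the mapped trees, for instance selecting all trees within a given distance of a hazard point. The Python sketch below is purely illustrative (function and variable names are our own); real fire-risk zoning, as in Jaiswal et al. (2002), combines many more data layers in a GIS.

```python
# Buffer query: which mapped trees lie within `radius` of a hazard point?
import math

def trees_within(trees, center, radius):
    """trees: dict of id -> (x, y); returns ids within `radius` of `center`."""
    cx, cy = center
    return [tid for tid, (x, y) in trees.items()
            if math.hypot(x - cx, y - cy) <= radius]

trees = {"t1": (10.0, 10.0), "t2": (60.0, 10.0), "t3": (12.0, 14.0)}
print(trees_within(trees, (10.0, 10.0), 20.0))  # ['t1', 't3']
```

Because the 3D data are georeferenced, the same query can be run against tree positions extracted from lidar, or combined with terrain and road layers for hauling analyses.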

Virtual forests as a tool for communication among stakeholders

Virtual forests may also be used as a tool for more general communication. For example, Huang et al. (Citation2021) used a digital forest environment to simulate climate change scenarios. Such applications could help disseminate key messages to larger audiences in a more immersive manner. Virtual forests have also been tested for therapeutic purposes, namely how immersing someone in a 3D-simulated forest environment may influence their stress level in the same manner as more conventional forest bathing or sylvotherapy (Mattila et al. Citation2020; Hejtmánek et al. Citation2022).

We also acknowledge the great potential of virtual forests to contribute to enhanced communication between stakeholders. Public debates about forest management and targeted ecosystem services have shown that there are multiple stakeholders, often having varying interests and targets but also different levels of forest expertise. For example, if oak is envisaged as a dominant tree species in the next generation, it is necessary to conduct intensive management interventions. These might easily be misinterpreted as clear cuts, which are often not accepted by the public. Visualizing the potential development of an oak forest under alternative management scenarios offers various stakeholders a way to visualize and better understand why a certain decision has been made. Considering the growing need for biodiversity and ecosystem services, it is becoming increasingly important to bring stakeholders together.

Discussion

Based on the literature study described in the previous sections, several interesting trends and challenges emerge. When looking at the reviewed work from a chronological perspective, a definite democratization of 3D recording techniques, and by extension the use of 3D data in forestry, can be observed. Indeed, in the span of around 10–15 years the accessibility of 3D technologies, such as lidar, has increased considerably. This has been aided greatly by the miniaturization of lidar sensors, which can now fit into more accessible and lower-cost drones. Miniaturization has also played a role in TLS, although no comparable drop in cost has occurred. Smaller and lighter TLSs, coupled with a more advanced registration process, have opened the way for the use of this technology in forests. Developments in MLS technology have also benefited the forestry domain by enabling a faster, almost real-time, data acquisition process.

The use of drones, and by extension photogrammetry, has also increased markedly in the forestry domain. In this regard, several studies have focused on finding low-cost solutions for forest mapping (Mokroš et al. Citation2021; Fol et al. Citation2022; Murtiyoso et al. Citation2022). This push toward low-cost mapping has also been aided by the development of old and new technologies, such as spherical photogrammetry and solid-state lidar. The latter also highlights an interesting trend in forest data representation and communication: the preference for easy-to-use and user-friendly mediums, such as mobile phones. Indeed, many researchers prefer AR over other XR methods, i.e. VR or MR, for this very reason.

The use of 3D measurement techniques to create the forest’s digital environment has also shown a trend toward metric-based forest mapping, notably for national forest inventories. In turn, these geospatial data have led to efforts to create an integrated digital forest information system, in GIS or other forms. Data visualization continues to be done in a screen-based environment, as is usual for any 3D data, but an emerging trend can also be seen in the use of XR as an immersive alternative. At the moment, VR seems to have the edge in terms of the volume of research dedicated to its use in forestry, but a future migration to MR seems likely, if not imminent. Regarding point clouds versus meshes, there seems to be agreement that each method has advantages and disadvantages and that the best option depends on the application. Researchers concerned with forest modeling may prefer meshes or geometric primitives, such as QSMs, for their lightweight nature, whereas point clouds remain popular with practitioners and users in general. In the future, the two data formats may be combined and applied either individually or simultaneously according to the user’s needs. Table 2 presents a quick summary of a few examples of these applications as identified in the previous section.

Table 2. Summary of a few examples of the applications of a virtual forest identified in this study.

As is the case with data science in general, interest in and use of AI are increasing. Indeed, interaction between multiple disciplines will be inevitable in the future, including in the forestry domain. The concept of a virtual forest is already gaining traction, as demonstrated in this paper, which means that interaction with AI is only logical. AI is currently used mainly for data-processing purposes, e.g. point cloud segmentation and object identification. However, judging from trends in other domains, there is great potential for other applications, such as modeling.

The review presented in this paper also highlights several challenges for the concept of a virtual forest. While the concept itself refers to a system to support forestry applications, its implementation is very much reliant on other disciplines, with computer science and geomatics being the two most prominent fields identified in the literature. Despite the ongoing increase in multidisciplinarity, a definite gap still exists between the distinct scientific disciplines involved. The identification of requirements in forestry and geomatics may involve different, albeit equally important, aspects, which may confuse an external observer. A concrete example is the definition of scale or level of detail. In this regard, a formal definition or even a standardization (as in ) may help to convey the same ideas across different stakeholders. Admittedly, this is not an easy discussion, and scientific debate on the important definitions is inevitable and necessary, but also welcome.

Regardless of the technical and scientific aspects involved in a virtual forest, from the perspective of a potential user and/or forest stakeholder, the issue of acceptability is a challenge identified in the reviewed studies. A virtual forest may present a completely different paradigm to users who are more familiar with “hands-on” approaches. Visualization methods such as XR present a very interesting and vast potential for forest studies, but in the current state of technology they are not as widespread as previously forecast, thus limiting user familiarity and, by extension, ease of use. The impact of virtual environments on user behavior, such as decision making (Oberdörfer et al. Citation2021), is another challenging research question that should be addressed to truly validate the advantages of migrating to a virtual environment.

Finally, the use of 3D technology in complex and heterogeneous environments presents a particular challenge. As described previously, a myriad of 3D technologies now exist to support the creation of digital forests, yet their applications are usually restricted to certain uses. Considering the rising popularity of forest 3D reconstruction, harmonizing the use of the different sensors involved and, more importantly, integrating and choosing the best sensor to deliver the best overall result are topics that we predict will be investigated extensively in the near future. The choice of sensors often depends primarily on area size, geometric quality and/or point cloud density, and sensor cost.

Conclusions

We identified an ongoing trend of increasing digital data use in forestry, specifically geospatially faithful 3D data. With this trend, an integration or at least standardization of the different paradigms, technologies, and stakeholders is inevitable. This is already the case in other similar domains, such as the AEC (architecture, engineering and construction) sector, as evidenced by the rising popularity of building information models (BIMs). The use of AI in the processing of 3D forest data is another spreading trend, with no signs of slowing down. Regarding visualization, screen-based representation is still dominant, but immersive techniques such as XR are gaining traction. The main challenges identified are directly related to these trends: how to achieve an integrated system, and how such a virtual forest system will change interactions between forest stakeholders. While many advances have been made in recent years, numerous questions still need to be answered regarding digital data and its use for the virtual representation of forests and forested landscapes. Addressing such open questions will be critical to ensure that these technologies can be used optimally by researchers, practitioners, and stakeholders in the future. The topic of virtual forests thus has very large potential to serve the forestry domain by presenting data effectively and helping stakeholders make important decisions regarding our forests. This paper presents an attempt to systematize existing knowledge in the domain and thereby to help other researchers understand the current state of the art of the use of 3D data in forestry.

Acknowledgements

We thank the European Union for funding the Horizon Europe project “Digital Analytics and Robotics for Sustainable Forestry” (Digiforest) [101070405].

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the HORIZON EUROPE Framework Programme [101070405].

References

  • Abe M, Yoshimura T, Koizumi S, Hasegawa N, Osaki T, Yasukawa N, Koba K, Moriya K, Sakai T. 2005. Virtual forest: design and evaluation of a walk-through system for forest education. J For Res. 10(3):189–197. doi:10.1007/s10310-004-0131-x.
  • Abegg M, Kükenbrink D, Zell J, Schaepman ME, Morsdorf F. 2017. Terrestrial laser scanning for forest inventories—tree diameter distribution and scanner location impact on occlusion. Forests. 8(6):184. doi:10.3390/f8060184.
  • Astner T. 2018. Forest thinning in VR: a VR application with the theme of forest thinning. [Bachelor’s thesis]. Mid Sweden University.
  • Baek N, Yoo KH. 2020. An accelerated rendering scheme for massively large point cloud data. J Supercomput. 76(10):8313–8323. doi:10.1007/s11227-019-03114-y.
  • Bates-Brkljac N, Dupras M. 2001. The virtual forest: integrating VRML worlds and generative music. In: Proceedings of 4th International Conference on Generative Design, Politecnico di Milano University. Milan: CUMINCAD. p. 25–30.
  • Bergen SD, McGaughey RJ, Fridley JL. 1998. Data-driven simulation, dimensional accuracy and realism in a landscape visualization tool. Landsc Urban Plan. 40(4):283–293. doi:10.1016/S0169-2046(97)00091-1.
  • Biljecki F, Ledoux H, Stoter J. 2016. An improved LOD specification for 3D building models. Comput Environ Urban Syst. 59:25–37. doi:10.1016/j.compenvurbsys.2016.04.005.
  • Bonnaffe F, Jennette D, Andrews J. 2007. A method for acquiring and processing ground-based lidar data in difficult-to-access outcrops for use in three-dimensional, virtual-reality models. Geosphere. 3(6):501. doi:10.1130/GES00104.1.
  • Botev J, Viegas A. 2020. Forest SaVR - A Virtual-Reality Application to Raise Awareness of Deforestation. In: Weyers B, Lürig C, Zielasko D, editors. GI VR/AR Workshop; Sep 24–25; Trier. Trier (Germany): Gesellschaft für Informatik e.V.
  • Boukherroub T, D’amours S, Rönnqvist M. 2018. Sustainable forest management using decision theaters: rethinking participatory planning. J Clean Prod. 179:567–580. doi:10.1016/j.jclepro.2018.01.084.
  • Bozgeyikli E, Raij A, Katkoori S, Dubey R. 2016. Point & teleport locomotion technique for virtual reality. In: Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play. Austin, Texas, USA: Association for Computing Machinery, New York (NY), United States. p. 205–216.
  • Brockerhoff EG, Barbaro L, Castagneyrol B, Forrester DI, Gardiner B, González-Olabarria JR, Lyver PO, Meurisse N, Oxbrough A, Taki H, et al. 2017. Forest biodiversity, ecosystem functioning and the provision of ecosystem services. Biodivers Conserv. 26(13):3005–3035. doi:10.1007/s10531-017-1453-2.
  • Buckley DJ, Ulbricht C, Berry J. 1998. The virtual forest: advanced 3-D visualization techniques for forest management and research. In: ESRI 1998 User Conference. Toronto, Ontario, Canada. p. 27–31.
  • Burdea G. 2003. Virtual reality technology. 2nd ed. Hoboken (N.J): Wiley-Interscience.
  • Burt A, Disney M, Calders K. 2019. Extracting individual trees from lidar point clouds using treeseg. Methods Ecol Evol. 10(3):438–445.
  • Cabo C, Del Pozo S, Rodríguez-Gonzálvez P, Ordóñez C, González-Aguilera D. 2018. Comparing terrestrial laser scanning (TLS) and wearable laser scanning (WLS) for individual tree modeling at plot level. Remote Sens. 10(4):540. doi:10.3390/rs10040540.
  • Calders K, Burt A, Origo N, Disney M, Nightingale J, Raumonen P, Lewis P. 2016. Large-area virtual forests from terrestrial laser scanning data. In: 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS). Beijing, China: IEEE. p. 1765–1767.
  • Chandler T, Richards AE, Jenny B, Dickson F, Huang J, Klippel A, Neylan M, Wang F, Prober SM. 2022. Immersive landscapes: modelling ecosystem reference conditions in virtual reality. Landsc Ecol. 37(5):1293–1309. doi:10.1007/s10980-021-01313-8.
  • Çöltekin A, Lochhead I, Madden M, Christophe S, Devaux A, Pettit C, Lock O, Shukla S, Herman L, Stachoň Z, et al. 2020. Extended reality in spatial sciences: a review of research challenges and future directions. ISPRS Int J Geoinf. 9(7):439. doi:10.3390/ijgi9070439.
  • Cournède P-H, Guyard T, Bayol B, Griffon S, De Coligny F, Borianne P, Jaeger M, De Reffye P. 2009. A forest growth simulator based on functional-structural modelling of individual trees. In: Li B, Jaeger M, Guo Y, editors. 2009 Third International Symposium on Plant Growth Modeling, Simulation, Visualization and Applications. Beijing: IEEE. p. 34–41.
  • Dalponte M, Coops NC, Bruzzone L, Gianelle D. 2009. Analysis on the use of multiple returns LiDAR data for the estimation of tree stems volume. IEEE J Sel Top Appl Earth Obs Remote Sens. 2(4):310–318. doi:10.1109/JSTARS.2009.2037523.
  • Daniel TC, Meitner MM. 2001. Representational validity of landscape visualizations: the effects of graphical realism on perceived scenic beauty of forest vistas. J Environ Psychol. 21(1):61–72. doi:10.1006/jevp.2000.0182.
  • Dede C. 2009. Immersive interfaces for engagement and learning. Science. 323(5910):66–69. doi:10.1126/science.1167311.
  • Dunbar MD, Moskal LM, Jakubauskas ME. 2004. 3D visualization for the analysis of forest cover change. Geocarto Int. 19(2):103–112. doi:10.1080/10106040408542310.
  • Fabrika M. 2021. Interactive procedural forest in game engine environment as background for forest modelling. In: Beiträge zue Jahrestagung 2021 DVFFA - Sektion Ertragskunde. DVFFA. p. 86–95.
  • Fabrika M, Valent P, Scheer L. 2018. Thinning trainer based on forest-growth model, virtual reality and computer-aided virtual environment. Environ Model Softw. 100:11–23. doi:10.1016/j.envsoft.2017.11.015.
  • Fol CR, Murtiyoso A, Griess VC. 2022. Evaluation of azure kinect derived point clouds to determine the presence of microhabitats on single trees based on the swiss standard parameters. Int Arch Photogram Remote Sens Spatial Inform Sci. 43(B2–2022):989–994.
  • Frey J, Kovach K, Stemmler S, Koch B. 2018. UAV photogrammetry of forests as a vulnerable process. a sensitivity analysis for a structure from motion RGB-image pipeline. Remote Sens. 10(6):912. doi:10.3390/rs10060912.
  • Fujisaki I, Evans DL, Moorhead RJ, Irby DW, Mohammadi-Aragh MJ, Roberts SD, Gerard PD. 2008. Stand assessment through lidar-based forest visualization using immersive virtual environment technology. For Sci. 54(1):1–7. doi:10.1093/forestscience/54.1.1.
  • Ghorpade J, Parande J, Kulkarni M, Bawaskar A. 2012. GPGPU processing in CUDA architecture. Adv Comput Int J. 3(1):105–120.
  • Gollob C, Ritter T, Kraßnitzer R, Tockner A, Nothdurft A. 2021. Measurement of forest inventory parameters with Apple iPad pro and integrated LiDAR technology. Remote Sens. 13(16):3129. doi:10.3390/rs13163129.
  • Granshaw SI. 2020. Photogrammetric terminology: fourth edition. Photogramm Record. 35(170):143–288. doi:10.1111/phor.12314.
  • Hackenberg J, Spiecker H, Calders K, Disney M, Raumonen P. 2015. SimpleTree —An efficient open source tool to build tree models from TLS clouds. Forests. 6(11):4245–4294. doi:10.3390/f6114245.
  • Hejtmánek L, Hůla M, Herrová A, Surový P. 2022. Forest digital twin as a relaxation environment: a pilot study. Front Virtual Real. 3:1033708. doi:10.3389/frvir.2022.1033708.
  • Herwig A, Paar P. 2002. Game engines: tools for landscape visualization and planning. GIS Trends in Environmental Planning: Internet GIS and Web Mapping, Mobile- and Virtual GIS; May 2; Anhalt, Germany. Wichmann. p. 172.
  • Hettinger LJ, Riccio GE. 1992. Visually induced motion sickness in virtual environments. Presence. 1(3):306–310. doi:10.1162/pres.1992.1.3.306.
  • Hirschmüller H. 2011. Semi-global matching motivation, developments and applications. Photogrammetric Week; Stuttgart, Germany. p. 173–184.
  • Holopainen J, Mattila O, Pöyry E, Parvinen P. 2020. Applying design science research methodology in the development of virtual reality forest management services. For Policy Econ. 116:102190. doi:10.1016/j.forpol.2020.102190.
  • Hristova H, Abegg M, Fischer C, Rehush N. 2022. Monocular depth estimation in forest environments. Int Arch Photogram Remote Sens Spatial Inform Sci. 43(B2–2022):1017–1023.
  • Huang J, Lucash MS, Scheller RM, Klippel A. 2020. Walking through the forests of the future: using data-driven virtual reality to visualize forests under climate change. Int J Geograph Inform Sci. 35(6):1155–1178. doi:10.1080/13658816.2020.1830997.
  • Huang J, Lucash MS, Scheller RM, Klippel A. 2021. Walking through the forests of the future: using data-driven virtual reality to visualize forests under climate change. Int J Geograph Inform Sci. 35(6):1155–1178.
  • Hyyppa J. 1999. Detecting and estimating attributes for single trees using laser scanner. Photogramm J Finland. 16:27–42.
  • Jaiswal RK, Mukherjee S, Raju KD, Saxena R. 2002. Forest fire risk zone mapping from satellite imagery and GIS. Int J Appl Earth Observ Geoinf. 4(1):1–10. doi:10.1016/S0303-2434(02)00006-5.
  • Jerald J, Giokaris P, Woodall D, Hartbolt A, Chandak A, Kuntz S. 2014. Developing virtual reality applications with unity. In: 2014 IEEE Virtual Reality (VR); Mar 29–Apr 2; Minneapolis, MN, USA. p. 1–3.
  • Johansson M, Roupé M, Bosch-Sijtsema P. 2015. Real-time visualization of building information models (BIM). Autom Constr. 54:69–82. doi:10.1016/j.autcon.2015.03.018.
  • Kaartinen H, Hyyppä J, Yu X, Vastaranta M, Hyyppä H, Kukko A, Holopainen M, Heipke C, Hirschmugl M, Morsdorf F, et al. 2012. An international comparison of individual tree detection and extraction using airborne laser scanning. Remote Sens. 4(4):950–974. doi:10.3390/rs4040950.
  • Kalacska M, Arroyo-Mora JP, Lucanus O. 2021. Comparing UAS LiDAR and structure-from-motion photogrammetry for peatland mapping and virtual reality (VR) visualization. Drones. 5(2):36. doi:10.3390/drones5020036.
  • Karjalainen E, Tyrväinen L. 2002. Visualization in forest landscape preference research: a Finnish perspective. Landsc Urban Plan. 59(1):13–28. doi:10.1016/S0169-2046(01)00244-4.
  • Kavanagh S, Luxton-Reilly A, Wuensche B, Plimmer B. 2017. A systematic review of virtual reality in education. Themes Sci Technol Educ. 10(2):85–119.
  • Kharroubi A, Hajji R, Billen R, Poux F. 2019. Classification and integration of massive 3D points clouds in a virtual reality (VR) environment. Int Arch Photogram Remote Sens Spatial Inform Sci. XLII-2/W17:165–171. doi:10.5194/isprs-archives-XLII-2-W17-165-2019.
  • Klippel A, Zhao J, Jackson KL, La Femina P, Stubbs C, Wetzel R, Blair J, Wallgrün JO, Oprean D. 2019. Transforming earth science education through immersive experiences: delivering on a long held promise. J Educ Comput Res. 57(7):1745–1771. doi:10.1177/0735633119854025.
  • Krumm F, Lachat T, Schuck A, Bütler R, Kraus D. 2019. Marteloskope als Trainingstools zur Erhaltungund Förderung von Habitatbäumen im Wald. Schweiz Z Forstwes. 170(2):86–93. doi:10.3188/szf.2019.0086.
  • Kükenbrink D, Gardi O, Morsdorf F, Thürig E, Schellenberger A, Mathys L. 2021. Above-ground biomass references for urban trees from terrestrial laser scanning data. Ann Bot. 128(6):709–724. doi:10.1093/aob/mcab002.
  • Kükenbrink D, Marty M, Bösch R, Ginzler C. 2022. Benchmarking laser scanning and terrestrial photogrammetry to extract forest inventory parameters in a complex temperate forest. Int J Appl Earth Observ Geoinf. 113(March):102999. doi:10.1016/j.jag.2022.102999.
  • Laksono D, Aditya T. 2019. Utilizing a game engine for interactive 3D topographic data visualization. ISPRS Int J Geoinf. 8(8):361. doi:10.3390/ijgi8080361.
  • Lau A, Bentley LP, Martius C, Shenkin A, Bartholomeus H, Raumonen P, Malhi Y, Jackson T, Herold M. 2018. Quantifying branch architecture of tropical trees using terrestrial LiDAR and 3D modelling. Trees. 32(5):1219–1231. doi:10.1007/s00468-018-1704-1.
  • Li W, Guo Q, Jakubowski MK, Kelly M. 2012. A new method for segmenting individual trees from the LiDAR point cloud. Photogramm Eng Remote Sensing. 78(1):75–84. doi:10.14358/PERS.78.1.75.
  • Lim EM, Honjo T. 2003. Three-dimensional visualization forest of landscapes by VRML. Landsc Urban Plan. 63(3):175–186. doi:10.1016/S0169-2046(02)00189-5.
  • Lin W, Wang J. 2012. An integrated 3D log processing optimization system for hardwood sawmills in central Appalachia, USA. Comput Electron Agric. 82:61–74. doi:10.1016/j.compag.2011.12.014.
  • Lindner M, Fitzgerald JB, Zimmermann NE, Reyer C, Delzon S, van der Maaten E, Schelhaas M-J, Lasch P, Eggers J, van der Maaten-Theunissen M, et al. 2014. Climate change and European forests: what do we know, what are the uncertainties, and what are the implications for forest management? J Environ Manage. 146:69–83. doi:10.1016/j.jenvman.2014.07.030.
  • Lindner P, Miloff A, Hamilton W, Reuterskiöld L, Andersson G, Powers MB, Carlbring P. 2017. Creating state of the art, next-generation virtual reality exposure therapies for anxiety disorders using consumer hardware platforms: design considerations and future directions. Cogn Behav Ther. 46(5):404–420. doi:10.1080/16506073.2017.1280843.
  • Lintermann B, Deussen O. 1999. Interactive modeling of plants. IEEE Comput Graph Appl. 19(1):56–65. doi:10.1109/38.736469.
  • Liu D, Dede C, Huang R, Richards J. 2017. Virtual, augmented, and mixed realities in education. Singapore: Springer. p. 247.
  • Luetzenburg G, Kroon A, Bjørk AA. 2021. Evaluation of the Apple iPhone 12 Pro LiDAR for an application in geosciences. Sci Rep. 11(1):1–9. doi:10.1038/s41598-021-01763-9.
  • Mallet C, Bretar F. 2009. Full-waveform topographic lidar: state-of-the-art. ISPRS J Photogramm Remote Sens. 64(1):1–16. doi:10.1016/j.isprsjprs.2008.09.007.
  • Mantovani F, Castelnuovo G, Gaggioli A, Riva G. 2003. Virtual reality training for health-care professionals. Cyberpsychol Behav. 6(4):389–395. doi:10.1089/109493103322278772.
  • Matthews D. 2018. Virtual-reality applications give science a new dimension. Nature. 557(7703):127–128. doi:10.1038/d41586-018-04997-2.
  • Mattila O, Korhonen A, Pöyry E, Hauru K, Holopainen J, Parvinen P. 2020. Restoration in a virtual reality forest environment. Comput Human Behav. 107:106295. doi:10.1016/j.chb.2020.106295.
  • Mazzacca G, Grilli E, Cirigliano GP, Remondino F, Campana S. 2022. Seeing among foliage with LIDAR and machine learning: towards a transferable archaeological pipeline. In: 9th International Workshop 3D-ARCH; Mar 2–4; Mantova, Italy. p. 365–372.
  • McGaughey RJ. 1998. Techniques for visualizing the appearance of forestry operations. J For. 96(6):9–14.
  • Mildenhall B, Srinivasan PP, Tancik M, Barron JT, Ramamoorthi R, Ng R. 2021. NeRF: representing scenes as neural radiance fields for view synthesis. Commun ACM. 65(1):99–106. doi:10.1145/3503250.
  • Milgram P, Kishino F. 1994. A taxonomy of mixed reality visual displays. IEICE Trans Inform Syst. E77-D:1321–1329.
  • Mokroš M, Mikita T, Singh A, Tomaštík J, Chudá J, Wężyk P, Kuželka K, Surový P, Klimánek M, Zięba-Kulawik K, et al. 2021. Novel low-cost mobile mapping systems for forest inventories as terrestrial laser scanning alternatives. Int J Appl Earth Observ Geoinf. 104:102512. doi:10.1016/j.jag.2021.102512.
  • Mokroš M, Výbošťok J, Tomaštík J, Grznárová A, Valent P, Slavík M, Merganič J. 2018. High precision individual tree diameter and perimeter estimation from close-range photogrammetry. Forests. 9(11):696. doi:10.3390/f9110696.
  • Mostajeran F, Krzikawski J, Steinicke F, Kühn S. 2021. Effects of exposure to immersive videos and photo slideshows of forest and urban environments. Sci Rep. 11(1):1–14. doi:10.1038/s41598-021-83277-y.
  • Murtiyoso A, Grussenmeyer P, Landes T, Macher H. 2021. First assessments into the use of commercial-grade solid state lidar for low cost heritage documentation. Int Arch Photogramm Remote Sens Spatial Inform Sci. 43(B2-2021):599–604.
  • Murtiyoso A, Hristova H, Rehush N, Griess VC. 2022. Low-cost mapping of forest under-storey vegetation using spherical photogrammetry. Int Arch Photogramm Remote Sens Spatial Inform Sci. XLVIII-2/W1-2022:185–190.
  • Næsset E. 2002. Predicting forest stand characteristics with airborne scanning laser using a practical two-stage procedure and field data. Remote Sens Environ. 80(1):88–99. doi:10.1016/S0034-4257(01)00290-5.
  • Nex F, Gerke M, Remondino F, Przybilla H-J, Bäumker M, Zurhorst A. 2015. ISPRS benchmark for multi-platform photogrammetry. ISPRS Ann Photogramm Remote Sens Spatial Inform Sci. II-3/W4:135–142.
  • Nocerino E, Stathopoulou EK, Rigon S, Remondino F. 2020. Surface reconstruction assessment in photogrammetric applications. Sensors. 20(20):5863. doi:10.3390/s20205863.
  • Oberdörfer S, Heidrich D, Birnstiel S, Latoschik ME. 2021. Enchanted by your surrounding? Measuring the effects of immersion and design of virtual environments on decision-making. Front Virtual Real. 2:679277. doi:10.3389/frvir.2021.679277.
  • Ochmann S, Vock R, Wessel R, Klein R. 2016. Automatic reconstruction of parametric building models from indoor point clouds. Comput Graph. 54:94–103. doi:10.1016/j.cag.2015.07.008.
  • Orland B. 1988. Video imaging: a powerful tool for visualization and analysis. Landscape Archit. 78(5):78–88.
  • Orland B. 1994. Visualization techniques for incorporation in forest planning geographic information systems. Landsc Urban Plan. 30(1–2):83–97. doi:10.1016/0169-2046(94)90069-8.
  • Overpeck JT, Rind D, Goldberg R. 1990. Climate-induced changes in forest disturbance and vegetation. Nature. 343(6253):51–53. doi:10.1038/343051a0.
  • Palha A, Murtiyoso A, Michelin J-C, Alby E, Grussenmeyer P. 2017. Open source first person view 3D point cloud visualizer for large data sets. In: Ivan I, Singleton A, Horak J, Inspektor T, editors. The rise of big spatial data. Ostrava, Czech Republic: Springer; p. 27–39.
  • Park H, Lee D. 2019. Comparison between point cloud and mesh models using images from an unmanned aerial vehicle. Measurement. 138:461–466. doi:10.1016/j.measurement.2019.02.023.
  • Parkan M, Tuia D. 2015. Individual tree segmentation in deciduous forests using geodesic voting. In: 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS); Milan, Italy. p. 637–640.
  • Picchi G, Sandak J, Grigolato S, Panzacchi P, Tognetti R. 2022. Smart harvest operations and timber processing for improved forest management. In: Tognetti R, Smith M, Panzacchi P, editors. Climate-smart forestry in mountain regions. Vol. 40, Managing Forest Ecosystems. Cham, Switzerland: Springer; p. 317–359.
  • Pichler G, Poveda Lopez JA, Picchi G, Nolan E, Kastner M, Stampfer K, Kühmaier M. 2017. Comparison of remote sensing based RFID and standard tree marking for timber harvesting. Comput Electron Agric. 140:214–226. doi:10.1016/j.compag.2017.05.030.
  • Pretzsch H, Grote R, Reineking B, Rötzer TH, Seifert ST. 2008. Models for forest ecosystem management: a European perspective. Ann Bot. 101(8):1065–1087. doi:10.1093/aob/mcm246.
  • Ranta P. 2009. Added values of forestry machine simulator based training. In: Proceedings of the International conference on multimedia and ICT education; Apr 22–24; Lisbon, Portugal. p. 1–6.
  • Rehush N, Abegg M, Waser LT, Brändli U-B. 2018. Identifying tree-related microhabitats in TLS point clouds using machine learning. Remote Sens. 10(11):1735. doi:10.3390/rs10111735.
  • Remondino F, Rizzi A. 2010. Reality-based 3D documentation of natural and cultural heritage sites-techniques, problems, and examples. Appl Geomat. 2(3):85–100. doi:10.1007/s12518-010-0025-x.
  • Risse B, Mangan M, Stürzl W, Webb B. 2018. Software to convert terrestrial LiDAR scans of natural environments into photorealistic meshes. Environ Model Softw. 99:88–100. doi:10.1016/j.envsoft.2017.09.018.
  • Rupnik E, Pierrot-Deseilligny M, Delorme A. 2018. 3D reconstruction from multi-view VHR-satellite images in MicMac. ISPRS J Photogramm Remote Sens. 139:201–211. doi:10.1016/j.isprsjprs.2018.03.016.
  • Rusu RB, Cousins S. 2011. 3D is here: Point Cloud Library (PCL). In: 2011 IEEE International Conference on Robotics and Automation; Shanghai, China. p. 1–4.
  • Sankey T, Donager J, McVay J, Sankey JB. 2017. UAV lidar and hyperspectral fusion for forest monitoring in the southwestern USA. Remote Sens Environ. 195:30–43. doi:10.1016/j.rse.2017.04.007.
  • Schenk T. 2005. Introduction to photogrammetry. Columbus, OH, USA: The Ohio State University.
  • Schütz M, Ohrhallinger S, Wimmer M. 2020. Fast out-of-core octree generation for massive point clouds. Comput Graph Forum. 39(7):155–167. doi:10.1111/cgf.14134.
  • Sheppard SRJ, Meitner M. 2005. Using multi-criteria analysis and visualisation for sustainable forest management planning with stakeholder groups. For Ecol Manage. 207(1–2):171–187. doi:10.1016/j.foreco.2004.10.032.
  • Sonntag-Öström E, Nordin M, Lundell Y, Dolling A, Wiklund U, Karlsson M, Carlberg B, Järvholm LS. 2014. Restorative effects of visits to urban and forest environments in patients with exhaustion disorder. Urban Green. 13(2):344–354. doi:10.1016/j.ufug.2013.12.007.
  • Stathopoulou EK, Remondino F. 2020. Multi-view stereo with semantic priors. Int Arch Photogramm Remote Sens Spatial Inform Sci. XLII-2/W15:1135–1140.
  • Stava O, Pirk S, Kratt J, Chen B, Měch R, Deussen O, Benes B. 2014. Inverse procedural modelling of trees. Comput Graph Forum. 33(6):118–131. doi:10.1111/cgf.12282.
  • Stock C, Bishop ID. 2006. Linking GIS with real-time visualisation for exploration of landscape changes in rural community workshops. Virtual Real. 9(4):260–270. doi:10.1007/s10055-006-0023-9.
  • Stoltman AM, Radeloff VC, Mladenoff DJ. 2004. Forest visualization for management and planning in Wisconsin. J For. 102(4):7–13.
  • Stoltman AM, Radeloff VC, Mladenoff DJ. 2007. Computer visualization of pre-settlement and current forests in Wisconsin. For Ecol Manage. 246(2–3):135–143. doi:10.1016/j.foreco.2007.02.029.
  • Sutherland IE. 1964. Sketchpad: a man-machine graphical communication system. Cambridge, MA, USA: Sage Publications.
  • Szabó C, Korečko Š, Sobota B. 2012. Data processing for virtual reality. In: Advances in robotics and virtual reality. Heidelberg, Germany: Springer Berlin Heidelberg; p. 333–361.
  • Talton JO, Lou Y, Lesser S, Duke J, Měch R, Koltun V. 2011. Metropolis procedural modeling. ACM Trans Graph. 30(2):11.
  • Tang H, Bishop ID. 2002. Integration methodologies for interactive forest modelling and visualization systems. Cartogr J. 39(1):27–35. doi:10.1179/caj.2002.39.1.27.
  • Thormann J-J, Allenspach-Schliessbach K, Bugmann H, Frehner M, Junod P, Rosset C, Kühne K. 2019. Bedeutung von Marteloskopen für Praxis und Lehre in der Schweiz [The importance of marteloscopes for practice and education in Switzerland]. Schweizerische Zeitschrift für Forstwesen. 170(2):60–68. doi:10.3188/szf.2019.0060.
  • Ullrich T, Schinko C, Fellner W-D. 2010. Procedural modeling in theory and practice. In: Proceeding of the 18th WSCG International Conference on Computer Graphics, Visualization and Computer Vision; Pilsen, Czech Republic. p. 5–8.
  • Uusitalo J, Orland B. 2001. Virtual forest management: possibilities and challenges. Int J For Eng. 12(2):57–66. doi:10.1080/14942119.2001.10702447.
  • Vega C, Hamrouni A, Mokhtari AE, Morel M, Bock J, Renaud JP, Bouvier M, Durrieu S. 2014. PTrees: a point-based approach to forest tree extraction from lidar data. Int J Appl Earth Observ Geoinf. 33(1):98–108. doi:10.1016/j.jag.2014.05.001.
  • Wang J, Liu J, LeDoux CB. 2009a. A three-dimensional bucking system for optimal bucking of central Appalachian hardwoods. Int J For Eng. 20(2):26–35. doi:10.1080/14942119.2009.10702580.
  • Wang J, Shan J. 2009b. Segmentation of LiDAR point clouds for building extraction. In: American Society for Photogrammetry and Remote Sensing Annual Conference; Baltimore, MD, USA. p. 9–13.
  • Wang J, Sharma BD, Li Y, Miller GW. 2009. Modeling and validating spatial patterns of a 3D stand generator for central Appalachian hardwood forests. Comput Electron Agric. 68(2):141–149. doi:10.1016/j.compag.2009.05.005.
  • Wang L, Chu CH. 2009. 3D building reconstruction from LiDAR data. In: 2009 IEEE International Conference on Systems, Man and Cybernetics; San Antonio, TX, USA. p. 3054–3059.
  • Wang X, Song B, Chen J, Zheng D, Crow TR. 2006. Visualizing forest landscapes using public data sources. Landsc Urban Plan. 75(1–2):111–124. doi:10.1016/j.landurbplan.2004.12.010.
  • Wu C, Agarwal S, Curless B, Seitz SM. 2011. Multicore bundle adjustment. In: CVPR 2011; Colorado Springs, CO, USA. p. 3057–3064.
  • Yang X, Koehl M, Grussenmeyer P. 2018. Automating parametric modelling from reality-based data by revit api development. In: Remondino F, Georgopoulos A, González-Aguilera D, Agrafiotis P, editors. Latest developments in reality-based 3D surveying and modelling. Basel (Switzerland): MDPI; p. 307–325.
  • Yu CP, Lee HY, Luo XY. 2018. The effect of virtual reality forest and urban environments on physiological and psychological responses. Urban Green. 35:106–114. doi:10.1016/j.ufug.2018.08.013.
  • Zerman E, Ozcinar C, Gao P, Smolic A. 2020. Textured mesh vs coloured point cloud: a subjective study for volumetric video compression. In: 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX); Athlone, Ireland. p. 1–6.
  • Zhou Q-Y, Park J, Koltun V. 2018. Open3D: a modern library for 3D data processing. arXiv preprint. arXiv:1801.09847.