
Applying photogrammetry to reconstruct 3D luminance point clouds of indoor environments

Pages 56-72 | Received 29 May 2020, Accepted 07 Dec 2020, Published online: 22 Dec 2020

ABSTRACT

Computer-aided 3D modelling is the standard design method in the architecture, engineering, construction, owner, operator (AECOO) industry and in lighting design. Applying a photogrammetric process, a sequence of images is used to reconstruct the geometry of a space or an object in a 3D model. Likewise, a calibrated digital camera is utilized to measure the surface luminance values of an environment or an object. We propose a workflow in which the geometry and the luminance values are measured simultaneously by combining these two measurement methods. The pipeline has been assessed and validated through the application to a case study, the Aalto Hall (Aalto University, Espoo, Finland), in order to understand its potential.

Introduction

In the architecture, engineering, construction, owner, operator (AECOO) industry, three-dimensional (3D) computer-aided modelling is applied to design in order to oversee construction and to maintain a building or built environment. Moreover, computer-aided 3D modelling is the standard method in lighting design (Roy, Citation2000; Shikder, Price, & Mourshed, Citation2009). Lighting distribution in an environment is a crucial attribute in both the ergonomics and the attractiveness of a space (Bellia, Bisegna, & Spada, Citation2011; Öztürk, Citation2003; So & Leung, Citation1998; Tiller & Veitch, Citation1995). In addition to occupant satisfaction, well-designed lighting can reduce the energy consumption of indoor lighting without compromising quality (Loe & Rowlands, Citation1996; Muhamad, Zain, Wahab, Aziz, & Kadir, Citation2010; Van Den Wymelenberg, Inanici, & Johnson, Citation2010). Architects and lighting designers need to understand the characteristics of existing lighting in order to design new lighting configurations (Rodrigue, Demers, & Parsaee, Citation2020).

Our aim is to develop and present (through a case study) a process allowing both the 3D reconstruction of interior geometry and luminance mapping from the same image set. In other words, by applying photogrammetry we reconstruct a 3D luminance point cloud and present the measuring of a 3D luminance map in indoor environments. Section 2 describes the principles of the imaging luminance photometry and photogrammetric reconstruction relevant to this article. In Section 3, Materials and methods, the workflow is illustrated with particular attention to the combination of imaging luminance photometry and photogrammetry. In addition, the indoor space specifications of our case study are presented in Section 3. The Results section shows the outcomes of our method: the luminance image locations and orientations, the 3D surface model and the 3D luminance point cloud. The luminance values in the created luminance point cloud and single luminance images are compared, and their differences listed. In the Discussion section, we discuss the potential of a luminance point cloud when used in lighting assessment and visualization and propose possible further research.

Related work

The human eye and the visual system adapt to lighting conditions via a variety of mechanisms. The luminance distribution of a visual environment governs the level of adaptation in the observer’s retina. The adaptation luminance in the field of view affects contrast sensitivity, visual acuity, accommodation, pupillary contraction and eye movements. Furthermore, the luminance distribution also affects certain aspects of visual comfort such as glare sensitivity, visual fatigue and dullness due to a visually non-stimulating environment (CEN, Citation2002).

The luminances of a space can be measured point-by-point by employing a spot luminance meter. For a thorough analysis of luminance distribution, however, the point-by-point method is not pragmatic. Hence, a commonly used method for luminance distribution measurement is imaging luminance photometry (Borisuit, Scartezzini, & Thanachareonkit, Citation2010; Hiscocks & Eng, Citation2011; Rea & Jeffrey, Citation1990). In imaging luminance photometry, a radiometrically calibrated digital camera is used to capture the luminance values of a measurement area in an image or series of images. Even though the measured spaces are three dimensional, the measurement images are only two dimensional. This limits the applicability of imaging luminance photometry in spatial analyses and creates data management issues if larger environments are studied.

The history of image-based 3D building measurement (i.e. architectural photogrammetry) is nearly as old as photography itself (Albota, Citation1976). From the 1990s, digital photogrammetry enabled automatic image measurements, camera calibration and exterior orientation (Haggrén & Niini, Citation1990; Heikkilä & Silven, Citation1997; Lowe, Citation1999; Pollefeys, Koch, & Van Gool, Citation1999; Stathopoulou, Welponer, & Remondino, Citation2019). Current photogrammetric software can automatically reconstruct 3D mesh models (Furukawa, Curless, Seitz, & Szeliski, Citation2009; Jancosek & Pajdla, Citation2011; Romanoni, Delaunoy, Pollefeys, & Matteucci, Citation2016). For indoor modelling, various photogrammetric methods have been used, such as 3D mapping systems (El-Hakim, Boulanger, Blais, & Beraldin, Citation1997), videogrammetry-based 3D modelling (Haggrén & Mattila, Citation1997), structured indoor modelling (Ikehata, Yang, & Furukawa, Citation2015), cloud-based indoor 3D modelling (Ingman, Virtanen, Vaaja, & Hyyppä, Citation2020) and other indoor measuring methods (Georgantas, Brédif, & Pierrot-Desseilligny, Citation2012; Lehtola et al., Citation2017; León-Vega & Rodríguez-Laitón, Citation2019). In addition, high dynamic range (HDR) photogrammetry has been used for luminance mapping of the sky and the sun (Cai, Citation2015), and a laser-scanned point cloud has been coloured with luminance values in a nighttime road environment (Vaaja et al., Citation2015; Vaaja et al., Citation2018).

While both imaging luminance measurement and indoor 3D reconstruction have been extensively presented with conventional digital cameras in prior research, their integration remains to be demonstrated. Laser scanning and HDR imaging have been proposed for surveying and visualizing lighting in indoor spaces, since current lighting measurement methods are too limited for 3D design (Rodrigue et al., Citation2020). The study also showed that the characteristics of the existing lighting need to be understood by architects and lighting designers in order to redesign the lighting. However, when Rodrigue et al. (Citation2020) performed such measurements, they only transmitted the colour values for visual inspection of the 3D point cloud, as the values were obtained from an uncalibrated scanner camera and not from a luminance-calibrated camera. In addition, their research revealed that 2D luminance photometry alone is not appropriate for surveying an indoor space with six degrees of freedom, and image locations must be managed manually. Since the geometry and the luminance maps remain in separate data sets, the use of the data is impractical. Therefore, the current research gap is the integration and the simultaneous execution of luminance imaging and photogrammetric 3D reconstruction, and the automatic management of image locations and orientations. Furthermore, the main 3D reconstruction software, such as Agisoft Metashape,Footnote1 COLMAP,Footnote2 iWittnessPRO,Footnote3 Meshroom,Footnote4 RealityCapture,Footnote5 VisualSFMFootnote6 and WebODMFootnote7, cannot be conveniently applied to create a 3D luminance model. The main inconvenience is the inflexibility of the usable bit-depth (8 bits) in 3D reconstruction software, and the bit-depth (16 bits) needed to preserve the dynamic resolution of luminance imaging. Hiscocks (Citation2016) presents instructions on how to calibrate a camera in terms of photometry using open source software (LumaFootnote8). However, following the mentioned instructions and using Luma requires more or less the same knowledge and experience as performing the calibration and creating the luminance image management software by oneself. Hence, the 3D luminance measurement is not currently an easily approachable technology for professionals such as architects and lighting designers. Obviously, 3D luminance can be simulated when using CAD models or building information models (BIMs) (Foldbjerg et al., Citation2012). However, CAD models or BIMs of old buildings are not always available. Moreover, there exist several software solutions for lighting calculation and lighting design (e.g. AGi32,Footnote9 AutoLUX,Footnote10 DiaLUX,Footnote11 Radiance,Footnote12 ReluxFootnote13).

Principles of luminance imaging and photogrammetric reconstruction

Luminance imaging

Imaging luminance photometry refers to capturing the two-dimensional (2D) presentation of an environment. The photometry is performed by applying a digital camera that has been radiometrically calibrated (Anaokar & Moeck, Citation2005; Hiscocks & Eng, Citation2013; Meyer, Gibbons, & Edwards, Citation2009; Wüller & Gabele, Citation2007). Hence, each picture element presents an individual luminance value of the measured scene. The digital R, G and B pixel values can be interpreted as relative luminance values when applying Equation 1 (IEC, Citation1999):

Lr = 0.2126 R + 0.7152 G + 0.0722 B.   (1)

The relative luminance, Lr, can be converted into absolute luminance (cd/m2) by applying a calibration constant. The calibration constant is unique for each camera, and it can be obtained by calibrating the camera using a reference luminance source. Furthermore, the vignetting of the camera must be solved as part of the luminance calibration (Kurkela et al., Citation2017).
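
As a minimal illustration of Equation 1 and the calibration step, the following Python sketch converts a vignetting-corrected, linear RGB pixel value into relative and then absolute luminance. The pixel value and the calibration constant are purely illustrative; in practice, the constant is determined for each camera with a reference luminance source, as described above.

import numpy as np

def relative_luminance(rgb_linear):
    # Relative luminance from linear R, G and B values (Equation 1)
    r, g, b = rgb_linear[..., 0], rgb_linear[..., 1], rgb_linear[..., 2]
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def absolute_luminance(rgb_linear, calibration_constant):
    # Absolute luminance (cd/m2) via a camera-specific calibration constant
    return calibration_constant * relative_luminance(rgb_linear)

# One vignetting-corrected, linear RGB pixel (illustrative values only)
pixel = np.array([0.42, 0.38, 0.35])
print(absolute_luminance(pixel, calibration_constant=870.0))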

There are commercial imaging luminance photometers, such as the TechnoTeam LMK Mobile Advanced.Footnote14 Figure 1 presents a screenshot of a pseudo-coloured luminance measurement from the TechnoTeam LMK LabSoftFootnote15 software. However, none of the commercial luminance imaging solutions were applicable for this study, as we needed complete control throughout the image processing. Hence, we used a camera that we calibrated ourselves, and we programmed all of the luminance image processing ourselves.

Figure 1. A pseudo-coloured luminance mapping measured with an imaging luminance photometer (LMK LabSoft). The legend shows the absolute luminance values (in cdm−2) on a logarithmic scale.

Photogrammetric reconstruction

Photogrammetry can be defined as a set of methods for interpreting and measuring images that can be used to determine the shape of an object and the location of images taken of the object (Luhmann, Robson, Kyle, & Harley, Citation2006, p. 2).

Figure 2 shows the simplified photogrammetric reconstruction process. The first task after image capturing is to identify the characteristic features Pn,m that describe the objects within the picture. For this purpose, the most common algorithm is the scale invariant feature transform (SIFT), which recognizes scale-invariant features in an image (Lowe, Citation1999). The features identified in the input images are used as reference elements in order to recognize the same objects within different images; therefore, matching is performed first between features and, consequently, between images.
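
The reconstruction software uses its own feature detection and matching implementation, but the principle can be sketched with OpenCV's SIFT implementation in Python; the file names below are placeholders rather than images from the study.

import cv2

# Two overlapping views of the scene (placeholder file names)
img1 = cv2.imread("view_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_002.jpg", cv2.IMREAD_GRAYSCALE)

# Detect scale-invariant features and compute their descriptors in both images
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep only unambiguous matches (Lowe's ratio test)
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
print(len(good), "candidate tie points between the two views")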

Figure 2. The principal procedures of close range photogrammetry. The dotted line indicates the connection between image features Pn,m, external image orientations Rn,tn and the tie point P1.

Solving internal and external image orientations is the step that determines the relationship between image observations Pn,m and the 3D points Pn of the object. External orientation determines the orientation and position of the camera in the global coordinate system. The rotation matrix R defines the angular orientation and the vector t is the spatial position of the camera.

Interior orientation represents the intrinsic geometric properties of the camera and the lens system. The parameters of interior orientation describe the position of the perspective centre, the focal length and the location of the principal point. The lens system causes errors in the image, which are compensated for by modelling major geometric distortions such as radial and tangential distortion. The sensor deviations are corrected with affinity and shear parameters, which represent the orthogonality of the image plane and the scale of the image coordinates (e.g. Brown, Citation1971; Fryer & Brown, Citation1986; Heikkilä, Citation2000).

The process of solving internal and external image orientations is iterative and usually begins with the orientation of a pair of images and the reconstruction of the scene (Nister, Citation2004). As further images are added, their orientations are solved as well. The orientations are approximate, and the results are improved by using a bundle adjustment that optimizes the external and internal orientations of all images as well as the sparse point cloud consisting of the tie points Pn (e.g. Brown, Citation1976). The scale can be determined by measuring the distance between two measured points Pn.

There are several methods and algorithms that can be used to reconstruct a high-detail 3D surface model. For example, it can be retrieved directly from the depth maps (Furukawa et al., Citation2009) or from the sparse point cloud. However, a model derived from the sparse point cloud is often an insufficient representation of reality because it lacks information. Alternatively, a dense point cloud can be used. In these alternatives, a depth map is reconstructed for all oriented images (i.e. a depth value is determined for each pixel) using methods such as Semiglobal Matching (SGM) (Hirschmuller, Citation2008) or AD-Census (Mei et al., Citation2011). In addition, the tetrahedralization method proposed by Jancosek and Pajdla (Citation2011) can be adopted because it performs well for reconstructing large, uniform, and mono-coloured surfaces. Finally, the 3D model is textured.
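
As a small illustration of depth estimation by semiglobal matching, OpenCV provides a modified SGM implementation (StereoSGBM). The sketch below assumes an already rectified stereo pair with placeholder file names; the reconstruction software used in this study applies its own multi-view algorithms rather than this two-view example.

import cv2

# A rectified stereo pair (placeholder file names)
left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

# Semiglobal block matching; disparity is returned in 1/16-pixel fixed point
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = sgbm.compute(left, right).astype("float32") / 16.0
print("disparity range:", disparity.min(), "to", disparity.max(), "pixels")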

Materials and methods

Camera equipment and image data

As the aim was to facilitate both luminance mapping and 3D reconstruction from the same image set, the requirements of both methods had to be taken into account in data acquisition. Imaging luminance photometry is usually accomplished by capturing multiple images from one location using a tripod. Photogrammetric imaging is performed by capturing images from different locations, and a tripod could be used especially in low-light situations. In our case, however, the complex environment would have made the use of a tripod inconvenient and time-consuming. Hence, we captured the image set with a hand-held camera. As the optimal camera system and settings for these two measurement methods differ, a compromise is needed. In practice, this means that the exposure is determined by the lowest luminance value we want to measure, and that HDR imaging was not applicable.

We used a Nikon D800E camera with a Nikkor AF-S 14-24 mm f/2.8G ED lens locked to a 14 mm focal length. For hand-held image capturing, we applied a 1/250 s shutter speed. The aperture of f/5.6 and the ISO sensitivity of 3,200 were selected for optimal depth of field and signal-to-noise ratio. The camera had been calibrated both radiometrically and geometrically, and the vignetting correction function for the camera system was determined (Kurkela et al., Citation2017).

The full measurement data set consisted of 453 Nikon electronic format (NEF) raw images. The adopted 3D reconstruction software slightly altered the 16-bit images, while the 8-bit images remained unchanged. Hence, the following pre-processing was performed on the measurement series in order to preserve its bit depth when using 8-bit images. The luminance information is a single scalar that is obtained from the measured R, G, and B values by applying Equation 1. The original bit depth of the raw measurement was between 12 and 13 bits, which was converted to a 16-bit image for luminance calculation. In the photogrammetric reconstruction, we had three 8-bit channels – R, G and B – that we could use for each 16-bit luminance scalar. Hence, the single 16-bit luminance scalar was mapped over the three 8-bit channels. In this way, a usable dynamic range of 3 × 8 bits was attained, with the ability to distinguish 2²⁴ different numeric values. Thus, the dynamic resolution of the image (approximately 13 bits) can be preserved through the 3D reconstruction process.
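
The exact coding used to spread the 16-bit luminance scalar over the three 8-bit channels is not specified here; one possible scheme, sketched below in Python, stores the high byte in R and the low byte in G. Note that such a bit-split coding survives the reconstruction only if the software does not resample or blend the encoded values, so this is an illustration of the idea rather than the coding used in the study.

import numpy as np

def pack_luminance(lum16):
    # Spread a 16-bit luminance value over two 8-bit channels (assumed scheme)
    lum16 = lum16.astype(np.uint16)
    r = (lum16 >> 8).astype(np.uint8)     # most significant byte
    g = (lum16 & 0xFF).astype(np.uint8)   # least significant byte
    b = np.zeros_like(r)                  # spare channel, unused in this sketch
    return np.dstack([r, g, b])

def unpack_luminance(rgb8):
    # Recover the 16-bit luminance value from the packed channels
    return (rgb8[..., 0].astype(np.uint16) << 8) | rgb8[..., 1].astype(np.uint16)

lum = np.array([[0, 1234, 65535]], dtype=np.uint16)
assert np.array_equal(unpack_luminance(pack_luminance(lum)), lum)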

Workflow for 3D luminance mapping

The workflow (see Figure 3) includes the documentation of the luminance image locations and orientations, the 3D surface model and the 3D luminance mapping.

Figure 3. The workflow for producing (a) image locations and orientations, (b) the 3D surface model and (c) the luminance point cloud.

Images captured in the NEF image format were developed into linear 16-bit TIFF images using the DCRawFootnote16 image processing programme. In the conversion, the Nikon raw colour space was used, and the pixels were interpolated using adaptive homogeneity-directed (AHD) interpolation. We corrected vignetting, and then the processing branched into the photogrammetry part and the luminance photometry part (Figure 3). The 16-bit images were processed into two different 8-bit versions. For the photogrammetric image version, the RGB images were processed so that as many feature points as possible could be identified. For the luminance photometry image version, the 16-bit linear RGB images were transformed into monochromatic relative luminance images by applying Equation 1. The 16-bit monochromatic luminance data was coded over the three 8-bit RGB channels of an 8-bit image to obtain the extended bit depth, as described in Section 3.1 (Camera equipment and image data).

Agisoft Metashape software version 1.6.0.9925 was utilized for the 3D reconstruction (Figure 3). Internal and external image orientations were solved using the following alignment parameters: accuracy was set to 'highest', generic preselection was used, and the tie point limit was set to 100,000. For internal orientation, the optimization parameters focal length (f), principal point coordinates (cx, cy), radial distortion coefficients (k1, k2, k3) and tangential distortion coefficients (p1, p2) were used.
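
The reconstruction was run through the Metashape graphical interface; the same steps could in principle be scripted with the Metashape Python API, roughly as sketched below. The parameter names follow the 1.6-era API and the values only approximate the settings listed above, so this should be treated as an indicative sketch to be checked against the API version in use, with placeholder file names.

import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["img_0001.tif", "img_0002.tif"])  # placeholder file names

# Feature matching and camera alignment (internal and external orientations)
chunk.matchPhotos(downscale=0,                # highest accuracy
                  generic_preselection=True,
                  tiepoint_limit=100000)
chunk.alignCameras()

# Depth maps and mesh reconstruction from the depth maps
chunk.buildDepthMaps(downscale=1,             # ultra high quality
                     filter_mode=Metashape.MildFiltering)
chunk.buildModel(source_data=Metashape.DepthMapsData)

doc.save("project.psx")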

In Agisoft Metashape, there are three methods to reconstruct a surface mesh model: directly from the sparse cloud, using the dense cloud and using the depth maps. We created the 3D surface model by applying the depth-map-based mesh reconstruction method (Agisoft Metashape User ManualFootnote17). Depth map generation parameters were set to 'ultra high quality' with 'mild filtering'. At this point, a dense mesh model textured with the RGB images was exported for visual assessment. However, the exported RGB 3D model was not used for creating the luminance point cloud. Instead, the RGB images used for the 3D reconstruction were replaced with the luminance images (see Figure 3). The surface model was textured with the luminance images using the 'average' blending mode, which uses the weighted average value of all pixels from the individual photos. The mesh model textured with the luminance images was exported from Agisoft Metashape in the .obj format. The model was opened with CloudCompareFootnote18 version 2.9.1 and sampled into a point cloud using the Sample Points tool. The point cloud was exported as a text file in the format XYZRGB. The text file was processed by a Python programme written by us. For each point, a 16-bit relative luminance value was calculated from the R, G and B values. An absolute luminance value was calculated from each relative luminance value by applying the luminance calibration constant. The constant was obtained by capturing images of the exit port of an Optronic Laboratories, Inc., model 455–6–1 reference luminance source. The luminance value of the exit port was simultaneously measured with a Konica Minolta CS-2000 spectroradiometer. Finally, the luminance calibration constant was determined by comparing the camera measurement to the spectroradiometer measurement.
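
A sketch of this final conversion step is given below. It assumes the same byte-packing as the sketch in Section 3.1 and an illustrative calibration constant, and the file names are placeholders rather than the actual project files; the programme used in the study is not reproduced here.

import numpy as np

CALIBRATION_CONSTANT = 870.0   # illustrative value, not the constant of the study

points = np.loadtxt("sampled_points.txt")      # columns: X Y Z R G B
x, y, z, r, g, b = points.T

# Decode the packed 16-bit relative luminance (high byte in R, low byte in G)
relative = (r.astype(np.uint32) << 8) | g.astype(np.uint32)

# Scale to absolute luminance (cd/m2) with the camera calibration constant
luminance = CALIBRATION_CONSTANT * relative / 65535.0

# Save the luminance point cloud in the XYZL format
np.savetxt("luminance_point_cloud.txt",
           np.column_stack([x, y, z, luminance]), fmt="%.4f")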

Case study

In the presented case, the measured space is a lecture hall (Aalto Hall, Aalto University, Espoo, Finland), designed by architect Alvar Aalto. This interior and the entire building are considered culturally valuable and are protected under the Finnish Act on the Protection of the Built Heritage. The protection implies that any altering of the space is strictly regulated. However, the light sources can be updated as better lighting technologies emerge and the original light sources are no longer available. Due to the geometry of the space (see Figure 4), both its geometry and lighting can be considered difficult to measure. The area of Aalto Hall is 493 m2, and the maximum allowed number of persons inside is 570.

Figure 4. The measured lecture hall.

Results

Luminance image location documentation

When applying the photogrammetric reconstruction workflow described in Section 3.2 (Workflow for 3D luminance mapping), the locations and orientations of the captured images were archived as metadata. This was executed in the internal and external camera orientation phase of the photogrammetric process, as illustrated in Figure 3. Figure 5 shows the 145,301 tie points used for camera alignment and the locations at which the 453 images were captured. The luminance images incorporated the same internal and external orientations as the RGB images captured for photogrammetric reconstruction.

Figure 5. The tie points used for camera alignment.

The reconstructed 3D surface model

Figure 6 shows the 3D surface model reconstructed from the RGB images. The 3D model was reconstructed using 453 images in Agisoft Metashape. The mesh density varied considerably, depending on the number of unique features in the surface texture. Table 1 shows information about the 3D model. Figure 7 illustrates a detail from the textured surface model.

Figure 6. The textured 3D surface model.

Figure 7. A detail from the textured surface model shown both without texture (left) and with image texture (right).

Table 1. The processing parameters.

The luminance point cloud

The pseudo-coloured luminance point cloud

Figure 8 presents the luminance point cloud of the measurement area. The luminance point cloud is utilizable for visualizing or analyzing the luminance distribution of an interior in 3D, and the pseudo-colours represent the absolute luminance value range of 0.0–1142.1 cd/m2.

Figure 8. A luminance point cloud.

The luminance point cloud was sampled from the textured surface model and consisted of 100,002,877 points. Hence, the point density of the point cloud had an even distribution of approximately 80,000 points per square metre. Every point in the point cloud was coloured with its respective luminance value, as described in Section 3.2 (Workflow for 3D luminance mapping).

Figure 9 illustrates the RGB point cloud of the measurement area. The point cloud contains both the luminance values and the RGB presentation. Switching between the luminance and RGB presentations assists in visual lighting assessment.

Figure 9. An RGB point cloud.

Quantifiable comparison of the reflective surface

During the 3D reconstruction of the measured environment, both the geometry and the radiometry are approximated to a certain extent. In order to assess the correspondence between the single-image luminance measurements and the reconstructed luminance point cloud, seven surface areas (areas A to G) were chosen. Figure 10 illustrates the areas. As the measurement image set consisted of 453 images, each surface area appeared in tens of images. We chose to present three single measurements per surface area, selected from different measurement locations. Table 2 presents the luminance values from the single measurements and from the luminance point cloud, and the relative differences between the average luminance in the three single images and the point cloud. The luminance values of the single images were used as the ground truth to which the luminance values of the point cloud were compared.
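
For clarity, the comparison metric can be expressed as a short sketch. The exact definition behind Table 2 (signed or absolute difference, median or mean aggregation) is not spelled out in the text, so the function below is an assumption and the numbers are illustrative only.

def relative_difference(point_cloud_value, single_image_values):
    # Relative difference (%) between the point-cloud luminance of an area
    # and the average of the single-image luminances used as ground truth
    ground_truth = sum(single_image_values) / len(single_image_values)
    return 100.0 * abs(point_cloud_value - ground_truth) / ground_truth

# Illustrative values for one reference area (not taken from Table 2)
print(round(relative_difference(102.5, [95.0, 98.1, 96.4]), 1), "%")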

Figure 10. The luminance measurement areas A to G.

Table 2. The median luminance values measured from the selected reference areas and analyzed for single images and the reconstructed point cloud.

Discussion

We presented simultaneous imaging luminance photometry and photogrammetry applied to an indoor space through a case study and the 3D luminance point cloud created from the measurement. The 3D reconstruction automatically created documentation of the imaging luminance measurements where the positions and the orientations of the camera were placed in a 3D point cloud, as described in Section 3.2 (Workflow for 3D luminance mapping). The measured luminance values remained constant through the 3D reconstruction with satisfactory confidence. The absolute luminance values of the reconstructed 3D point cloud were similar to the values in the single luminance images of the calibrated camera, with an average relative difference of 8.9%.

The presented 3D luminance mapping method offers several potential benefits for the AECOO industry. As the presented method comprises both geometric reconstruction and luminance mapping, it can also be applied in cases where no pre-existing 3D digital documentation is available. Therefore, 3D luminance mapping is also compatible with buildings for which BIMs do not exist. Hence, the method can also be applied with older buildings and, for example, cultural heritage sites that may not have any up-to-date digital documentation available. In such cases, the evaluation of current lighting conditions can be carried out without needing a full building survey by terrestrial laser scanning (TLS) and manual modelling of the existing building. For complex sites, the manual modelling alone may require several weeks of working time (e.g. Kersten et al., Citation2017), making lighting estimation via 3D modelling and simulation rather inefficient. Potentially, the presented method can simultaneously serve the needs of the 3D documentation of building geometry and the mapping of lighting conditions, leading to further cost savings via the use of more affordable instruments. In comparison with TLS, close range photogrammetry utilizing Agisoft Metashape has been found to be feasible for the 3D documentation of the built environment (Kersten, Mechelke, & Maziull, Citation2015). As the photogrammetric reconstruction simultaneously solves the imaging locations, they can also be included in the building documentation and queried according to position, thus simplifying data management in projects. Furthermore, applying a luminance-calibrated camera does not necessarily increase the workload of photogrammetric measurement, and the luminance calculations can be implemented in the photogrammetric reconstruction process without a significant increase in computing time.

In retrofitting projects, the method allows for the verification of the existing lighting systems’ performance, and identification of the most significant problem areas in the building, where modifications to the lighting system have to be considered. This can support energy savings via the use of natural light (Gago, Muneer, Knez, & Köster, Citation2015) or an estimation of visual comfort (Konis, Citation2013). Future work should address the approach in a real-life AECOO scenario and measure the amount of time and money that could be saved during the retrofitting phase of the building (e.g. installing new windows or lights in a specific area).

Large, smooth, uniform and mono-coloured surfaces are problematic to capture and reconstruct accurately with photogrammetry (Lehtola, Kurkela, & Hyyppä, Citation2014). Such surfaces lack the unique features that are essential for photogrammetric reconstruction. In addition, obstructed and dark areas can be difficult to reconstruct. Good measurement practices can mitigate these problems to a certain extent. By ensuring sufficient and diffuse illumination, the measurer can use a low ISO value with a digital camera in order to achieve as high a signal-to-noise ratio as possible. In the case of 3D lighting measurement, however, the lighting cannot be adjusted. Best practice should also ensure that each feature is present in two or more input images. A tripod could be used when capturing the whole photogrammetric image series, but this would be slow and impractical.

Large uniform surfaces are convenient to measure with a laser scanner, if such a device is available. Compared to photogrammetry, laser scanning benefits from better measuring accuracy and precision on flat and featureless surfaces and from direct scale determination, among other things. Laser scanning is an active measuring method that emits either visible or infrared radiation, and thus it cannot be used on its own to measure luminance. Moreover, laser scanners often include a digital camera for colouring the scanned point clouds. If the camera in the laser scanner is controllable and luminance-calibrated, it is possible to capture luminance point clouds with the scanner; otherwise, with an uncalibrated scanner camera, the colour values can only be used for visual inspection of the 3D point cloud. Nevertheless, photogrammetric reconstruction combined with laser scanning benefits from the advantages of both methods (Julin et al., Citation2019). These benefits include the dimensional accuracy of laser scanning and the dynamic range and resolution of photogrammetry. Presumably, the integration of laser scanning and photogrammetry is even a necessity, unless all of the surfaces of the measurement environment have features suitable for photogrammetry. Hence, for future research, we strongly recommend considering laser scanning as a part of 3D luminance measurement.

It is possible to integrate the presented 3D reconstruction method into conventional indoor luminance measurements as they are described in CEN or IES measurement standards or guidelines (CEN, Citation2002; IES, Citation2014). In such an integration, the 3D model works as a catalogue of luminance measurements performed from the locations and measurement directions according to the guidelines. Moreover, HDR measurement for glare assessment is also possible in this scenario. Obviously, this would require a lot of front-end software development, as no current off-the-shelf 3D reconstruction software is especially designed for luminance measurement. A possible topic for further research could include a hybrid method approach, where the luminance and HDR glare measurements are taken from a tripod following the standardized measurement guidelines, and the same imaging luminance photometer is used for non-HDR photogrammetry. In this way, it is possible to obtain the standardized measurements in order to register the 3D point cloud and to perform the photogrammetry in a pragmatic time frame.

Conclusions

In this study, we presented a workflow where photogrammetric 3D reconstruction and imaging luminance photometry are performed from the same image set captured with a luminance-calibrated camera. Furthermore, we demonstrated the workflow via a case study. We assessed the luminance data quality of the created luminance point cloud by comparing the luminance values in the point cloud to single luminance images used as ground truth data. The relative average difference between the luminance point cloud and the single luminance images was 8.9%. Both 3D measurement and reality-based 3D modelling are spreading from a niche position to various forms of applications. The workflow introduced in this article could already be repeated with low-cost equipment such as compact cameras. In even more open-ended predictions of the future, we will hopefully see nighttime 3D city models in addition to today’s daytime models. For creating such nighttime 3D city models, the authors believe that luminance photogrammetry could certainly be a utilizable method. Even though this research focused on indoor applications, the presented workflow can be generalized for various 3D lighting measuring and modelling concepts.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the Academy of Finland – Strategic Research Council under Grant [No. 293389, 314312].

Notes

1 Agisoft Metashape. Agisoft. Available at: https://www.agisoft.com/ (accessed February 4, 2020)

2 COLMAP. Johannes L. Schoenberger. Available at: https://colmap.github.io/ (accessed August 28, 2020)

3 iWittnessPRO. Photometrix Photogrammetry Software. Available at: https://www.photometrix.com.au/ (accessed August 28, 2020)

4 Meshroom. ALICEVISION association. Available at: https://alicevision.org/ (accessed August 28, 2020)

5 RealityCapture. Capturing Reality s.r.o. Available at: https://www.capturingreality.com/ (accessed February 4, 2020)

6 VisualSFM. Changchang Wu. Available at: http://ccwu.me/vsfm/ (accessed August 28, 2020)

7 WebODM. OpenDroneMap. Available at: https://www.opendronemap.org/ (accessed August 28, 2020)

8 LUMA: Luminance Analysis. Syscomp Electronic Design Limited. Available at: https://www.ee.ryerson.ca/~phiscock/astronomy/luma/luma-58.zip (accessed August 28, 2020)

9 AGi32. Lighting Analysts. Available at: https://lightinganalysts.com/ (accessed August 28, 2020)

10 AutoLUX. Keysoft Solutions. Available at: https://www.keysoftsolutions.co.uk/bim-products/keysoft-traffic/keylights/ (accessed August 28, 2020)

11 DIALux. DIAL. Available at: https://www.dial.de/en/home/ (accessed August 28, 2020)

12 Radiance. Lawrence Berkeley National Laboratory. Available at: https://www.radiance-online.org/ (accessed August 28, 2020)

13 Relux. Relux Informatik AG. Available at: https://relux.com/en/ (accessed August 28, 2020)

14 TechnoTeam LMK Mobile Advanced. TechnoTeam Bildverarbeitung GmbH. Available at: https://www.technoteam.de/apool/tnt/content/e5183/e5432/e5733/e5736/LMK_mobile_manual_EOS550D_eng.pdf (accessed February 4, 2020)

15 LMK LabSoft. TechnoTeam Bildverarbeitung GmbH. Available at: https://www.technoteam.de/product_overview/lmk/software/lmk_labsoft/index_eng.html (accessed February 4, 2020)

16 DCRaw. Dave Coffin. Available at: https://www.dechifro.org/dcraw/ (accessed February 4, 2020)

17 Agisoft Metashape User Manual: Professional Edition, Version 1.6. 2020 Agisoft LLC. Available at: https://www.agisoft.com/pdf/metashape-pro_1_6_en.pdf (accessed October 25, 2020)

18 CloudCompare. CloudCompare project. Available at: http://www.cloudcompare.org/ (accessed February 4, 2020)

References