Figures & data
Figure 1. Concept of imaging physics. I is the image, N is the normal of the object, τup and τdown are the upwelling and downwelling transmission processes, and θi and θr are the incident angle and reflection angle, respectively.
![Figure 1. Concept of imaging physics. I is the image, N is the normal of the object, τup and τdown are the upwelling and downwelling transmission processes, and θi and θr are the incident angle and reflection angle, respectively.](/cms/asset/294114ca-6533-409c-9971-5016343ee1a1/tgsi_a_1730712_f0001_c.jpg)
Figure 2. Examples of image segmentation of the ISPRS dataset (Gerke Citation2014).
![Figure 2. Examples of image segmentation of the ISPRS dataset (Gerke Citation2014).](/cms/asset/c66668c8-fd3f-4797-b6ba-f71559712b56/tgsi_a_1730712_f0002_c.jpg)
Figure 3. An example of dense matching of two images of Wuhan University campus. (a) Left input image; (b) Left epipolar image; (c) Right input image; (d) Right epipolar image; (e) Matching result (disparity map) of the left epipolar image.
![Figure 3. An example of dense matching of two images of Wuhan University campus. (a) Left input image; (b) Left epipolar image; (c) Right input image; (d) Right epipolar image; (e) Matching result (disparity map) of the left epipolar image.](/cms/asset/28418c08-a95e-4122-9c63-191023e9d033/tgsi_a_1730712_f0003_c.jpg)
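To make the dense matching in Figure 3 concrete: once the images are resampled to epipolar geometry, corresponding points lie on the same row, so a matcher only searches horizontally and records the column offset (disparity) of the best match. A minimal sum-of-absolute-differences (SAD) block matcher sketching this idea is shown below; the function name, window/search parameters, and the synthetic stereo pair are illustrative assumptions, not the method used in the figure.

```python
import numpy as np

def disparity_sad(left, right, max_disp=4, win=1):
    """Brute-force SAD block matching along epipolar (row) direction.

    For each left-image pixel, compare a (2*win+1)^2 patch against
    patches in the right image shifted by d = 0..max_disp columns and
    keep the shift with the lowest absolute-difference cost.
    """
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    pad_l = np.pad(left, win, mode="edge")
    pad_r = np.pad(right, win, mode="edge")
    for y in range(h):
        for x in range(w):
            patch_l = pad_l[y:y + 2 * win + 1, x:x + 2 * win + 1]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                patch_r = pad_r[y:y + 2 * win + 1, x - d:x - d + 2 * win + 1]
                cost = np.abs(patch_l.astype(int) - patch_r.astype(int)).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic rectified pair: the right image equals the left shifted
# 2 px, so the recovered disparity is 2 away from the image borders.
left = (np.tile(np.arange(16), (8, 1)) * 10).astype(np.uint8)
right = np.roll(left, -2, axis=1)
disp = disparity_sad(left, right)
```

Real pipelines use regularized methods (e.g. semi-global matching) rather than this per-pixel winner-take-all search, but the cost-over-disparity structure is the same.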
Figure 4. An example of bundle adjustment. (a) The input: original images and matched feature points; (b) The position and orientation of the camera and the 3D structure of the building are recovered successfully using Agisoft MetaShape (https://www.agisoft.com).
![Figure 4. An example of bundle adjustment. (a) The input: original images and matched feature points; (b) The position and orientation of the camera and the 3D structure of the building are recovered successfully using Agisoft MetaShape (https://www.agisoft.com).](/cms/asset/61005137-aa94-4d6b-afe1-ee6eca6d7fa3/tgsi_a_1730712_f0004_c.jpg)
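The quantity bundle adjustment (Figure 4) minimizes is the reprojection error: the image distance between observed feature points and the projections of the estimated 3D points through the estimated camera poses. A minimal NumPy sketch of that residual is given below; the pinhole intrinsics, function names, and synthetic scene are assumptions for illustration, not MetaShape's implementation.

```python
import numpy as np

def project(points, R, t, f=1000.0, c=(320.0, 240.0)):
    """Pinhole projection of 3D world points into the image plane
    (assumed focal length f and principal point c)."""
    cam = points @ R.T + t           # world -> camera frame
    uv = cam[:, :2] / cam[:, 2:3]    # perspective division
    return f * uv + np.array(c)

def reprojection_error(obs, points, R, t):
    """RMS distance (pixels) between observed and reprojected points --
    the cost bundle adjustment minimizes over all cameras and points."""
    res = project(points, R, t) - obs
    return np.sqrt((res ** 2).sum(axis=1).mean())

# Synthetic scene: random 3D points in front of a camera at the origin.
rng = np.random.default_rng(0)
pts = rng.uniform([-1, -1, 4], [1, 1, 8], size=(50, 3))
R_true, t_true = np.eye(3), np.zeros(3)
obs = project(pts, R_true, t_true)

# A slightly perturbed camera pose yields a larger reprojection error;
# bundle adjustment iteratively updates poses and points to shrink it.
err_true = reprojection_error(obs, pts, R_true, t_true)
err_bad = reprojection_error(obs, pts, R_true, np.array([0.05, 0.0, 0.0]))
```

In practice this cost is minimized jointly over all camera poses and 3D points with a sparse Levenberg-Marquardt solver.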
Figure 5. An example of vSLAM from a multi-camera rig (Ji et al. Citation2020).
![Figure 5. An example of vSLAM from a multi-camera rig (Ji et al. Citation2020).](/cms/asset/cac8cf5b-a08a-40a8-8759-a5ecd0af1734/tgsi_a_1730712_f0005_c.jpg)
Figure 6. An example of shape from shading for surface refinement. With (a) the GaoFen-1 image, the original SRTM is refined from a resolution of (b) 90 m to (c) 15 m using shape from shading.
![Figure 6. An example of shape from shading for surface refinement. With (a) the GaoFen-1 image, the original SRTM is refined from a resolution of (b) 90 m to (c) 15 m using shape from shading.](/cms/asset/882b83cb-baaa-45ea-b8bf-50376694153e/tgsi_a_1730712_f0006_c.jpg)
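The surface refinement in Figure 6 rests on the shading model that shape from shading inverts: under a Lambertian assumption, observed intensity is I = albedo * max(0, n · l), so intensity constrains the surface normal n given the light direction l. The forward model is sketched below; the function name, the height field z = x², and the vertical light are illustrative assumptions, not the article's formulation.

```python
import numpy as np

def lambertian_shading(normals, light, albedo=1.0):
    """Forward Lambertian model I = albedo * max(0, n . l).
    Shape from shading inverts this relation to refine surface
    normals (and hence heights) from observed image intensity."""
    l = np.asarray(light, dtype=float)
    l = l / np.linalg.norm(l)
    return albedo * np.clip(normals @ l, 0.0, None)

# Surface z = x^2: normals follow from the gradient p = dz/dx (q = 0),
# n = (-p, -q, 1) normalized. Under vertical light the shading equals
# n_z, so it is brightest where the surface is flat (x = 0).
x = np.linspace(-1, 1, 5)
p = 2 * x
n = np.stack([-p, np.zeros_like(p), np.ones_like(p)], axis=1)
n /= np.linalg.norm(n, axis=1, keepdims=True)
shade = lambertian_shading(n, light=[0.0, 0.0, 1.0])
```

Inverting this model is ill-posed for a single image, which is why in Figure 6 the coarse SRTM heights serve as a prior that the image shading then refines.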
Figure 7. A segmentation example of an image-matching-derived point cloud (Nex et al. Citation2015): (a) point cloud generated by image matching (Schonberger and Frahm Citation2016; Schönberger et al. Citation2016); (b) optimized segmentation. Building points are colored by planar segment.
![Figure 7. A segmentation example of an image-matching-derived point cloud (Nex et al. Citation2015): (a) point cloud generated by image matching (Schonberger and Frahm Citation2016; Schönberger et al. Citation2016); (b) optimized segmentation. Building points are colored by planar segment.](/cms/asset/f8dee81d-c849-4754-93d9-affd41f33311/tgsi_a_1730712_f0007_c.jpg)