Figures & data
Figure 1. Operating principle of Shape Inversion GAN for shape completion (Zhang et al., 2021).
![Figure 1. Operating principle of Shape Inversion GAN for shape completion (Zhang et al., 2021).](/cms/asset/c5c96520-9f49-4559-8e31-2816833cf815/tfws_a_2319796_f0001_c.jpg)
Figure 2. Operating principle of PoinTr for shape completion (Yu et al., 2021).
![Figure 2. Operating principle of PoinTr for shape completion (Yu et al., 2021).](/cms/asset/bd867acd-4ac1-4eba-8dca-519af02b72ed/tfws_a_2319796_f0002_c.jpg)
Figure 3. Operating principle of VRCNet for shape completion (L. Pan et al., 2021).
![Figure 3. Operating principle of VRCNet for shape completion (L. Pan et al., 2021).](/cms/asset/a805b3b3-37fd-4918-8bea-8b7bb87c933a/tfws_a_2319796_f0003_c.jpg)
Figure 4. Operating principle of SnowflakeNet for shape completion (Xiang et al., 2021).
![Figure 4. Operating principle of SnowflakeNet for shape completion (Xiang et al., 2021).](/cms/asset/ff0d05d9-4d0e-470a-b107-69a2353d8c11/tfws_a_2319796_f0004_c.jpg)
Figure 5. Operating principle of PointAttN for shape completion (J. Wang et al., 2022).
![Figure 5. Operating principle of PointAttN for shape completion (J. Wang et al., 2022).](/cms/asset/dd6492eb-fc5a-42fe-9b35-9404f6663deb/tfws_a_2319796_f0005_c.jpg)
Table 2. Quantitative results for all networks.
Figure 8. Qualitative results of three point cloud completions from the different networks. Each row shows a different foot from the dataset, rendered from two different views. The first column (ground truth (GT)) shows the best possible result of the reconstruction. The second column (input) displays what the network receives as a starting point for the reconstruction. The third column shows the baseline model, which represents an average foot. The following columns show the reconstructions produced by the different networks from the respective input.
![Figure 8. Qualitative results of three point cloud completions from the different networks. Each row shows a different foot from the dataset, rendered from two different views. The first column (ground truth (GT)) shows the best possible result of the reconstruction. The second column (input) displays what the network receives as a starting point for the reconstruction. The third column shows the baseline model, which represents an average foot. The following columns show the reconstructions produced by the different networks from the respective input.](/cms/asset/9d396bd3-79d2-4d8a-b24d-9de08aa196c2/tfws_a_2319796_f0008_c.jpg)
Table 1. Hyperparameters for Shape Inversion (Zhang et al., 2021), PoinTr (Yu et al., 2021), VRCNet (L. Pan et al., 2021), SnowflakeNet (Xiang et al., 2021) and PointAttN (J. Wang et al., 2022).