
Texture recognition under scale and illumination variations

Pages 130-148 | Received 19 Dec 2022, Accepted 26 Sep 2023, Published online: 07 Oct 2023

Figures & data

Figure 1. The sixth-order unilateral contextual neighbourhood Ir used for the CAR model. X marks the current pixel, the bullets are pixels in the neighbourhood, the arrow shows the movement direction, and the grey area indicates acceptable neighbourhood pixels.
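A causal (unilateral) autoregressive prediction over such a neighbourhood can be sketched as below. The specific offsets and the weighted-sum interface are illustrative assumptions for a sixth-order causal neighbourhood, not the exact configuration from the article:

```python
import numpy as np

# Illustrative sixth-order unilateral (causal) neighbourhood: every offset
# (dr, dc) points only to pixels already visited in a raster scan
# (dr < 0, or dr == 0 and dc < 0). The exact offset set is an assumption.
NEIGHBOURHOOD = [(0, -1), (-1, -1), (-1, 0), (-1, 1), (0, -2), (-2, 0)]

def car_prediction(image, weights):
    """Predict each pixel as a weighted sum of its causal neighbours."""
    rows, cols = image.shape
    pred = np.zeros_like(image, dtype=float)
    for r in range(rows):
        for c in range(cols):
            for (dr, dc), w in zip(NEIGHBOURHOOD, weights):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    pred[r, c] += w * image[rr, cc]
    return pred
```

In a CAR model the weights are estimated from the data (e.g. by least squares), and the prediction residuals or parameter estimates serve as texture features.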

Figure 2. Flowchart of the texture analysis algorithm using 2D random field models; the K-L transformation step is optional.
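The optional K-L (Karhunen-Loeve) transformation decorrelates the spectral channels before the random field model is fitted. A minimal sketch, assuming the standard eigen-decomposition formulation (the function name and interface are hypothetical):

```python
import numpy as np

def kl_transform(pixels):
    """Decorrelate multispectral pixels via the Karhunen-Loeve transform.

    pixels: (N, C) array of N pixels with C spectral channels.
    Returns the decorrelated pixels (N, C), largest-variance channel first.
    """
    centred = pixels - pixels.mean(axis=0)
    # Eigen-decomposition of the channel covariance matrix.
    cov = np.cov(centred, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Project onto eigenvectors ordered by decreasing eigenvalue.
    order = np.argsort(eigvals)[::-1]
    return centred @ eigvecs[:, order]
```

After this projection the channel covariance matrix is diagonal, so each spectral channel can be modelled independently.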

Table 1. The sizes of feature vectors of compared textural features.

Figure 5. The appearance of patterns from the UEA database at varying scales; from the left, the scale factors are 50%, 60%, 75%, 90%, and 100%.

Figure 11. The appearance of two veneers from the Wood UTIA BTF database at varying scales; from the left, the scale factors are 50%, 60%, 75%, 90%, and 100%.

Figure 3. Examples of patterns included in the UEA database.

Figure 4. The appearance of patterns from the UEA database under varying illumination spectra (3 columns on the left) and additional acquisition devices (2 columns on the right).

Table 2. Classification accuracy [%] averaged over all scales and illumination conditions on the UEA dataset.

Figure 6. The progression of classification accuracy [%] with decreasing scale differences between training and test sets (UEA dataset). On the left, for one training sample, and on the right, for six training samples per class.

Figure 7. Classification accuracy [%] for all combinations of scales between training and test sets on the UEA dataset; one training sample per class was used.

Figure 8. Classification accuracy [%] on the UEA dataset with one training sample: on the left for a training sample with a scale factor of 1, and on the right with a scale factor of 0.5.

Figure 9. Examples of wood veneers included in the Wood UTIA BTF database.

Figure 10. The appearance of four veneers from the Wood UTIA BTF database under varying illumination directions. The left column is illuminated along the surface normal, and the illumination tilt increases to the right: 0, 30, 60, 60, and 75 degrees, with illumination azimuths of 0, 90, 180, 252, and 345 degrees, respectively.

Table 3. Classification accuracy [%] averaged over all scales and illumination angles on the Wood UTIA BTF dataset.

Figure 12. The progression of classification accuracy [%] with decreasing scale differences between training and test sets (Wood UTIA BTF). On the left, for one training sample, and on the right, for six training samples per class.

Figure 13. Classification accuracy [%] for all combinations of scales between training and test sets on the Wood UTIA BTF dataset; one training sample per class was used.

Figure 14. Classification accuracy [%] on the Wood UTIA BTF dataset with one training sample: on the left for a training sample with a scale factor of 1, and on the right with a scale factor of 0.5.

Table 4. Classification accuracy [%] for different illumination tilts (declination angles from the surface normal) without any scale variation (Wood UTIA BTF).