Figures & data
Figure 1. Classification of images by the VGG-16 net. Top row: original images from the Caltech 101 dataset (Fei-Fei, Fergus, and Perona Citation2004); bottom row: the same images with a random uniform illumination cast applied.
![Figure 1. Classification of images by the VGG-16 net. Top row: original images from the Caltech 101 dataset (Fei-Fei, Fergus, and Perona Citation2004); bottom row: the same images with a random uniform illumination cast applied.](/cms/asset/43e03aab-9aa2-454d-bcd9-69b5647d7cda/uaai_a_1730630_f0001_oc.jpg)
Figure 2. Schematic representation of GoogLeNet. Image credit: Szegedy et al. (Citation2015).
![Figure 2. Schematic representation of GoogLeNet. Image credit: Szegedy et al. (Citation2015).](/cms/asset/77f0a1a0-2b64-48d8-b5fc-1592648ecd38/uaai_a_1730630_f0002_oc.jpg)
Table 1. Results obtained on the SFU Grayball dataset, with a comparison to state-of-the-art methods. The first two sections correspond to statistics-based and learning-based methods, respectively.
Table 2. Results obtained on the reprocessed ColorChecker dataset, with a comparison to state-of-the-art methods. The first two sections correspond to statistics-based and learning-based methods, respectively.
Figure 4. Example images from the Grayball dataset before (left) and after (right) removing the illumination color cast using the algorithm presented in this paper.
![Figure 4. Example images from the Grayball dataset before (left) and after (right) removing the illumination color cast using the algorithm presented in this paper.](/cms/asset/947ec392-ea85-4c50-81c3-5ea95fd60e3f/uaai_a_1730630_f0004_oc.jpg)