Abstract
Single-view intrinsic decomposition of fabric images is a meaningful yet challenging task for fabric analysis. However, obtaining sufficient ground-truth intrinsic images for supervised training is costly. In this article, we explore a novel method for decomposing fabric images into reflectance and shading. By introducing the wavelet transform into the proposed CNN model, our method exploits information arising from inherent constraints during training and thereby eliminates the need for ground-truth labels. Based on three assumptions, we describe a new training framework for the network comprising three loss functions: a prior loss, a relative total variation loss, and a constrained shared-consistency loss. The trained model proves highly efficient, requiring only 0.007 s to process one image. The results show that our method can separate the color and fine texture of fabric images to a certain extent. Finally, we propose several applications of the trained model.
Disclosure statement
No potential conflict of interest was reported by the author(s).