Generating Ukiyo-e Art using CycleGANs
Style transfer is a computer vision technique that recomposes the content of one image in the style of another. Image-to-image translation aims to learn the mapping between an input image and an output image from a training set of aligned image pairs. However, paired training data is not always available.
CycleGANs enable learning a mapping from one domain X to another domain Y in the absence of paired training data.
CycleGAN eliminates the need for a paired image in the target domain by performing a two-step transformation of the source-domain image: first mapping it to the target domain, then mapping it back to the original. Alongside its two generators and two discriminators, the architecture adds a cycle-consistency constraint: an image produced by the first generator, when fed to the second generator, should reconstruct the original image, and the reverse must also hold. A sketch of this loss follows below.
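As a concrete illustration, here is a minimal PyTorch sketch of the cycle-consistency loss. The generators are stand-in `nn.Identity` modules rather than the real networks, and the weight λ = 10 follows the value used in the CycleGAN paper.

```python
import torch
import torch.nn as nn

# Placeholders for the two generators: G maps domain X -> Y, F maps Y -> X.
# In practice these would be ResNet-based generators; Identity modules keep
# this sketch self-contained and runnable.
G = nn.Identity()  # stand-in for the X -> Y generator
F = nn.Identity()  # stand-in for the Y -> X generator

l1 = nn.L1Loss()

def cycle_consistency_loss(real_x, real_y, lambda_cyc=10.0):
    """Forward cycle: x -> G(x) -> F(G(x)) should reconstruct x.
    Backward cycle: y -> F(y) -> G(F(y)) should reconstruct y."""
    fake_y = G(real_x)   # translate X -> Y
    rec_x = F(fake_y)    # translate back Y -> X
    fake_x = F(real_y)   # translate Y -> X
    rec_y = G(fake_x)    # translate back X -> Y
    return lambda_cyc * (l1(rec_x, real_x) + l1(rec_y, real_y))

# Example: a batch of 4 RGB images at 256x256 in each domain
x = torch.randn(4, 3, 256, 256)
y = torch.randn(4, 3, 256, 256)
loss = cycle_consistency_loss(x, y)
```

This term is added to the usual adversarial losses of the two discriminators, pushing the generators toward translations that are invertible rather than arbitrary mappings into the target domain.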
In this project, I implemented the CycleGAN paper to generate variations of a real-world image that look like Ukiyo-e art. I trained the model on ~7,000 unpaired images and tested it on ~1,000 images in both domains, producing the results shown below.
Figure: input real-world image and Ukiyo-e art output.
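For completeness, here is a hedged sketch of how a trained photo-to-Ukiyo-e generator might be applied at test time. The generator placeholder, checkpoint name, and preprocessing choices are assumptions for illustration, not the exact pipeline used in this project.

```python
import torch
import torch.nn as nn
from PIL import Image
from torchvision import transforms

# Placeholder for the trained photo -> Ukiyo-e generator. In practice this
# would be the trained network with weights restored from a checkpoint,
# e.g. G.load_state_dict(torch.load("G_photo2ukiyoe.pth")); the class and
# checkpoint name here are hypothetical.
G = nn.Identity()
G.eval()

# Typical CycleGAN preprocessing: resize/crop to 256x256, scale to [-1, 1].
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(256),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])

# Stand-in for a real photograph so the sketch runs without external files.
img = Image.new("RGB", (300, 300), "white")
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    fake = G(batch)

# Undo the [-1, 1] normalization before saving the stylized result.
out = (fake.squeeze(0) * 0.5 + 0.5).clamp(0, 1)
transforms.ToPILImage()(out).save("ukiyoe.png")
```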