Generative Adversarial Networks (GANs) have started a revolution in Deep Learning, and today GANs are among the most actively researched topics in Artificial Intelligence. Generative Adversarial Networks for Image-to-Image Translation provides a comprehensive overview of the GAN concept, starting from the original GAN network and moving on to GAN-based systems such as Deep Convolutional GANs (DCGANs), Conditional GANs (cGANs), StackGAN, Wasserstein GANs (WGANs), CycleGANs, and many more. The book also provides readers with detailed real-world applications and common projects built with GANs, along with the corresponding Python code.

A typical GAN system consists of two neural networks: a generator and a discriminator. The two networks compete with each other in a game-theoretic setting. The generator is responsible for producing high-quality images that resemble the ground truth, while the discriminator is responsible for judging whether a given image is real or a fake produced by the generator. As an unsupervised learning architecture, a GAN is a preferred method when labeled data is not available. GANs can generate high-quality images, synthesize human faces from sketches, translate images from one domain to another, enhance images, combine the content of one image with the style of another, alter a face image to show the effects of aging progression, generate images from text, and much more. A GAN can produce output very close to human-generated output in a fraction of a second, and it can efficiently produce high-quality music, speech, and images.
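To make the adversarial generator-discriminator setup concrete, the following is a minimal illustrative sketch in Python, assuming PyTorch and a toy one-dimensional data distribution; it is not one of the book's projects, and the network sizes, learning rates, and data distribution are arbitrary choices for illustration only.

# Minimal GAN sketch (illustrative): a generator and a discriminator
# trained adversarially on a toy 1-D data distribution using PyTorch.
import torch
import torch.nn as nn

latent_dim = 16

# Generator: maps random noise to a fake "sample" (here, a single value).
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

# Discriminator: outputs the probability that its input is real.
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

criterion = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    # Real samples drawn from the "ground truth" distribution N(3, 1).
    real = torch.randn(64, 1) + 3.0
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Discriminator update: label real samples 1 and generated samples 0.
    opt_d.zero_grad()
    loss_d = criterion(discriminator(real), torch.ones(64, 1)) + \
             criterion(discriminator(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator update: try to make the discriminator output 1 on fakes.
    opt_g.zero_grad()
    loss_g = criterion(discriminator(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

The same two-player structure carries over to the image-to-image translation models covered in the book, where the toy linear networks above are replaced by convolutional generators and discriminators operating on images.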