The generative network generates candidates while the discriminative network evaluates them. The contest operates in terms of data distributions. Typically, the generative network learns to map from a latent space to a data distribution of interest, while the discriminative network distinguishes candidates produced by the generator from the true data distribution. The generative network's training objective is to increase the error rate of the discriminative network, i.e., to "fool" the discriminator by producing novel candidates that the discriminator judges to belong to the true data distribution rather than to be synthesized.
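In the original formulation, this contest is expressed as a two-player minimax game over a value function V(D, G), where D(x) is the discriminator's estimate of the probability that x is real and G(z) is the generator's output for a latent sample z:

```latex
% Standard GAN minimax objective (Goodfellow et al., 2014):
% p_data is the data distribution, p_z the prior over the latent space.
\min_G \max_D V(D, G) =
    \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\!\left[\log\!\left(1 - D(G(z))\right)\right]
```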
A known dataset serves as the initial training data for the discriminator. Training involves presenting the discriminator with samples from the training dataset until it achieves acceptable accuracy. The generator is trained based on whether it succeeds in fooling the discriminator. Typically the generator is seeded with randomized input sampled from a predefined latent space (e.g., a multivariate normal distribution). Thereafter, candidates synthesized by the generator are evaluated by the discriminator. Backpropagation is applied in both networks so that the generator produces better images while the discriminator becomes more skilled at flagging synthetic ones. The generator is typically a deconvolutional neural network, and the discriminator is a convolutional neural network.
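The following is a minimal PyTorch sketch of this training loop. The network sizes, learning rates, and data source are illustrative assumptions (a toy 2-D Gaussian stands in for the training dataset), not a canonical implementation.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2  # illustrative sizes (assumed, not canonical)

# Generator: maps latent noise to a candidate sample.
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def real_batch(n=64):
    # Stand-in for "samples from the training dataset": a 2-D Gaussian blob.
    return torch.randn(n, data_dim) * 0.5 + 2.0

for step in range(5000):
    # --- Train the discriminator on real and synthesized samples ---
    real = real_batch()
    z = torch.randn(real.size(0), latent_dim)            # latent-space noise
    fake = G(z).detach()                                  # freeze G for this step
    loss_D = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake), torch.zeros(real.size(0), 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # --- Train the generator to fool the discriminator ---
    # (non-saturating variant: label the fakes as "real" in the generator loss)
    z = torch.randn(real.size(0), latent_dim)
    loss_G = bce(D(G(z)), torch.ones(real.size(0), 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```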
Applications
GAN applications have increased rapidly.
Fashion and advertising
GANs can be used to create photos of imaginary fashion models, with no need to hire a model, photographer, or makeup artist, or to pay for a studio and transportation. GANs can be used to create fashion advertising campaigns featuring more diverse groups of models, which may increase intent to buy among people resembling the models.
Science
GANs can improve astronomical images and simulate gravitational lensing for dark matter research. They were used in 2019 to successfully model the distribution of dark matter in a particular direction in space and to predict the gravitational lensing that will occur.
GANs have also been proposed as a fast and accurate way of generating simulated showers of particles in the calorimeters of high-energy physics experiments.
Video games
In 2018, GANs reached the video game modding community, as a method of up-scaling low-resolution 2D textures in old video games by recreating them in 4K or higher resolutions via image training, and then down-sampling them to fit the game's native resolution (with results resembling the supersampling method of anti-aliasing). With proper training, GANs provide a clearer and sharper 2D texture image, orders of magnitude higher in quality than the original, while fully retaining the original's level of detail, colors, etc. Known examples of extensive GAN usage include Final Fantasy VIII, Final Fantasy IX, Resident Evil REmake HD Remaster, and Max Payne.
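As a rough sketch of that workflow (GAN up-scaling followed by down-sampling to the native resolution), assuming a pretrained super-resolution generator is available as a callable `sr_generator` (a hypothetical placeholder; modders typically used tools built on models such as ESRGAN):

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision.transforms.functional import to_tensor, to_pil_image

def enhance_texture(path_in, path_out, sr_generator, native_size=(256, 256)):
    """Upscale a low-resolution texture with a GAN generator, then
    downsample the result back to the game's native resolution."""
    # sr_generator: pretrained super-resolution GAN generator (hypothetical placeholder).
    low_res = to_tensor(Image.open(path_in).convert("RGB")).unsqueeze(0)

    with torch.no_grad():
        high_res = sr_generator(low_res)      # e.g. a 4x super-resolution output

    # Downsample to the resolution the game engine expects,
    # roughly analogous to supersampling anti-aliasing.
    native = F.interpolate(high_res, size=native_size, mode="area")
    to_pil_image(native.squeeze(0).clamp(0, 1)).save(path_out)
```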
Concerns about malicious applications
Concerns have been raised about the potential use of GAN-based human image synthesis for sinister purposes, e.g., to produce fake and/or incriminating photographs and videos. GANs can be used to generate unique, realistic profile photos of people who do not exist, in order to automate creation of fake social media profiles. As of May 2019, the state of California is considering a bill that would ban the use of GANs and related technologies to make fake pornography without the consent of the people depicted. DARPA's Media Forensics program studies ways to counteract fake media, including fake media produced using GANs.
Miscellaneous applications
GANs that produce photorealistic images can be used to visualize interior design, industrial design, shoes, bags, and clothing items or items for computer games' scenes. Such networks were reported to be used by Facebook.
GANs can reconstruct 3D models of objects from images, and model patterns of motion in video.
GANs can be used to age face photographs to show how an individual's appearance might change with age.
GANs can also be used to transfer map styles in cartography.
StyleGAN itself is currently the sixth most trending Python project on GitHub.
A variation of GANs is used to train a network to generate optimal control inputs for nonlinear dynamical systems. Here the discriminative network is known as a critic, which checks the optimality of the solution, while the generative network is known as an adaptive network, which generates the optimal control. The critic and adaptive network train each other to approximate a nonlinear optimal control, as in the sketch below.
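A minimal sketch of that critic/adaptive-network pairing for a toy scalar nonlinear system, written in PyTorch; the dynamics, cost function, discount factor, and network sizes are illustrative assumptions, not taken from any particular published controller.

```python
import torch
import torch.nn as nn

dt, gamma = 0.05, 0.99  # assumed step size and discount factor

def dynamics(x, u):
    # Toy nonlinear system (assumed): x_next = x + dt * (-x^3 + u)
    return x + dt * (-x**3 + u)

def stage_cost(x, u):
    return x**2 + 0.1 * u**2  # quadratic cost on state and control

# Adaptive network: proposes a control input for each state.
actor = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
# Critic: estimates the cost-to-go, used to check the optimality of the control.
critic = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.rand(128, 1) * 4 - 2          # random training states in [-2, 2]

    # Critic update: fit a Bellman-style consistency condition.
    u = actor(x).detach()
    target = stage_cost(x, u) + gamma * critic(dynamics(x, u)).detach()
    loss_c = ((critic(x) - target) ** 2).mean()
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

    # Adaptive-network update: choose controls the critic scores as near-optimal.
    u = actor(x)
    loss_a = (stage_cost(x, u) + gamma * critic(dynamics(x, u))).mean()
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
```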
GANs have been used to visualize the effect that climate change will have on specific houses.
A GAN model called Speech2Face can reconstruct an image of a person's face after listening to their voice.
History
Adversarial machine learning has other uses besides generative modeling and can be applied to models other than neural networks. The general idea of learning via competition between players dates back to at least 1959 with the influential work of Arthur Samuel, demonstrating that algorithms could learn to play checkers via adversarial self-play.
Ian Goodfellow is recognized by several sources as having invented GANs in 2014. His paper included the first working implementation of a generative model based on adversarial networks, as well as a game-theoretic analysis establishing that the method is sound. However, there is ongoing debate over who invented which aspects of GANs.
Some academic review papers about GANs assign credit for the idea to Ian Goodfellow and make no mention of Juergen Schmidhuber. Ian Goodfellow's own peer-reviewed GAN paper mentions Schmidhuber's unsupervised adversarial technique called predictability minimization (PM), claiming that PM is not a minimax game. Schmidhuber disputes this claim, citing Equation 2 of the 1996 paper on PM. He also formulates GANs as special cases of his Adversarial Curiosity (1990). Schmidhuber interrupted a presentation by Goodfellow in 2016, demanding more credit for his earlier work. Goodfellow himself states that the most direct inspiration for GANs was noise-contrastive estimation, which uses the same loss function as GANs and which Goodfellow studied during his PhD between 2010 and 2014.
Other people had similar ideas but did not develop them in the same way. An idea involving adversarial networks was published in a 2010 blog post by Olli Niemitalo. This idea was never implemented and, because it did not involve stochasticity in the generator, was not a generative model. It is now known as a conditional GAN or cGAN. An idea similar to GANs was used to model animal behavior by Li, Gauci and Gross in 2013.
In 2017, a GAN was used for image enhancement focusing on realistic textures rather than pixel accuracy, producing a higher image quality at high magnification. Also in 2017, the first convincingly realistic GAN-generated faces were produced; these were exhibited in February 2018 at the Grand Palais. Faces generated by StyleGAN in 2019 drew comparisons with deepfakes.
Beginning in 2017, GAN technology began to make its presence felt in the fine arts arena with the appearance of a newly developed implementation which was said to have crossed the threshold of being able to generate unique and appealing abstract paintings, and was thus dubbed a "CAN", for "creative adversarial network". A GAN system was used to create the 2018 painting Edmond de Belamy, which sold for US$432,500. An early 2019 article by members of the original CAN team discussed further progress with that system, and also considered the overall prospects for AI-enabled art.
In May 2019, researchers at Samsung demonstrated a GAN-based system that produces videos of a person speaking given only a single photo of that person.