Volume - 7 | Issue - 4 | December 2025
Published: 07 November 2025
Among generative models, Generative Adversarial Networks (GANs) have been one of the most influential families in artificial intelligence and deep learning. Since their introduction by Ian Goodfellow in 2014, GANs have been used to generate realistic data, ranging from images and videos to audio, across a wide range of applications. The adversarial training paradigm, in which a generator is pitted against a discriminator, allows GANs to learn complex data distributions without explicit supervision. Despite these strengths, however, training GANs is inherently difficult: instability, mode collapse, and convergence problems commonly arise. This contribution reviews how GAN architectures have evolved, critically discusses the main challenges in GAN training, surveys the most promising developments for improving stability and performance, and examines emerging trends such as diffusion models and hybrid frameworks. Furthermore, this paper outlines directions for future research aimed at strengthening the theoretical foundations, stability, and generality of GANs for practical use.
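The adversarial paradigm mentioned above is commonly formalized as a two-player minimax game. In the standard formulation from Goodfellow et al. (2014), a generator $G$ maps noise $z \sim p_z$ to samples while a discriminator $D$ scores how likely an input is to come from the real data distribution $p_{\text{data}}$:

```latex
\min_{G} \max_{D} \; V(D, G)
  = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_{z}(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

The discriminator maximizes $V$ by distinguishing real samples from generated ones, while the generator minimizes it by producing samples the discriminator cannot tell apart; the training instabilities discussed in this review stem largely from the difficulty of finding an equilibrium of this game.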
Keywords: Generative Adversarial Networks (GANs); Generative Models; GAN Architectures; Artificial Intelligence; Deep Learning; Adversarial Training

