Among generative models, generative adversarial networks (GANs) have become one of the most influential families in artificial intelligence and deep learning. Since their introduction by Ian Goodfellow and colleagues in 2014, GANs have been used to generate realistic data, from images and video to audio, across a wide range of applications. Their adversarial training paradigm, in which a generator and a discriminator are trained against each other, allows GANs to learn complex data distributions without explicit supervision. Despite these strengths, however, GAN training is inherently difficult, suffering from instability, mode collapse, and convergence problems. This review traces the evolution of GAN architectures, critically discusses the main challenges in GAN training, surveys the most promising developments for improving stability and performance, and examines emerging trends such as diffusion models and hybrid frameworks. Finally, the paper outlines directions for future research aimed at strengthening the theoretical foundations, stability, and general applicability of GANs for practical use.
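The adversarial paradigm the abstract describes can be illustrated with a minimal sketch (not taken from the paper): a one-parameter-per-weight linear generator and logistic discriminator trained on 1-D Gaussian data, with hand-derived gradients for the standard non-saturating GAN losses. All names and hyperparameters here are illustrative assumptions.

```python
# Toy GAN on 1-D Gaussian data (illustrative sketch, not the paper's method).
# Generator G(z) = w_g*z + b_g; Discriminator D(x) = sigmoid(w_d*x + b_d).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

w_g, b_g = 0.1, 0.0          # generator parameters (assumed inits)
w_d, b_d = 0.1, 0.0          # discriminator parameters (assumed inits)
lr, batch, steps = 0.05, 64, 2000
target_mu, target_sigma = 3.0, 1.0   # real-data distribution N(3, 1)

for _ in range(steps):
    # --- Discriminator step: minimize -log D(x) - log(1 - D(G(z))) ---
    x = rng.normal(target_mu, target_sigma, batch)   # real samples
    z = rng.normal(0.0, 1.0, batch)                  # noise
    g = w_g * z + b_g                                # fake samples
    d_real = sigmoid(w_d * x + b_d)
    d_fake = sigmoid(w_d * g + b_d)
    grad_real = d_real - 1.0     # d/da of -log(sigmoid(a))
    grad_fake = d_fake           # d/da of -log(1 - sigmoid(a))
    w_d -= lr * np.mean(grad_real * x + grad_fake * g)
    b_d -= lr * np.mean(grad_real + grad_fake)

    # --- Generator step: non-saturating loss, minimize -log D(G(z)) ---
    z = rng.normal(0.0, 1.0, batch)
    g = w_g * z + b_g
    d_fake = sigmoid(w_d * g + b_d)
    grad_logit = d_fake - 1.0    # d/da of -log(sigmoid(a))
    w_g -= lr * np.mean(grad_logit * w_d * z)
    b_g -= lr * np.mean(grad_logit * w_d)

fake = w_g * rng.normal(0.0, 1.0, 10_000) + b_g
print(f"generated mean ~ {fake.mean():.2f} (target {target_mu})")
```

Because the generator is pushed to produce samples the discriminator scores as real, its output distribution drifts toward the target mean; raising `lr` in this toy also makes the training instability the abstract mentions easy to provoke.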
@article{ramanizan2025,
author = {Balqis Nabilah Binti Ramanizan},
title = {{Generative Adversarial Networks (GANs): A Comprehensive Review of Architectures, Training Challenges, and Advancements}},
journal = {Journal of Artificial Intelligence and Capsule Networks},
volume = {7},
number = {4},
pages = {304--319},
year = {2025},
publisher = {Inventive Research Organization},
doi = {10.36548/jaicn.2025.4.001},
url = {https://doi.org/10.36548/jaicn.2025.4.001}
}