Volume 7 | Issue 4 | December 2025
Published: 21 November 2025
The spread of deepfake content has created distrust, misinformation, and fraud around the world. Seamlessly swapping faces between individuals or generating entirely AI-created fake faces has become easier than ever with tools such as DALL-E, Midjourney, ChatGPT, and Google Gemini. AI-generated obscene and malicious content has become progressively prevalent and widely circulated on social media due to the misuse of generative adversarial techniques. To prevent AI-generated fake images from harming the identity and social integrity of a person or community, this research presents a deep learning model called ‘DeFakeNet’, based on the ResNet50V2 CNN architecture and designed to detect and classify whether a person’s face is real or fake. While past research has relied extensively on pre-trained models and limited datasets, DeFakeNet was trained on a custom-developed dataset titled ‘Real vs Fake Faces Balanced Dataset with Multiple Dataset Splits’, a mixed dataset comprising 10,000 high-quality, balanced real and fake face images. On unseen test data, the proposed model achieved 91.95% accuracy and an AUC score of 97.64%, establishing new benchmarks in this field. This paper presents a critical examination of diverse evaluation metrics, including ROC and Precision-Recall curves, an analysis rarely discussed thoroughly in previous research. Additionally, a performance comparison with current methods demonstrates robust real-world reliability and applicability to the detection of evolving deepfakes.
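As context for the reported AUC of 97.64%: the ROC AUC of a binary real-vs-fake classifier can be computed directly from per-image labels and predicted probabilities as the probability that a randomly chosen fake image receives a higher score than a randomly chosen real one. The sketch below is purely illustrative; the labels and scores are hypothetical and do not come from the DeFakeNet evaluation.

```python
# Illustrative sketch of ROC AUC for a binary real/fake classifier.
# Labels and scores below are hypothetical, not the paper's results.

def roc_auc(labels, scores):
    """AUC = probability a random positive (fake) outranks a random
    negative (real); tied scores count as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 1 = fake face, 0 = real face (made-up model outputs)
y_true = [1, 1, 1, 0, 0, 0]
y_score = [0.95, 0.80, 0.40, 0.55, 0.20, 0.10]
print(roc_auc(y_true, y_score))  # → 0.8888888888888888 (8 of 9 pairs ranked correctly)
```

In practice a library routine such as scikit-learn's `roc_auc_score` would be used on the full test split; this pairwise formulation simply makes the meaning of the metric explicit.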
Keywords: DeFakeNet, Deep Learning, Deepfake Detection, Peaceful Society, Social Security, Social Safety