Abstract
Deepfake technology has become a genuine threat in digital media, as it can generate highly realistic manipulated content. It raises serious concerns about misinformation, identity impersonation, and cyber fraud targeting public figures such as politicians, celebrities, and influencers. Deepfakes are produced mainly with Generative Adversarial Networks (GANs), autoencoders, and Convolutional Neural Networks (CNNs): GANs synthesize visual data through adversarial training between a generator and a discriminator, while autoencoders are used for face-swapping and feature extraction. To anticipate and deter the misuse of this technology, this study introduces a deepfake detection system based on a hybrid deep learning approach. The system employs the Xception and EfficientNet models for image-based detection and LSTM networks for temporal-inconsistency analysis. The FaceForensics++ dataset, which contains real and manipulated video samples, provides the training and testing data. The image-based detection module achieved 95% accuracy and the video-based module 87%, demonstrating robust performance in distinguishing genuine content from forged manipulations. The model is also deployed on Streamlit for real-time user interaction, making it suitable for real-world applications in digital forensics and media authentication. This work strengthens the credibility of online information and counters the growing societal threat posed by AI-generated fakes.
References
Suratkar, Shraddha, and Faruk Kazi. "Deep fake video detection using transfer learning approach." Arabian Journal for Science and Engineering 48, no. 8 (2023): 9727-9737.
Hashmi, Mohammad Farukh, B. Kiran Kumar Ashish, Avinash G. Keskar, Neeraj Dhanraj Bokde, Jin Hee Yoon, and Zong Woo Geem. "An exploratory analysis on visual counterfeits using conv-lstm hybrid architecture." IEEE Access 8 (2020): 101293-101308.
Nguyen, Huy H., Fuming Fang, Junichi Yamagishi, and Isao Echizen. "Multi-task learning for detecting and segmenting manipulated facial images and videos." In 2019 IEEE 10th International Conference on Biometrics Theory, Applications and Systems (BTAS), pp. 1-8. IEEE, 2019.
Agarwal, Shruti, Hany Farid, Ohad Fried, and Maneesh Agrawala. "Detecting deep-fake videos from phoneme-viseme mismatches." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 660-661. 2020.
Li, Yuezun, and Siwei Lyu. "Exposing deepfake videos by detecting face warping artifacts." arXiv preprint arXiv:1811.00656 (2018).
Sabir, Ekraam, Jiaxin Cheng, Ayush Jaiswal, Wael AbdAlmageed, Iacopo Masi, and Prem Natarajan. "Recurrent convolutional strategies for face manipulation detection in videos." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 80-87. 2019.
Chugh, Komal, Parul Gupta, Abhinav Dhall, and Ramanathan Subramanian. "Not made for each other: Audio-visual dissonance-based deepfake detection and localization." In Proceedings of the 28th ACM International Conference on Multimedia, 2020.
Rossler, Andreas, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, and Matthias Nießner. "FaceForensics++: Learning to detect manipulated facial images." In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1-11. 2019.
Mary, Amala, and Anitha Edison. "Deep fake detection using deep learning techniques: A literature review." In 2023 International Conference on Control, Communication and Computing (ICCC), pp. 1-6. IEEE, 2023.
Rana, Md Shohel, Mohammad Nur Nobi, Beddhu Murali, and Andrew H. Sung. "Deepfake detection: A systematic literature review." IEEE Access 10 (2022): 25494-25513.
