
Volume - 7 | Issue - 2 | June 2025

An Interpretability Pipeline for Image Forgery Localization using GAN-Generated Forgeries and Grad-CAM (Open Access)
Samel M., Mallikarjuna Reddy A.
Pages: 290-314
Cite this article
Samel, M., and A. Mallikarjuna Reddy. "An Interpretability Pipeline for Image Forgery Localization using GAN-Generated Forgeries and Grad-CAM." Journal of Innovative Image Processing 7, no. 2 (2025): 290-314.
Published
27 May 2025
Abstract

This study presents a novel interpretability pipeline for image forgery localization that integrates GAN-generated adversarial forgeries with Grad-CAM visual explanations. The objective is to assess the capability of a deep learning classifier not only to detect manipulated regions in digital images but also to localize them spatially. A Deep Convolutional GAN (DCGAN) is trained to generate realistic forged patches, which are synthetically embedded into clean images to simulate new forgery instances. These synthetic images are then analyzed using an EfficientNet-based binary classifier. To elucidate the spatial focus of the model, Grad-CAM is employed to visualize class-discriminative regions of interest. The analysis incorporates metrics such as attention scores, IoU, recall, F1 score, MSE, and SSIM, enabling comprehensive comparisons between heat maps and ground-truth forged areas. Despite high attention scores, the results indicate poor localization performance, with IoU and pixel-wise F1 scores at zero. These findings suggest that while the classifier can identify forged images, Grad-CAM lacks the accuracy necessary for precise manipulation indication. Layer-wise visualization analysis further reveals that the deep layers of the model capture high-level features but favor coarse localization over spatial accuracy. This study provides evidence that GAN-generated examples can expose significant limits of current interpretability methods. The findings emphasize a disconnect between visual saliency and actual spatial alignment, underscoring the necessity for more refined explanatory methods in image forensics. This framework offers a scalable testbed for future interpretability benchmarking in adversarial scenarios and contributes to the development of more explainable and robust AI models in high-stakes visual domains. The experimental results reveal a stark contrast between high Grad-CAM attention scores and low spatial IoU, indicating a disparity between focus and true localization. Although the classifier reliably detects forged images, its spatial interpretation lacks precision. These insights underscore the need for more granular explanatory tools to enhance forensic trustworthiness. This work establishes a precedent for adversarial interpretability evaluation using synthetic forgeries, with future research potentially focusing on embedding-aware Grad-CAM variants or localized training objectives.
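The page does not reproduce the authors' implementation, but the synthesis step the abstract describes, embedding a GAN-generated patch into a clean image while keeping a binary ground-truth mask of the forged region, can be illustrated with a short NumPy sketch. The function name, image sizes, and patch placement below are illustrative assumptions rather than details taken from the paper, and random arrays stand in for real images and DCGAN samples.

```python
import numpy as np

def embed_forged_patch(clean_img, patch, top, left):
    """Paste a (GAN-generated) patch into a clean image and return the
    composite together with a binary ground-truth mask of the forged region.

    clean_img : float array of shape (H, W, 3) with values in [0, 1]
    patch     : float array of shape (h, w, 3), e.g. a DCGAN sample
    top, left : pixel coordinates of the patch's upper-left corner
    """
    forged = clean_img.copy()
    h, w = patch.shape[:2]
    forged[top:top + h, left:left + w] = patch

    # Mask is 1 exactly where the synthetic patch was inserted.
    mask = np.zeros(clean_img.shape[:2], dtype=np.uint8)
    mask[top:top + h, left:left + w] = 1
    return forged, mask

# Example: a 64x64 synthetic patch embedded into a 256x256 clean image.
clean = np.random.rand(256, 256, 3)        # stand-in for a real clean image
fake_patch = np.random.rand(64, 64, 3)     # stand-in for a DCGAN-generated patch
forged_img, gt_mask = embed_forged_patch(clean, fake_patch, top=96, left=128)
```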

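Similarly, the comparison between Grad-CAM heat maps and ground-truth forged regions that yields the reported IoU and pixel-wise F1 scores can be sketched as follows. The 0.5 binarization threshold and the reading of "attention score" as the mean CAM activation inside the forged region are assumptions made for illustration; the paper's exact definitions are not given on this page.

```python
import numpy as np

def localization_metrics(cam, gt_mask, threshold=0.5):
    """Score a Grad-CAM map against the ground-truth forged-region mask.

    cam     : float array (H, W), Grad-CAM activations rescaled to [0, 1]
    gt_mask : uint8 array (H, W), 1 inside the embedded forged patch
    """
    pred = (cam >= threshold).astype(np.uint8)  # binarize the heat map

    tp = np.logical_and(pred == 1, gt_mask == 1).sum()
    fp = np.logical_and(pred == 1, gt_mask == 0).sum()
    fn = np.logical_and(pred == 0, gt_mask == 1).sum()

    iou = tp / (tp + fp + fn + 1e-8)
    precision = tp / (tp + fp + 1e-8)
    recall = tp / (tp + fn + 1e-8)
    f1 = 2 * precision * recall / (precision + recall + 1e-8)

    # One plausible reading of the abstract's "attention score": the mean
    # CAM activation inside the forged region (the paper may define it differently).
    attention = float(cam[gt_mask == 1].mean()) if gt_mask.any() else 0.0

    return {"iou": iou, "pixel_f1": f1, "recall": recall, "attention": attention}
```

With a map like this, a high `attention` value combined with near-zero `iou` and `pixel_f1` reproduces the kind of saliency-versus-localization gap the abstract reports.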
Keywords

Image Forgery Detection, GAN, Grad-CAM, Interpretability, Deep Learning, Attention Score, Adversarial Forgeries, EfficientNet, Localization, Explainable AI

Article Processing Charges

Journal of Innovative Image Processing (JIIP) is an open access journal. When a paper is accepted for publication, authors are required to pay an Article Processing Charge (APC) to cover editorial and production costs. The APC for each submission is 400 USD; there are no additional charges for color, length, figures, or other elements.

Category                      Fee
Article Access Charge         30 USD
Article Processing Charge     400 USD
Annual Subscription Fee       200 USD