Abstract
Heritage images often contain large, irregular missing regions caused by long-term degradation. Conventional gap-filling methods repair these regions but leave visible seams, while diffusion and transformer models incur high computational expense and prolonged inference time. A GAN-based restoration framework is therefore built to handle irregular missing regions while preserving delicate brush strokes and structural lines at higher speed. The proposed framework makes use of Adaptive Semantic Prior Injection (ASPI), Topology-Aware Semantic Refinement (TASR), and Entropy-Guided Self-Supervision (EGSS). ASPI contextualizes semantic guidance, TASR enforces line and edge continuity, and EGSS emphasizes uncertain regions during training. Self-Supervised Prior Learning (SSPL) learns semantics from masked images without labels and further aids irregular restoration. The model is trained on the Kaggle Damaged Paintings dataset at a resolution of 256×256 for 50 epochs, masked WikiArt images are used for self-supervised prior learning, and performance is validated on Indian heritage monument images. The model records a Peak Signal-to-Noise Ratio (PSNR) of 32.6 dB, a Structural Similarity Index Measure (SSIM) of 0.92, and a Fréchet Inception Distance (FID) of 18.7, consistently outperforming diffusion- and transformer-based baselines. Additionally, greater texture continuity is achieved than with exemplar-based inpainting and Conditional GAN (cGAN) models using partial convolutions. The ablation study shows that TASR chiefly promotes structural fidelity, while ASPI and EGSS provide additional gains. The architecture remains efficient, with 34.8 million parameters, 108.5 Giga Floating Point Operations (GFLOPs), and an inference time of 0.118 seconds per image. Despite these improvements, the model still struggles with thin cracks, and the boundaries of restored regions remain faintly visible.
Additionally, restorations of large missing regions may not fully correspond to the original content. Future studies will focus on training at higher resolutions and on incorporating diffusion priors appropriately.
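The entropy-guided emphasis behind EGSS can be illustrated with a minimal sketch: local Shannon entropy of the target image is used to up-weight an L1 reconstruction loss in high-uncertainty regions. The patch size, bin count, and linear weighting scheme below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def local_entropy(img, patch=8, bins=16):
    """Shannon entropy of each non-overlapping patch of a grayscale image in [0, 1]."""
    rows, cols = img.shape[0] // patch, img.shape[1] // patch
    ent = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = img[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            hist, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]                       # drop empty bins (0 * log 0 := 0)
            ent[i, j] = -np.sum(p * np.log2(p))
    return ent

def entropy_weighted_l1(pred, target, patch=8, bins=16):
    """L1 reconstruction loss up-weighted where the target has high local entropy."""
    ent = local_entropy(target, patch, bins)
    weights = 1.0 + ent / np.log2(bins)        # per-patch weights in [1, 2]
    weight_map = np.kron(weights, np.ones((patch, patch)))  # expand to pixel grid
    return float(np.mean(weight_map * np.abs(pred - target)))
```

Because every weight is at least 1, the weighted loss never falls below the plain L1 loss; textured (high-entropy) regions simply contribute proportionally more to the training gradient.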
References
- Li, Yang, Chuanlin Zhang, Yacong Li, Dong Sui, and Maozu Guo. "An Improved Mural Image Restoration Method Based on Diffusion Model." npj Heritage Science 13, no. 1 (2025): 347. https://doi.org/10.1038/s40494-025-01914-5.
- Zhang, Junjie, Shuang Bai, Xianyi Zeng, Kaixuan Liu, and Hua Yuan. "Supporting Historic Mural Image Inpainting by Using Coordinate Attention Aggregated Transformations with U-Net-Based Discriminator." npj Heritage Science 13, no. 1 (2025): 305. https://doi.org/10.1038/s40494-025-01891-9.
- Lyu, Qiongshuai, Na Zhao, Junke Song, Yu Yang, and Yuehong Gong. "Mural Inpainting Via Two-Stage Generative Adversarial Network." npj Heritage Science 13, no. 1 (2025): 188. https://doi.org/10.1038/s40494-025-01710-1.
- Ding, Yuan, Kaijun Wu, and Bin Tian. "Frequency-Domain Information Guidance: Diffusion Models for the Inpainting of Dunhuang Murals." Knowledge-Based Systems 314 (2025): 113188. https://doi.org/10.1016/j.knosys.2025.113188.
- Zhou, Yumeng, Min Guo, and Miao Ma. "Mural Image Restoration with Spatial Geometric Perception and Progressive Context Refinement." Computers & Graphics 130 (2025): 104266. https://doi.org/10.1016/j.cag.2025.104266.
- Zhang, Xiaobo, Donghai Zhai, Tianrui Li, Yuxin Zhou, and Yang Lin. "Image Inpainting Based on Deep Learning: A Review." Information Fusion 90 (2023): 74-94. https://doi.org/10.1016/j.inffus.2022.08.033.
- Sumathi, G., and M. Uma Devi. "Inpainting of Damaged Temple Murals Using Edge- and Line-Guided Diffusion Patch GAN." Frontiers in Artificial Intelligence 7 (2024): 1453847. https://doi.org/10.3389/frai.2024.1453847.
- Rakhimol, V., and P. Uma Maheswari. "Restoration of Ancient Temple Murals Using cGAN and PConv Networks." Computers & Graphics 109 (2022): 100-110. https://doi.org/10.1016/j.cag.2022.11.001.
- Deng, Xiaochao, and Ying Yu. "Ancient Mural Inpainting Via Structure Information Guided Two-Branch Model." Heritage Science 11, no. 1 (2023): 1-17. https://doi.org/10.1186/s40494-023-00972-x.
- Hu, Qiyao, Weilu Huang, Yinyin Luo, Rui Cao, Xianlin Peng, Jinye Peng, and Jianping Fan. "SGRGAN: Sketch-Guided Restoration for Traditional Chinese Landscape Paintings." Heritage Science 12, no. 1 (2024): 1-28. https://doi.org/10.1186/s40494-024-01253-x.
- Zhao, Fanhua, Hui Ren, Ke Sun, and Xian Zhu. "GAN-Based Heterogeneous Network for Ancient Mural Restoration." Heritage Science 12, no. 1 (2024): 418. https://doi.org/10.1186/s40494-024-01517-6.
- Stoean, Ruxandra, Nebojsa Bacanin, Catalin Stoean, and Leonard Ionescu. "Bridging the Past and Present: AI-Driven 3D Restoration of Degraded Artefacts for Museum Digital Display." Journal of Cultural Heritage 69 (2024): 18-26. https://doi.org/10.1016/j.culher.2024.07.008.
- Xiang, Hongyue, Weidong Min, Qing Han, Cheng Zha, Qian Liu, and Meng Zhu. "Structure-Aware Multi-View Image Inpainting Using Dual Consistency Attention." Information Fusion 104 (2024): 102174. https://doi.org/10.1016/j.inffus.2023.102174.
- Xu, Zishan, Xiaofeng Zhang, Wei Chen, Minda Yao, Jueting Liu, Tingting Xu, and Zehua Wang. "A Review of Image Inpainting Methods Based on Deep Learning." Applied Sciences 13, no. 20 (2023): 11189. https://doi.org/10.3390/app132011189.
- Elharrouss, Omar, Rafat Damseh, Abdelkader Nasreddine Belkacem, et al. "Transformer-Based Image and Video Inpainting: Current Challenges and Future Directions." Artificial Intelligence Review 58 (2025): 124. https://doi.org/10.1007/s10462-024-11075-9.
- Liu, Jialu, Maoguo Gong, Yuan Gao, Yiheng Lu, and Hao Li. "Bidirectional Interaction of CNN And Transformer for Image Inpainting." Knowledge-Based Systems 299 (2024): 112046. https://doi.org/10.1016/j.knosys.2024.112046.
- Wang, Yechen, Bin Song, and Zhiyong Zhang. "An Image Inpainting Method Based on Generative Adversarial Networks Inversion and Autoencoder." IET Image Processing 18, no. 4 (2024): 1042-1052. https://doi.org/10.1049/ipr2.13005.
- Chen, Yuantao, Runlong Xia, Kai Yang, and Ke Zou. "MICU: Image Super-Resolution Via Multi-Level Information Compensation and U-Net." Expert Systems with Applications 245 (2024): 123111. https://doi.org/10.1016/j.eswa.2023.123111.
- Li, Shuo, and Mehrdad Yaghoobi. "Self-Supervised Deep Hyperspectral Inpainting with Plug-and-Play and Deep Image Prior Models." Remote Sensing 17, no. 2 (2025): 288. https://doi.org/10.3390/rs17020288.
- Barcelos, Iany Macedo, Taís Bruno Rabelo, Flavia Bernardini, Rodrigo Salvador Monteiro, and Leandro Augusto Frata Fernandes. "From Past to Present: A Tertiary Investigation of Twenty-Four Years of Image Inpainting." Computers & Graphics 123 (2024): 104010. https://doi.org/10.1016/j.cag.2024.104010.
- Zhao, Li, Tongyang Zhu, Chuang Wang, Feng Tian, and Hongge Yao. "Image Inpainting Algorithm Based on Structure-Guided Generative Adversarial Network." Mathematics 13, no. 15 (2025): 2370. https://doi.org/10.3390/math13152370.
- Criminisi, Antonio, Patrick Pérez, and Kentaro Toyama. "Region Filling and Object Removal by Exemplar-Based Image Inpainting." IEEE Transactions on Image Processing 13, no. 9 (2004): 1200-1212. https://doi.org/10.1109/TIP.2004.833105.
- Kaggle. "Damaged and Undamaged Artworks Dataset." Accessed December 2025. https://www.kaggle.com/datasets/pes1ug22am047/damaged-and-undamaged-artworks.
- Kaggle. "WikiArt." Accessed December 2025. https://www.kaggle.com/datasets/steubk/wikiartdas
- Kaggle. "Indian Monuments Image Dataset." Accessed December 2025. https://www.kaggle.com/datasets/danushkumarv/indian-monuments-image-dataset.
