Volume - 7 | Issue - 2 | June 2025
Published
03 June, 2025
Self-supervised segmentation of panoramic dental radiographs often lacks interpretability, leaving clinicians hesitant to rely on its predictions. The practical application of such segmentation is limited, because its results must be comprehensible to support effective diagnosis and treatment planning. To address this issue, we apply Explainable AI (XAI) techniques such as Grad-CAM, which generate heat maps that highlight significant regions within the radiographs and thereby improve the model's transparency. This approach mitigates the black-box nature of self-supervised models and provides a more trustworthy AI-driven solution. We propose a hybrid strategy that integrates attention-based segmentation with both supervised and self-supervised learning: self-supervised learning extracts features from unlabelled radiographs, and these features are subsequently fine-tuned through supervised learning to achieve accurate segmentation. The model aids in the detection of anomalies such as cavities, fractures, and periodontal disease, thereby improving diagnostic precision. It also facilitates treatment planning and bridges the gap between clinical trust and AI automation, making AI-based dental imaging more practical and acceptable.
Keywords: Contrastive Learning, U-Net, Grad-CAM, Medical Imaging
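To make the Grad-CAM step described in the abstract concrete, the sketch below shows how a heat map can be derived from the gradients of a segmentation model's output with respect to an encoder feature map. This is a minimal illustration only: the TinySegNet stand-in model, the grad_cam helper, and the input size are assumptions for demonstration, not the network or pipeline evaluated in this work.

```python
# Minimal Grad-CAM sketch for a segmentation model (illustrative, not the authors' network).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Toy encoder/decoder used only to show where the Grad-CAM hooks attach."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.dec = nn.Conv2d(32, 1, 1)  # 1-channel mask logits

    def forward(self, x):
        feats = self.enc(x)              # target feature map for Grad-CAM
        return self.dec(feats), feats

def grad_cam(model, image):
    """Return a [0, 1] heat map highlighting regions that drive the predicted mask."""
    model.eval()
    logits, feats = model(image)
    feats.retain_grad()                  # keep gradients of the intermediate features
    logits.sum().backward()              # back-propagate the summed mask score
    # Channel weights = global-average-pooled gradients (standard Grad-CAM weighting).
    weights = feats.grad.mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * feats).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalise to [0, 1]
    return cam.detach()

if __name__ == "__main__":
    model = TinySegNet()
    radiograph = torch.rand(1, 1, 128, 128)   # placeholder panoramic radiograph
    heat_map = grad_cam(model, radiograph)
    print(heat_map.shape)                     # torch.Size([1, 1, 128, 128])
```

In practice, the resulting heat map would be overlaid on the panoramic radiograph so that clinicians can see which regions influenced the segmentation, which is the transparency mechanism the abstract refers to.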