Department of Computer and Information Sciences
Permanent URI for this community: http://itsupport.cu.edu.ng:4000/handle/123456789/28739
DEVELOPMENT OF AN ENHANCED EXPLAINABLE TRANSFORMER INPUT SAMPLING METHOD FOR NOISY IMAGE ENVIRONMENTS
ADJAOKE, WO-O TERTIUS (Covenant University, Ota, 2025-08) — Covenant University Dissertation

Vision Transformers (ViTs) excel in computer vision tasks due to their ability to capture long-range dependencies, but their complex decision-making processes pose interpretability challenges, especially for low-quality images affected by noise, blur, or degradation. This study enhances the Transformer Input Sampling (TIS) method, introducing RobustTIS to improve explanation reliability in such conditions. The original TIS was evaluated on clean and degraded ImageNet images, revealing reduced performance under severe corruptions such as Gaussian noise and motion blur. RobustTIS incorporates attention-guided clustering, adaptive token selection, noise-tolerant scoring, and sparsity regularization, achieving improved insertion (0.6755 vs. 0.6643), deletion (0.1849 vs. 0.1926), and sparseness (0.3360 vs. 0.3163) scores, with equivalent max-sensitivity (0.1177). Applied to the PathMNIST and UC Merced datasets, RobustTIS generated precise saliency maps for medical and surveillance tasks despite low resolution and noise. However, it requires higher computational resources (31.40 s, 1689.25 MB peak GPU memory) than TIS (8.85 s, 801.04 MB). Quantitative and qualitative evaluations confirm RobustTIS's enhanced robustness and interpretability, though its computational cost indicates a trade-off between reliability and efficiency. This work advances ViT-based explainable AI, offering practical benefits for medical imaging and surveillance, and lays a foundation for future research into efficient, trustworthy AI systems in challenging imaging environments.
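The insertion and deletion scores cited in the abstract are standard saliency-evaluation metrics: insertion progressively reveals the most salient pixels of the input and tracks how quickly the model's confidence recovers (higher is better), while deletion removes them and tracks how quickly confidence collapses (lower is better). The sketch below is a minimal NumPy illustration of the insertion variant, not code from the thesis; `score_fn` is a hypothetical stand-in for the classifier's confidence on the target class.

```python
import numpy as np

def insertion_auc(image, saliency, score_fn, baseline=0.0, steps=20):
    """Insertion metric sketch: starting from a `baseline` image,
    reveal pixels of `image` in decreasing order of `saliency` and
    average `score_fn` (the model's confidence) over all reveal steps.
    A faithful saliency map restores confidence quickly, giving a
    higher area under this insertion curve."""
    order = np.argsort(saliency.ravel())[::-1]   # most salient pixels first
    flat_img = image.astype(float).ravel()
    canvas = np.full_like(flat_img, baseline)    # fully-masked starting point
    n = flat_img.size
    scores = []
    for k in range(steps + 1):
        reveal = order[: int(round(k / steps * n))]
        canvas[reveal] = flat_img[reveal]        # un-mask the next fraction
        scores.append(score_fn(canvas.reshape(image.shape)))
    return float(np.mean(scores))
```

The deletion metric is the mirror image: start from the full image, blank out pixels in the same saliency order, and average the resulting confidences, so lower values mean the map located the evidence the model relies on.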