Computer Engineering, Volume 51, Issue 8, 373 (2025)
Attention Distillation Contrastive Mutual Learning Model for COVID-19 Image Diagnosis
COVID-19 is an illness caused by a strain of the novel coronavirus. Existing COVID-19 imaging diagnostic models face challenges such as a lack of high-quality samples and insufficient exploration of inter-sample relationships. To address these two issues, this paper proposes a novel model called Attention Distillation Contrastive Mutual Learning (ADCML) for COVID-19 diagnosis. First, a progressive data augmentation strategy combining AutoAugment and sample filtering is constructed, which proactively alleviates the shortage of high-quality samples by expanding the number of images while ensuring their quality. Second, the ADCML framework is built, which employs attention distillation to encourage two heterogeneous networks to learn from each other the pathological knowledge highlighted by their respective attention maps. The implicit contrastive relationships among diverse samples are then fully mined to improve the discriminative ability of the extracted features. Finally, a new adaptive model-fusion module is designed to fully exploit the complementarity between the heterogeneous networks and complete the COVID-19 image diagnosis. The proposed model is validated on three publicly available datasets, including Computed Tomography (CT) and X-ray images, achieving accuracies of 89.69%, 98.16%, and 98.91%; F1 scores of 88.62%, 97.58%, and 98.47%; and Area Under the Curve (AUC) values of 88.95%, 97.77%, and 98.90%, respectively. These results show that the ADCML model outperforms mainstream baselines with strong robustness, and that progressive data augmentation, attention distillation, and contrastive mutual learning jointly reinforce one another to improve the final classification performance.
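To make the method description above concrete, the following PyTorch sketch (illustrative only; this abstract page provides no code) shows how the described components could fit together in a single training step: two stand-in heterogeneous backbones, an attention-distillation term that aligns their spatial attention maps, a supervised contrastive term over their embeddings that mines inter-sample relationships, mutual learning through symmetric KL divergence, and a learnable gate for adaptive logit fusion. The backbone choices (resnet18, densenet121), loss forms, and loss weights are all assumptions, not the authors' implementation.

# Hypothetical PyTorch sketch of one ADCML training step; all names, weights, and
# backbone choices are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


def attention_map(features):
    """Spatial attention: channel-wise mean of squared activations, L2-normalised."""
    attn = features.pow(2).mean(dim=1, keepdim=True)            # (B, 1, H, W)
    return F.normalize(attn.flatten(1), dim=1)                  # (B, H*W)


def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Pulls same-class embeddings together and pushes different classes apart."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                                # pairwise similarities
    mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    mask.fill_diagonal_(0)                                       # exclude self-pairs
    logits = sim - torch.eye(len(z), device=z.device) * 1e9      # drop self from softmax
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    denom = mask.sum(dim=1).clamp(min=1)
    return -(mask * log_prob).sum(dim=1).div(denom).mean()


class Branch(nn.Module):
    """One heterogeneous branch: feature maps -> attention map, embedding, logits."""
    def __init__(self, backbone, feat_dim, num_classes=2):
        super().__init__()
        self.backbone = backbone                                 # outputs (B, C, H, W)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        fmap = self.backbone(x)
        emb = self.pool(fmap).flatten(1)
        return attention_map(fmap), emb, self.fc(emb)


def adcml_step(x, y, net_a, net_b, gate, alpha=1.0, beta=1.0, gamma=1.0):
    attn_a, emb_a, logit_a = net_a(x)
    attn_b, emb_b, logit_b = net_b(x)

    # Supervised cross-entropy on each branch.
    ce = F.cross_entropy(logit_a, y) + F.cross_entropy(logit_b, y)

    # Mutual learning: each branch is softly supervised by the other's predictions.
    kl = F.kl_div(F.log_softmax(logit_a, 1), F.softmax(logit_b.detach(), 1), reduction="batchmean") \
       + F.kl_div(F.log_softmax(logit_b, 1), F.softmax(logit_a.detach(), 1), reduction="batchmean")

    # Attention distillation: align the two branches' spatial attention maps
    # (assumes equal spatial size; interpolate one map otherwise).
    attn = F.mse_loss(attn_a, attn_b.detach()) + F.mse_loss(attn_b, attn_a.detach())

    # Contrastive term mining inter-sample relationships in each embedding space.
    con = supervised_contrastive_loss(emb_a, y) + supervised_contrastive_loss(emb_b, y)

    # Adaptive fusion: a learned gate weights the two branches' logits.
    w = torch.softmax(gate, dim=0)
    fused = w[0] * logit_a + w[1] * logit_b

    return ce + F.cross_entropy(fused, y) + alpha * kl + beta * attn + gamma * con


# Usage sketch with stand-in heterogeneous backbones (ResNet-18 and DenseNet-121).
res = nn.Sequential(*list(torchvision.models.resnet18(weights=None).children())[:-2])  # (B, 512, 7, 7)
den = torchvision.models.densenet121(weights=None).features                            # (B, 1024, 7, 7)
net_a, net_b = Branch(res, 512), Branch(den, 1024)
gate = nn.Parameter(torch.zeros(2))
x, y = torch.randn(4, 3, 224, 224), torch.randint(0, 2, (4,))
adcml_step(x, y, net_a, net_b, gate).backward()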
LÜ Jingqin, HU Lang, LIANG Weinan, LI Guangli, ZHANG Hongbin. Attention Distillation Contrastive Mutual Learning Model for COVID-19 Image Diagnosis[J]. Computer Engineering, 2025, 51(8): 373
Received: Dec. 14, 2023
Accepted: Aug. 26, 2025
Published Online: Aug. 26, 2025
Author Email: LÜ Jingqin (jingqinlv@ecjtu.edu.cn)