Enhancing seismic data resolution is a crucial step for geological interpretation and imaging. Deep learning–driven resolution enhancement primarily depends on sophisticated network architectures and extensive datasets. A lightweight seismic super-resolution model based on contrastive learning and knowledge distillation is proposed. Knowledge distillation is implemented by training a compact student network to mimic a powerful teacher model, thereby reducing reliance on extensive datasets and complex architectures. Contrastive learning is leveraged to align the bottleneck features encoded by the teacher network with those from the student network across different noisy inputs. The student network's total loss comprises a supervised loss against ground-truth labels, a distillation loss against the teacher's pseudo-labels, and a feature-matching loss derived from the bottleneck features of both networks. Comparative experiments were conducted on four field datasets and 3200 pairs of slices extracted from 800 pairs of synthetic three-dimensional seismic cubes. Experimental results demonstrate that the proposed model achieves performance comparable to or better than the comparison models in noise suppression and weak-signal recovery, even with only a fraction of the parameters and training data of the reference model.
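The three-term student loss described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the loss weights (`w_sup`, `w_distill`, `w_feat`), the use of mean-squared error for the supervised and distillation terms, and a cosine-similarity feature-matching term are all assumptions made for illustration.

```python
import numpy as np

def total_student_loss(student_out, ground_truth, teacher_out,
                       student_feat, teacher_feat,
                       w_sup=1.0, w_distill=0.5, w_feat=0.1):
    """Hypothetical combination of the three loss terms from the abstract.

    student_out / teacher_out: super-resolved seismic patches.
    student_feat / teacher_feat: bottleneck feature maps of each network.
    The weights and the specific loss forms are illustrative assumptions.
    """
    # Supervised loss against ground-truth high-resolution labels
    l_sup = np.mean((student_out - ground_truth) ** 2)

    # Distillation loss against the teacher's pseudo-labels
    l_distill = np.mean((student_out - teacher_out) ** 2)

    # Feature-matching loss: encourage the student's bottleneck features
    # to align with the teacher's (1 - cosine similarity, as an assumption)
    s = student_feat.ravel()
    t = teacher_feat.ravel()
    cos = np.dot(s, t) / (np.linalg.norm(s) * np.linalg.norm(t) + 1e-8)
    l_feat = 1.0 - cos

    return w_sup * l_sup + w_distill * l_distill + w_feat * l_feat
```

When the student exactly reproduces both the labels and the teacher's features, all three terms vanish and the total loss approaches zero; mismatches in any term raise it.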