Full-waveform inversion (FWI) is a standard velocity model-building (VMB) technique in seismic exploration, yet conventional implementations struggle to deliver high-resolution three-dimensional (3D) velocity models because of their strong dependence on accurate initial models, which are frequently unavailable in field applications. While data-driven deep learning solutions show potential, they face significant challenges, including prohibitive training-data requirements, heavy computational burdens, and limited generalization. To overcome these limitations, we propose a low-rank adaptive U-Net with attention gates (LAUNet): a novel foundation-model-inspired two-dimensional (2D) U-Net framework that integrates enhanced low-rank adaptation (LoRA) and attention gates, enabling parameter-efficient fine-tuning for 3D seismic VMB. The method processes 2D initial-velocity slices and the corresponding reverse time migration slices extracted from 3D volumes, leveraging self-attention to fuse multimodal features for improved VMB precision. This approach resolves the dimensionality inconsistencies inherent to acquisition geometries while reducing computational cost compared with 3D FWI. Additionally, we introduce an adaptive LoRA fine-tuning strategy that incorporates well-log data. By leveraging pre-trained representations and multimodal inputs, LAUNet achieves high-fidelity velocity prediction with minimal labelled data and demonstrates strong generalization across complex geological scenarios. Experimental validation on the 3D Overthrust and SEAM models demonstrates LAUNet's superiority over traditional FWI: numerical results confirm improved inversion accuracy in deep, complex geological structures alongside better computational efficiency and generalization.
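To make the parameter-efficient fine-tuning idea concrete: the abstract does not give LAUNet's implementation details, but standard LoRA (as in Hu et al.) freezes a pre-trained weight matrix W and trains only a low-rank update B·A with rank r much smaller than the layer dimensions. The NumPy sketch below is illustrative only; the function name `lora_forward` and all shapes are assumptions, not the paper's code.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Forward pass of a linear layer with a LoRA update.

    x: (batch, d_in) inputs
    W: (d_out, d_in) frozen pre-trained weights
    A: (r, d_in), B: (d_out, r) trainable low-rank factors, r << min(d_in, d_out)
    alpha: scaling applied to the low-rank update
    """
    return x @ (W + alpha * (B @ A)).T

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 32, 4
W = rng.standard_normal((d_out, d_in))        # frozen during fine-tuning
A = rng.standard_normal((r, d_in)) * 0.01     # trainable
B = np.zeros((d_out, r))                       # zero-init so the update starts as a no-op
x = rng.standard_normal((8, d_in))

# With B = 0, the adapted layer reproduces the frozen layer exactly,
# so fine-tuning starts from the pre-trained behaviour.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)

# Only r*(d_in + d_out) parameters are trained instead of d_out*d_in.
print(A.size + B.size, "trainable vs", W.size, "frozen")
```

The design point this illustrates is why LoRA enables fine-tuning "with minimal labelled data": here only 384 low-rank parameters are updated against 2048 frozen ones, and the ratio improves further as layer widths grow.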