- Optimizing GAN Training for 3D Seismic Microstructure Generation
- Publisher: European Association of Geoscientists & Engineers
- Source: Conference Proceedings, Eighth EAGE High Performance Computing Workshop, Sep 2024, Volume 2024, pp. 1-3
Abstract
In computational geophysics, generating precise 3D microstructures from seismic data is crucial for detailed subsurface analysis. Traditional methods often fall short in achieving the necessary resolution, but Generative Adversarial Networks (GANs), specifically SliceGAN, have proven effective.
These networks enable the generation of large volumes of statistically representative microstructures, improving the simulation of material properties based on their microstructural traits. However, efficient GAN training is critical for both feasibility and accuracy. This study introduces an optimized GAN training method that uses a distributed data parallel (DDP) strategy within the PyTorch Lightning framework to leverage modern GPU computational power. The original SliceGAN code was substantially adapted to incorporate PyTorch Lightning, enabling DDP across multiple GPUs, which markedly reduces training times and improves scalability. The method was tested on NVIDIA V100 and A100 GPUs, demonstrating near-linear scalability and a potential speedup of 48 times with eight A100 GPUs. This optimized training process notably improves the generation of the complex, high-fidelity 3D microstructures essential for geophysical analysis. It also highlights the advantages of PyTorch Lightning in scenarios requiring high scalability and rapid execution, offering substantial benefits for geophysical research and exploration.
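As a rough illustration of how the reported figures compose (a sketch only; the abstract does not state its baseline, so the per-device factor below is an assumption): near-linear DDP scaling means throughput grows almost proportionally with GPU count, so a 48x overall speedup on eight A100s is consistent with each A100 contributing roughly a 6x gain over the baseline configuration.

```python
def ddp_speedup(n_gpus: int, per_gpu_factor: float, efficiency: float = 1.0) -> float:
    """Overall speedup under data-parallel training: each GPU contributes
    near-linearly, discounted by a scaling-efficiency factor."""
    return n_gpus * per_gpu_factor * efficiency

# Assumption (not stated in the abstract): a single A100 is ~6x faster than
# the baseline setup; perfect linear scaling across 8 GPUs then reproduces
# the reported ~48x figure.
print(ddp_speedup(8, 6.0))  # 48.0

# With 90% scaling efficiency, the same setup would yield ~43x instead.
print(round(ddp_speedup(8, 6.0, 0.9), 1))  # 43.2
```

The point of the efficiency parameter is that "near-linear" scalability, as reported here, corresponds to an efficiency close to 1.0; communication overhead between GPUs is what pushes real-world DDP runs below the ideal line.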