Principal Component Analysis and Deep Learning along Directional Image Gathers for High-Definition Classification of Subsurface Features
- Publisher: European Association of Geoscientists & Engineers
- Source: Conference Proceedings, First EAGE Digitalization Conference and Exhibition, Nov 2020, Volume 2020, pp. 1-5
Abstract
Diffraction imaging has proven to be an attractive method for delivering high-resolution subsurface images containing different types and scales of continuous and discontinuous geometrical objects. For depth domain 3D subsurface models, Koren and Ravve (2011) described an imaging method which is based on the ability to decompose the full recorded seismic wavefield into continuous full-azimuth directivity components in situ at the subsurface image points. This method follows the concept of imaging and analysis in the “Local Angle Domain” and allows us to generate azimuthal directivity gathers, from which we can separate specular and diffracted energies.
As part of the ongoing effort to automatically enhance procedures for classifying directivity-driven image data into N geometrical features, such as continuous reflectors, faults, point diffractors, acquisition noise, and ambient noise, Itan et al. (2017) presented a Deep Learning (DL) approach to this challenging task. This work expands on that method: in addition to vertical-section image patches, we also train the network with horizontal patches. This yields further improvements in classifying the different geometrical features, particularly in areas masked by ambient and coherent noise. We demonstrate our method on seismic data from the Eagle Ford and Barnett unconventional shale plays.
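The patch-based workflow described above can be illustrated with a minimal sketch: patches are extracted from both vertical and horizontal image sections, and Principal Component Analysis reduces them to low-dimensional features that a downstream classifier (here left out) would consume. This is not the authors' implementation; the patch size, stride, number of components, and the synthetic random sections are all illustrative assumptions.

```python
import numpy as np

def extract_patches(section, patch, stride):
    """Slide a window over a 2-D image section and return flattened patches."""
    H, W = section.shape
    out = []
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            out.append(section[i:i + patch, j:j + patch].ravel())
    return np.array(out)

def pca_fit(X, k):
    """Return the patch mean and the top-k principal directions (via SVD)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

rng = np.random.default_rng(0)
# Synthetic stand-ins for a vertical and a horizontal image section;
# in practice these would come from the directional image gathers.
vertical = rng.normal(size=(64, 64))
horizontal = rng.normal(size=(64, 64))

# Pool patches from both orientations, as the abstract proposes.
X = np.vstack([extract_patches(vertical, patch=8, stride=4),
               extract_patches(horizontal, patch=8, stride=4)])

mu, components = pca_fit(X, k=10)
features = (X - mu) @ components.T  # low-dimensional inputs for a classifier
print(features.shape)
```

A deep network trained on such patches would replace the PCA-plus-classifier stage; the sketch only shows how combining the two patch orientations enlarges and diversifies the training set.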