Summary

Seismic images exhibit enormous diversity in structural complexity, resolution, and signal-to-noise ratio across surveys. Consequently, convolutional neural network (CNN) based geologic interpretation often lacks adequate generalization capability on such real data images. It is commonly observed that a CNN trained on data from one survey exhibits significant degradation in interpretation accuracy when applied to a new survey never seen during training. This makes production-scale deployment of such models problematic and unreliable. In this paper, we address the generalization issue by exploiting the presence of adversarial samples, defined as visually imperceptible, worst-case perturbations to an image that cause a CNN to misclassify the perturbed image with a high degree of confidence. We show that images from a new survey are likely close to adversarial points for a network optimally trained with legacy data. We then describe a training method that allows a CNN to develop robustness to such adversarial samples, leading to significantly improved generalization. Using examples from salt interpretation during the model building stage on Gulf of Mexico (GOM) datasets, we demonstrate that our training strategy yields very low generalization error and accuracy close to that of a human interpreter on new, previously unseen surveys.
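One common way to construct the worst-case perturbations described above is the fast gradient sign method (FGSM): step the input in the direction of the sign of the loss gradient. The sketch below is purely illustrative — it uses a toy logistic classifier in place of the paper's CNN, and all weights, inputs, and the epsilon value are made-up values; the authors' actual attack and seismic data are not reproduced here.

```python
import numpy as np

# Illustrative FGSM sketch on a toy logistic classifier (NOT the paper's
# CNN or seismic data; all numbers below are made up for demonstration).

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_input(x, w, b, y):
    """Gradient of the binary cross-entropy loss w.r.t. the input x,
    for a single logistic unit p = sigmoid(w.x + b)."""
    p = sigmoid(np.dot(w, x) + b)
    return (p - y) * w  # closed-form d(BCE)/dx for this toy model

def fgsm_perturb(x, w, b, y, eps):
    """FGSM: x_adv = x + eps * sign(grad_x loss).  The perturbation is
    bounded in the L-infinity norm by eps, so for small eps it is
    visually imperceptible yet maximally loss-increasing."""
    return x + eps * np.sign(loss_grad_wrt_input(x, w, b, y))

# A point the toy model classifies correctly with confidence...
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.5]), 1.0
p_clean = sigmoid(np.dot(w, x) + b)           # > 0.5: correct

# ...is misclassified after a bounded FGSM perturbation.
x_adv = fgsm_perturb(x, w, b, y, eps=1.0)
p_adv = sigmoid(np.dot(w, x_adv) + b)         # < 0.5: misclassified
```

Adversarial training of the kind the summary describes then amounts to generating such perturbed samples on the fly during training and minimizing the loss on them as well, so the network learns decision boundaries that are robust in a neighborhood of each training image.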

DOI: 10.3997/2214-4609.201901507
Published online: 2019-06-03
