
Summary

Interpretability and uncertainty quantification are crucial for building robust machine learning models. Quantifying uncertainty helps practitioners in at least two ways. First, it highlights where the machine learning model needs help, for example during the training process, so that prediction accuracy can be improved. Second, it allows practitioners to make informed decisions using the final predictions together with the accompanying uncertainty measures. In this paper, we present two geoscience applications, automatic fault prediction and 3D reservoir property prediction, in which deep learning has been deployed and the corresponding uncertainty has been captured.
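
As a sketch of how such uncertainty can be captured, the snippet below illustrates Monte Carlo dropout (Gal and Ghahramani, 2016): dropout is kept active at inference time, and the spread across repeated stochastic forward passes serves as a per-prediction uncertainty estimate. The network architecture, layer sizes, and number of forward passes here are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch of Monte Carlo dropout for predictive uncertainty
# (Gal and Ghahramani, 2016). Architecture and hyperparameters are
# illustrative assumptions, not the setup reported in this paper.
import torch
import torch.nn as nn

class DropoutRegressor(nn.Module):
    def __init__(self, n_in=10, n_hidden=64, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(n_hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=50):
    """Run n_samples stochastic forward passes with dropout active;
    return the predictive mean and standard deviation per input."""
    model.train()  # keep dropout layers active at inference time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

model = DropoutRegressor()
x = torch.randn(8, 10)                # dummy batch of 8 inputs
mean, std = mc_dropout_predict(model, x)
print(mean.shape, std.shape)          # each: torch.Size([8, 1])
```

The standard deviation across passes gives a simple, cheap uncertainty proxy; in practice it can be overlaid on fault-probability maps or reservoir-property volumes to flag low-confidence regions.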

