Summary

Carbon storage sites are often developed in saline aquifers with limited local well data, rather than in data-rich abandoned oil and gas fields. We therefore need to ensure that the reservoir characterization methods we use are time- and cost-efficient, as well as accurate, reliable, and robust in such low-data environments. To achieve this, incorporating approaches that quantify uncertainty is vital.

Two uncertainty quantification methods are particularly relevant in this context: Monte Carlo dropout (Approach 1) and implicit model ensemble averaging (Approach 2). Monte Carlo dropout acts as a regularization technique, randomly switching off neurons during both training and inference. This creates a dynamic architecture in which information is distributed more evenly across the network, and the variance of predictions across stochastic forward passes provides bounds on prediction uncertainty. Such a method enhances reliability in low-data scenarios, where overfitting is a concern. Implicit model ensemble averaging leverages ensemble stacking, in which multiple retrained models are combined, either by averaging (for continuous data) or by majority voting (for categorical data). This approach provides statistical measures of uncertainty in property predictions by capturing the variability across models, enabling robust decision-making in the face of limited data availability.
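As a minimal sketch of the two approaches described above (the network, its weights, and the ensemble votes are hypothetical illustrations, not the paper's actual models), Monte Carlo dropout keeps the dropout mask active at inference and summarizes the spread of repeated stochastic passes, while ensemble combination reduces to a mean or a majority vote:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-hidden-layer network; weights are illustrative only.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 1))

def forward_with_dropout(x, p=0.5):
    """One stochastic forward pass: each hidden unit is dropped with
    probability p, and survivors are rescaled by 1/(1-p) (inverted dropout)."""
    h = np.maximum(x @ W1, 0.0)        # ReLU hidden layer
    mask = rng.random(h.shape) > p     # keep each unit with probability 1-p
    h = h * mask / (1.0 - p)           # inverted-dropout rescaling
    return h @ W2

x = rng.normal(size=(1, 4))

# Approach 1: Monte Carlo dropout — dropout stays on at inference,
# and the spread over T passes bounds the prediction uncertainty.
preds = np.array([forward_with_dropout(x) for _ in range(200)]).ravel()
mean, std = preds.mean(), preds.std()
print(f"prediction {mean:.3f} +/- {std:.3f}")

# Approach 2: implicit model ensemble averaging — for categorical outputs,
# retrained models are combined by majority vote (hypothetical labels below);
# for continuous outputs, the per-model predictions would simply be averaged.
votes = np.array([0, 1, 1, 1, 0])      # class labels from five retrained models
majority = np.bincount(votes).argmax()
print(f"majority vote: {majority}")
```

In a real workflow each ensemble member would be retrained from a different initialization or data split; here the vote array merely stands in for those per-model outputs.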

/content/papers/10.3997/2214-4609.202510668
2025-06-02
2026-02-14
References

  1. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. and Salakhutdinov, R. (2014). "Dropout: A Simple Way to Prevent Neural Networks from Overfitting". https://doi.org/10.5555/2670313.2670314 (While this paper focuses on dropout as a regularization technique, the mechanism of randomly dropping units is the basis for Monte Carlo dropout.)
  2. Gal, Y. and Ghahramani, Z. (2016). "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning". https://doi.org/10.5555/3045390.3045502
  3. Hansen, L. K. and Salamon, P. (1990). "Neural Network Ensembles". https://doi.org/10.1109/72.80206