Accurately forecasting the fate of fluids (e.g., injected CO2) in geological formations requires time-consuming fine-scale (high-fidelity) multi-physics simulations. The computational burden of propagating uncertainty through high-fidelity simulations makes it imperative to explore methods that reduce the overall computational cost. This study addresses the challenge by introducing a super-resolution neural network that predicts fine-scale simulation results from coarse-scale (low-fidelity) outputs, compensating for the deviations caused by the information lost in upscaling.
We adapted the concept of super-resolution neural networks, originally used for image resolution enhancement, and developed a recurrent super-resolution convolutional neural network for downscaling. The network takes three inputs: (1) the state vector at the current timestep (or the initial condition) from the fine-scale simulation, (2) the upsampled state vector at the next timestep from the coarse-scale simulation, and (3) the uncertain input parameters of the fine-scale simulation (e.g., porosity and permeability fields). The output is the approximated state vector at the next timestep on the fine scale. The inputs and output are treated as multichannel images (3D tensors).
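As a concrete illustration, the following PyTorch sketch shows one plausible form of this single-step mapping. The layer widths, kernel sizes, and the residual formulation are assumptions made for illustration; the abstract specifies only the three inputs and the output, all treated as multichannel images (e.g., with physical variables and vertical layers stacked along the channel dimension).

```python
# A minimal, hypothetical sketch of the single-step super-resolution mapping.
# Architecture details (hidden width, kernel size, residual output) are
# assumptions, not taken from the paper.
import torch
import torch.nn as nn

class SuperResolutionStep(nn.Module):
    """Maps (fine state at t, upsampled coarse state at t+1, parameter fields)
    to an approximation of the fine state at t+1, all on the fine grid."""

    def __init__(self, state_channels: int, param_channels: int, hidden: int = 64):
        super().__init__()
        # The three inputs are concatenated along the channel dimension.
        in_channels = 2 * state_channels + param_channels
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, state_channels, kernel_size=3, padding=1),
        )

    def forward(self, fine_t, coarse_up_t1, params):
        # All inputs share the fine-grid resolution: (batch, channels, ny, nx).
        x = torch.cat([fine_t, coarse_up_t1, params], dim=1)
        # Predict a correction to the upsampled coarse state rather than the
        # state itself (a residual formulation; an assumption on our part).
        return coarse_up_t1 + self.net(x)
```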
In this design, recurrency is embedded, allowing each timestep of each realisation to be treated as a training sample. Once trained, the network can predict subsequent timesteps not only for the training realisations but also for unseen ones. The approach was tested on two cases: a synthetic two-phase, two-component flow with random Gaussian field realisations and 9×9×2 downscaling, and a more complex reactive transport case, adapted from the AquiferCO2Leak project, with 5×5×1 downscaling. In each example, the network was trained on only 80 realisations and then used to predict 20 unseen realisations as well as unseen timesteps. The fine-to-coarse simulation runtime ratio was around 100 in the first case and around 20 in the second.
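The embedded recurrency can be sketched as an autoregressive rollout: the network's own fine-scale prediction is fed back at each step, while the upsampled coarse-scale trajectory supplies the per-step control input. During training, the ground-truth fine state at each timestep would be used instead (a teacher-forcing reading consistent with treating every timestep of every realisation as an independent training sample). The function below is illustrative; `model` refers to the hypothetical single-step network sketched earlier.

```python
def rollout(model, fine_initial, coarse_upsampled_seq, params):
    """Autoregressive prediction over timesteps (illustrative sketch).

    fine_initial: (B, C, ny, nx) fine-scale initial condition.
    coarse_upsampled_seq: list of (B, C, ny, nx) upsampled coarse-scale
        states, one per timestep t = 1..T.
    params: (B, P, ny, nx) uncertain parameter fields, constant in time.
    Returns the predicted fine-scale states for t = 1..T.
    """
    predictions = []
    fine_t = fine_initial
    for coarse_up_t1 in coarse_upsampled_seq:
        # Feed the previous prediction back in; the coarse-scale state at the
        # next timestep acts as the control that limits error amplification.
        fine_t = model(fine_t, coarse_up_t1, params)
        predictions.append(fine_t)
    return predictions
```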
The results demonstrate the promise of the proposed architecture. In the first example, the mean squared error was reduced by nearly two orders of magnitude for both pressure and saturation. In the second example, the mean squared error for porosity variation, concentration, and pH was reduced by one order of magnitude. In the proposed design, the coarse scale serves as a control that prevents rapid error amplification. Despite error accumulation over timesteps, the mean squared error after mapping by the trained network remained smaller than that of the original coarse-scale results. Because the network relies on coarse-scale results, it can be trained with significantly fewer samples than full surrogates (which do not require coarse-scale simulations as input).