
Summary

This study explores the use of large language models (LLMs) in coding agents to let geoscientists converse with subsurface databases through natural-language queries. Using a ReAct (reasoning and acting) framework, the agents dynamically plan and execute SQL queries based on user input, adapting to errors and intermediate results. The study tests the agents' performance on a complex SQL database of Petrel data and metadata from over 1200 projects. Initially, the agents struggled to understand table relationships and to formulate queries correctly; these issues were mitigated by providing database descriptions and adding task-specific strategies to the ReAct loop. The agents then demonstrated improved accuracy, particularly on complex queries that require joining data from multiple tables, while also reducing response time and resource costs. The results indicate that users can interact effectively with their data without SQL expertise, highlighting the potential of coding agents to enable new subsurface workflows.
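The ReAct pattern described above alternates a reasoning step with an action (here, running SQL) whose observation is fed back into the next step. The paper's actual agent, prompts, and Petrel schema are not given, so the sketch below is purely illustrative: it stubs the LLM with a scripted policy and uses an in-memory SQLite table standing in for the project database.

```python
import sqlite3

def scripted_llm(history):
    """Stand-in for an LLM policy: returns (thought, action, argument).

    A real agent would prompt an LLM with the question, the database
    description, and the history of thoughts/observations so far.
    """
    if not any(step[1] == "sql" for step in history):
        return ("I need the project count per year.",
                "sql",
                "SELECT year, COUNT(*) FROM projects "
                "GROUP BY year ORDER BY year")
    # An observation is already available, so finish.
    observation = history[-1][2]
    return ("The query succeeded; report the result.", "final", observation)

def react_sql_agent(question, conn, llm, max_steps=5):
    """Alternate reasoning (thought) and acting (SQL) until 'final'."""
    history = []
    for _ in range(max_steps):
        thought, action, arg = llm(history)
        if action == "final":
            return arg
        try:
            observation = conn.execute(arg).fetchall()
        except sqlite3.Error as exc:
            # Errors are fed back as observations so the agent can retry.
            observation = f"SQL error: {exc}"
        history.append((thought, action, observation))
    return None  # step budget exhausted

# Toy stand-in for the Petrel project database (names are hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE projects (name TEXT, year INTEGER)")
conn.executemany("INSERT INTO projects VALUES (?, ?)",
                 [("A", 2023), ("B", 2023), ("C", 2024)])

answer = react_sql_agent("How many projects per year?", conn, scripted_llm)
print(answer)  # [(2023, 2), (2024, 1)]
```

Feeding SQL errors back as observations, rather than aborting, is what lets a ReAct loop recover from a malformed query by reformulating it on the next step.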

DOI: 10.3997/2214-4609.202539014
2025-03-24
2026-02-11
