Abstract

We introduce Wiisu, a Large Language Model (LLM)-based assistant designed for oil and gas exploration. Wiisu provides an interface through which users can ask open-ended natural-language questions about an oil and gas company's internal databases, helping managers and stakeholders obtain timely, well-founded, and cost-effective answers to their data inquiries. Rather than relying solely on extensive model training, Wiisu uses prompt engineering and database adaptation to improve its relevance and accuracy. Tasks are carried out by an agentic architecture: a disambiguation agent detects and resolves ambiguous questions, and a Structured Query Language (SQL) agent translates the clarified questions into SQL queries that retrieve data from the relational databases. An evaluation with 19 stakeholders confirmed Wiisu's usefulness, accuracy, and effectiveness.

DOI: 10.3997/2214-4609.202539076 | 2025-03-24 | 2026-02-10

