Abstract

This study explores the use of Large Language Models (LLMs) to address communication challenges in oil and gas operations, particularly in the drilling domain. By pairing LLMs with specialized prompts, the approach enables rapid analysis and prototyping across diverse textual data sources, outperforming traditional Natural Language Processing (NLP) methods. The benefits include automated identification of rig activities, correct assignment of activity codes, categorization of non-productive time (NPT), detection of invisible lost time (ILT), and identification of HSE issues and personnel/equipment events.
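
To make the prompt-driven approach concrete, the following is a minimal sketch of a prompt-based rig-activity classifier, assuming the OpenAI chat completions API. The activity-code list and the prompt wording are illustrative placeholders, not the authors' actual taxonomy or prompts.

# Minimal sketch of prompt-based activity coding (assumed OpenAI chat API;
# codes and prompt text are illustrative, not taken from the paper).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ACTIVITY_CODES = ["DRL", "TRP", "CIR", "RIG", "NPT"]  # hypothetical code list

PROMPT = (
    "You are a drilling operations analyst. Assign exactly one of these "
    "activity codes to the rig activity described below: "
    + ", ".join(ACTIVITY_CODES)
    + ". Reply with the code only.\n\nActivity: {activity}"
)

def classify_activity(activity: str) -> str:
    """Assign an activity code to one free-text DDR activity line."""
    response = client.chat.completions.create(
        model="gpt-4",   # the study reports GPT-3.5/4 worked best
        temperature=0,   # deterministic output for labelling
        messages=[{"role": "user", "content": PROMPT.format(activity=activity)}],
    )
    return response.choices[0].message.content.strip()

print(classify_activity("Waiting on cement to set, operations suspended"))  # e.g. "NPT"

Because the task is expressed entirely in the prompt, swapping in a new activity taxonomy requires editing one string rather than retraining a model, which is what allows the rapid prototyping described above.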

The study demonstrates the efficient use of LLMs to analyze Daily Drilling Reports (DDRs) and generate concise downhole summaries, facilitating faster decision-making and improved drilling performance. The approach reduces analysis time from days or months to hours and bypasses the need for extensive model training or retraining. In the case study, an LLM-based pipeline effectively identified downhole issues such as tight pulls, pressure shoot-ups, and stuck-tool events, with commercial models such as GPT-3.5/4 proving more accurate and cost-effective than open-source alternatives. By streamlining text data analysis, this method enhances efficiency in oil and gas operations.
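
As an illustration of such a pipeline stage, the sketch below extracts downhole events from a single DDR entry and produces a one-sentence summary, again assuming the OpenAI chat completions API. The prompt text and the sample report line are invented for illustration and do not reproduce the paper's prompts or data.

# Minimal sketch of downhole-event extraction from a DDR entry
# (assumed OpenAI chat API; prompt and sample report are illustrative).
from openai import OpenAI

client = OpenAI()

EVENT_PROMPT = (
    "From the daily drilling report below, list every downhole issue "
    "(e.g. tight pull, pressure shoot-up, stuck tool) with depth and time "
    "where stated, then give a one-sentence downhole summary. If none are "
    "reported, reply 'No downhole issues.'\n\nReport:\n{report}"
)

def summarize_ddr(report: str) -> str:
    """Return extracted downhole events plus a concise summary for one DDR."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # keep extraction output stable across runs
        messages=[{"role": "user", "content": EVENT_PROMPT.format(report=report)}],
    )
    return response.choices[0].message.content

ddr = (
    "06:00-08:00 POOH from 3,450 m. Observed tight pull at 3,120 m; "
    "worked string free. 08:00-10:00 Circulated bottoms up; SPP shot up to 2,900 psi."
)
print(summarize_ddr(ddr))

Running one such call per daily report and concatenating the outputs yields the kind of concise downhole summary described in the study, with no model fine-tuning step anywhere in the loop.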
