
Summary

Acquisition of rock core samples during the well drilling process is a crucial step that yields substantial subsurface information, thereby enabling comprehensive characterization of geological formations and identification of potential petroleum system elements. Core samples are cylindrical sections of rock extracted from subsurface formations during drilling operations. Following the cutting, retrieval, measurement, and cleaning processes, core samples are typically marked to document their orientation and the depth interval from which they were extracted. Photographs of these core samples are taken at the core lab to maintain a digital record. An innovative workflow is proposed to address the challenge of extracting core data (core depth, well and plug number) from these images, thereby improving both the efficiency and the accuracy of the process. This paper presents a workflow for depth attribution and standardization of core sample images using segmentation and Large Vision Models (LVMs). It provides an effective way of labelling the images with the correct depth and of standardizing them into a consistent template.
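The last stage of such a workflow — turning recognized label text into structured core metadata — can be sketched as below. This is a minimal illustration only, assuming the segmentation and OCR stages (e.g. SAM plus PP-OCR or TrOCR, as referenced by the authors) have already returned the label text; the label format shown (well name, plug number, depth interval in metres) is a hypothetical example, not the authors' actual template.

```python
import re
from dataclasses import dataclass
from typing import Optional


@dataclass
class CoreLabel:
    """Structured metadata recovered from a core sample label."""
    well: Optional[str]
    plug: Optional[int]
    top_depth: Optional[float]   # metres
    base_depth: Optional[float]  # metres


def parse_core_label(ocr_text: str) -> CoreLabel:
    """Parse OCR output of a (hypothetical) core label such as
    'WELL-12  PLUG 34  2450.50-2451.00 m' into structured fields."""
    well = re.search(r"\b(WELL[-\s]?\w+)", ocr_text, re.IGNORECASE)
    plug = re.search(r"\bPLUG\s*(\d+)", ocr_text, re.IGNORECASE)
    # Depth interval: two decimal numbers separated by a hyphen or en dash.
    depth = re.search(
        r"(\d+(?:\.\d+)?)\s*[-\u2013]\s*(\d+(?:\.\d+)?)\s*m\b",
        ocr_text,
        re.IGNORECASE,
    )
    return CoreLabel(
        well=well.group(1) if well else None,
        plug=int(plug.group(1)) if plug else None,
        top_depth=float(depth.group(1)) if depth else None,
        base_depth=float(depth.group(2)) if depth else None,
    )
```

Fields that the OCR stage fails to recover are left as `None`, so downstream depth-attribution code can flag incomplete labels for manual review rather than mislabelling the image.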

DOI: 10.3997/2214-4609.202639001
2026-03-09
2026-02-19