Extracting and Classifying Graphic Information from Geoscience Unstructured Documents Using Deep Learning Based Computer Vision Approaches
H. Blondelle, J. Micaelli and P. Kaur
Event name: 81st EAGE Conference and Exhibition 2019 Workshop Programme
Session: WS10 Machine Learning: Opportunities and Challenges
Publication date: 03 June 2019
Info: Extended abstract
Formation Evaluation Logs (FELs) and composite logs integrate much of the information gathered by the well-site geologist while drilling and logging. However, because they are frequently published as unstructured documents, they are difficult to use as a source of information in digital business processes. We had the opportunity to support our customer Equinor in “reading” lithological columns, O&G show symbols, and geological descriptions from FELs and composites, using a state-of-the-art computer vision approach called YOLO and our indexing solution, iQC. A process based on YOLO and iQC transforms this graphical information into usable numeric and text values that can be consumed by business databases. The computer vision and semantic analysis models were trained on composite logs tagged with the expected labels by subject-matter experts. The trained models automatically detect and draw bounding boxes around target objects in test documents. This paper details the experiment and the lessons learnt, and offers some perspectives for improving the accuracy of the first results obtained.
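The abstract does not detail the detection pipeline, but YOLO-style detectors typically emit many overlapping candidate boxes per object, which are reduced to final detections by non-maximum suppression. The sketch below illustrates that generic post-processing step only; the coordinates, scores, and class names (e.g. a lithology column or show symbol on a composite log) are purely illustrative assumptions, not values from the paper.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0


def non_max_suppression(detections, iou_threshold=0.5):
    """Keep only the highest-scoring box among heavily overlapping candidates.

    detections: list of (box, score, label) tuples, box = (x1, y1, x2, y2).
    """
    kept = []
    # Visit candidates in descending score order; drop any box that
    # overlaps an already-kept box beyond the IoU threshold.
    for det in sorted(detections, key=lambda d: d[1], reverse=True):
        if all(iou(det[0], k[0]) <= iou_threshold for k in kept):
            kept.append(det)
    return kept


# Hypothetical candidates: two overlapping boxes for one lithology
# column, plus one distinct box for an oil/gas show symbol.
candidates = [
    ((0, 0, 100, 400), 0.9, "lithology_column"),
    ((5, 0, 105, 400), 0.6, "lithology_column"),
    ((200, 50, 240, 90), 0.8, "show_symbol"),
]
final = non_max_suppression(candidates)  # the 0.6 duplicate is suppressed
```

After this step, each surviving box can be cropped from the page image and passed on for classification or, in the case of text regions, semantic analysis.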