Using Big Loop and ensemble-based methods for more reliable reservoir predictions: applications on fractured reservoirs

Authors: Jarlie Frette and Julie Vonnet
Journal name: First Break
Issue: Vol 35, No 11, November 2017 pp. 65 - 68
Language: English

Predicting future oil and gas field behaviour is key to present and future reservoir management decision-making. The traditional approach to predicting field behaviour and updating reservoir models relies on a small number of scenarios (base, high, and low cases) and introduces deterministic steps that do not always fit modern reservoir management guidelines. It also lacks an automated workflow, making model updates time-consuming. It is with these critical aspects in mind that Emerson has developed the Big Loop workflow.

A cornerstone of Emerson's reservoir characterization and modelling workflow, Big Loop tightly integrates the static and dynamic domains and propagates uncertainties from seismic characterization through geological modelling to simulation. It also includes modern, ensemble-based approaches, using results from a large set of models to analyse the uncertainty in the predicted values and enabling workflow automation for easier model updates.

Effective management of oil- and gas-related risks requires uncertainty information: evaluating risk means having a set of possibilities, each with quantified probabilities and quantified losses or gains. The oil and gas industry has also now reached a stage where a large fraction of producing fields can provide several years of operational and production history. This includes not only production data (well data) but also seismic data, which should be shared through the same models. Models that capture the reservoir uncertainties and integrate the acquired data will yield more accurate predictions of the future. Through an ensemble-based history-matching workflow, uncertain inputs are identified and characterized in order to parameterize the reservoir model, giving reservoir engineers the chance to constrain those parameters with the observed 'real' measurements.
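The ensemble-based updating idea can be sketched with a toy example. The forward model, parameter values, and observation below are illustrative assumptions, not taken from the article; the update step is a minimal Kalman-type ensemble smoother in which a prior ensemble of an uncertain parameter (here, a scalar "aquifer strength") is constrained by one observed production value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy forward model: cumulative production after ten years
# as a function of one uncertain parameter (aquifer strength).
# Purely illustrative, not the article's reservoir simulator.
def forward_model(aquifer_strength):
    return 50.0 + 30.0 * aquifer_strength  # in some volume unit

n_ens = 200
# Prior ensemble of the uncertain parameter
prior = rng.normal(loc=1.0, scale=0.3, size=n_ens)

# Predicted data from each ensemble member
predicted = forward_model(prior)

# 'Observed' cumulative production and its measurement error
d_obs = 85.0
obs_std = 2.0

# Kalman-type gain: cov(parameter, data) / (var(data) + obs error var)
cov_md = np.cov(prior, predicted)[0, 1]
var_d = np.var(predicted, ddof=1)
gain = cov_md / (var_d + obs_std**2)

# Perturb the observation per member and update the ensemble
perturbed_obs = d_obs + rng.normal(0.0, obs_std, size=n_ens)
posterior = prior + gain * (perturbed_obs - predicted)
```

After the update, the posterior ensemble has a reduced spread and a mean shifted toward parameter values consistent with the observation; the spread of `posterior` is the quantified uncertainty left after conditioning on the data.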
How do you estimate the probability of a hypothesis? That is the challenge reservoir engineers face in their daily work. For example, given the cumulative produced volumes of a field after ten years of production, what is the likelihood that the production was driven by, say, the aquifer's strength?
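That question is a Bayesian one, and a minimal sketch makes it concrete. The two hypotheses, the predicted production distributions, and the observed value below are all illustrative assumptions, not figures from the article:

```python
from math import exp, pi, sqrt

def gaussian_pdf(x, mean, std):
    """Probability density of a normal distribution at x."""
    return exp(-0.5 * ((x - mean) / std) ** 2) / (std * sqrt(2.0 * pi))

# Observed cumulative production after ten years (illustrative value)
d_obs = 85.0

# Two competing hypotheses with equal prior probability
prior = {"strong_aquifer": 0.5, "weak_aquifer": 0.5}

# Likelihood of the observation under each hypothesis, assuming each
# predicts a Gaussian production distribution (assumed means/stds)
likelihood = {
    "strong_aquifer": gaussian_pdf(d_obs, mean=90.0, std=10.0),
    "weak_aquifer": gaussian_pdf(d_obs, mean=60.0, std=10.0),
}

# Bayes' rule: posterior ∝ prior × likelihood, normalized by the evidence
evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
```

With these numbers the observation lies much closer to the strong-aquifer prediction, so its posterior probability dominates; the same arithmetic, applied member by member, is what an ensemble-based workflow automates at field scale.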
