By Emmanuel Udeh, Subsurface Data Analyst at Shell

Integrated reservoir models are central to the E&P business because they help oil and gas companies better understand, evaluate, develop and predict the behavior of their subsurface assets. Advances in reservoir modeling software make it possible to rapidly collect and analyze vast amounts of diverse data and to generate models that accurately represent reservoir properties, supporting history matching and the prediction of reservoir behavior at any stage of field development and management.

At the same time, however, computer-based modeling, no matter how sophisticated, cannot reproduce the subsurface exactly, and advances in computer technology cannot eliminate the central role of the engineer in model interpretation. Uncertainties exist at every stage of reservoir modelling, so the data used to address them must meet a sufficient standard of quality if a useful earth model is to be built from the data available.

Data sources

Shell’s former Head of Exploration, Roel Murris, once remarked, “We probably have more oil in our archives than we’ve ever actually found!” The statement emphasizes the value of the information sitting in our data stores. Relevant data for an integrated reservoir characterization study will normally be available from several sources: in-house archives, national data repositories, the various information service companies that offer data services, and proprietary reports on specific fields or reservoirs, which most often contain detailed geologic and reservoir parameters. Published articles in journals and other literature may also provide helpful information. It is up to the subsurface team to approach this effort in an integrated manner in order to identify the data that is both available and useful.

Assemble and review

The first step in building any integrated reservoir model is to gather and evaluate all available data on both the field and the target reservoir interval. This commonly includes well logs, fluid (PVT) data, pressure data, core data, seismic data (2-D or 3-D), production data (for developed fields, or from well tests) and any available reports on the field. Where very little information is available, analogues (data from reservoirs with similar properties and characteristics) become very important in generating the closest and most useful representation of the subsurface.

After all relevant data has been assembled, these datasets must be evaluated in order to determine:

  • Basic data availability
  • Data quality
  • The impact of data gaps on deliverables, so that data acquisition recommendations can be made to mitigate subsurface uncertainties (a simple scripted sketch of such a gap check follows Fig 1)

Fig 1: Sample data availability table
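
To make this concrete, an availability summary like the one in Fig 1, together with the gap analysis described above, can be scripted in a few lines. The following is a minimal sketch in Python; the well names, data types and availability records are hypothetical placeholders, not data from any actual field.

```python
# Minimal sketch of a data availability table and gap check.
# Well names, data types and availability records are hypothetical.

REQUIRED_DATA_TYPES = ["well logs", "core", "PVT", "pressure", "production"]

# Availability recorded per well during the assemble-and-review step.
availability = {
    "WELL-01": {"well logs", "core", "PVT", "pressure"},
    "WELL-02": {"well logs", "pressure", "production"},
    "WELL-03": {"well logs"},
}

def availability_table(availability, required):
    """Print a YES/no matrix of data availability per well."""
    print(f"{'Well':<10}" + "".join(f"{dt:<12}" for dt in required))
    for well, have in sorted(availability.items()):
        print(f"{well:<10}" + "".join(
            f"{'YES' if dt in have else 'no':<12}" for dt in required
        ))

def data_gaps(availability, required):
    """Return missing data types per well, to support acquisition recommendations."""
    return {
        well: sorted(set(required) - have)
        for well, have in availability.items()
        if set(required) - have
    }

availability_table(availability, REQUIRED_DATA_TYPES)
print("Gaps:", data_gaps(availability, REQUIRED_DATA_TYPES))
```

Even a trivial script like this makes the data gap analysis repeatable and auditable as new data arrives during the study.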

The efficiency of any integrated reservoir characterization study depends on a clear and precise understanding of what the resulting reservoir model will be used for, whether a full field development plan or reserves booking. Establishing a precise purpose up front determines both the complexity of the model and the data types required, which in turn makes data gathering and evaluation far more efficient.

Data review and integration

Once the available data has been assembled and organized, it should be evaluated for relevance, quality, consistency and completeness. A typical integrated reservoir characterization study can nevertheless be a very complex task, because it requires combining several disciplines, each with a different perspective and often a different approach to solving the same problem.

In most cases, it is necessary to prepare a detailed report once the data gathering process has been completed. The report should summarize the available data and interpretations, comment on the limitations of these datasets and give specific recommendations for further data acquisition. This provides a clear and structured approach to the appraisal of existing data, and serves as an excellent reference for management reviews and for decisions about whether further data acquisition is needed.

Fig 2: Basic workflow for building an integrated reservoir model

Each source of data carries different information, which means that available data must be validated and interpreted to remove or reduce subsurface uncertainties, while at the same time being calibrated to reservoir structure and properties. This makes it critical that all members of the team share a common starting point: all measured data is the same, regardless of the application or database being used. While each domain of information is essential, each also has its limitations. Logs and core analysis provide high vertical resolution of the subsurface but are limited by discontinuous sampling; 3-D seismic surveys provide continuous coverage, but at much lower vertical resolution. These limitations underline the importance of an integrated approach across disciplines in any reservoir characterization study.
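
The "common starting point" idea can itself be checked automatically: before interpretation begins, measured values exported from different applications or databases can be cross-compared and discrepancies flagged. The sketch below is a minimal illustration in Python; the two data sources, the well names and the porosity values are hypothetical, and a real check would run against the project's actual databases.

```python
# Minimal sketch of a cross-source consistency check.
# Sources, wells and porosity values are hypothetical examples.

# The same measured average porosities exported from two different applications.
petrophysics_db = {"WELL-01": 0.21, "WELL-02": 0.18, "WELL-03": 0.24}
modelling_db    = {"WELL-01": 0.21, "WELL-02": 0.15, "WELL-03": 0.24}

TOLERANCE = 1e-3  # measured data should be identical, not merely close

def consistency_report(source_a, source_b, tol=TOLERANCE):
    """Flag wells whose measured values differ between two data stores."""
    issues = []
    for well in sorted(set(source_a) | set(source_b)):
        a, b = source_a.get(well), source_b.get(well)
        if a is None or b is None:
            issues.append((well, "missing in one source"))
        elif abs(a - b) > tol:
            issues.append((well, f"mismatch: {a} vs {b}"))
    return issues

for well, problem in consistency_report(petrophysics_db, modelling_db):
    print(f"{well}: {problem}")
# -> WELL-02: mismatch: 0.18 vs 0.15
```

Catching a mismatch like this before modelling starts is far cheaper than discovering, late in the study, that two disciplines built on different numbers.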

It is important to note that no integrated reservoir characterization study is ever final or absolute in its ability to describe the subsurface. The interaction between the various disciplines involved in building a reservoir model can produce very significant changes to the final model, depending on the uncertainty being addressed. That is why quality input data, properly validated and assured, is so important to building the most useful and accurate reservoir model. For example, a new or incorrect interpretation of the geological structure can affect the whole reservoir model, while a wrong evaluation of the porosity and permeability of a single facies would influence the fluid volumes or their distribution in the model.

Overall, data quality is key to the success of any project. However good the application and the assembled data, key data deficiencies will leave the modelling results with little validity. It is therefore critical that data is gathered and shared effectively throughout the project’s duration, ensuring that every member of the team works from the most up-to-date version of the interpreted data rather than from differing versions. To this end, it is strongly advised that every team appoint a focal point responsible for the effective sharing of data. While not an exciting task, this is often the most critical activity in ensuring the smooth execution and, ultimately, the success of any project.
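
One lightweight way a data focal point might enforce a single up-to-date version is to publish a manifest of file hashes with each release of the interpreted datasets, so team members can check their local copies against it. The sketch below uses Python's standard hashlib; the file names and the manifest digests are hypothetical placeholders, and this is only one possible convention for shared file-based data.

```python
# Minimal sketch of a stale-copy check for shared interpreted data.
# File names and manifest digests below are hypothetical placeholders.
import hashlib
from pathlib import Path

def sha256_of(path):
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Published by the data focal point alongside each data release.
release_manifest = {
    "tops_interpreted.csv": "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    "porosity_model.csv":   "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def stale_files(manifest, data_dir="."):
    """List files whose local copy is missing or differs from the release."""
    stale = []
    for name, expected in manifest.items():
        local = Path(data_dir) / name
        if not local.exists() or sha256_of(local) != expected:
            stale.append(name)
    return stale

print("Out-of-date local copies:", stale_files(release_manifest))
```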

References and further reading

Christoforos Benetatos and Dario Viberti (2010), Fully Integrated Hydrocarbon Reservoir Studies: Myth or Reality?

Integrated Reservoir Characterization Course, IPIMs.com

Read more blog posts by Emmanuel Udeh on E&P data quality and Big Data in the E&P organization.
