Open Access. Published by De Gruyter, December 30, 2016 (CC BY-NC-ND 3.0 license)

Uncertainty assessment based on scenarios derived from static connectivity metrics

  • Noémi Jakab
From the journal Open Geosciences

Abstract

The approach presented in this paper characterises the uncertainty related to the outputs of a sequential Gaussian simulation. The input data set was a random subset of a complete CT slice. To outline possible, characteristically different groups of realisations within the outputs, we performed a distance-based classification of the realisations based on their derived connectivity features. Global metrics of connectivity, also called geo-body or geo-object connectivity, are derivative properties related to the overall structure of the simulated field. Based on these attributes, stochastic images that show the same characteristics from a statistical point of view become distinguishable. The scenarios generated this way are able to bridge the gap in information content between the individual stochastic images and the entirety of the pooled realisations. Scenarios are also capable of highlighting the groups of most probable outcomes among the realisations while screening the effect of the ergodic fluctuations of the individual stochastic images. They yield a more realistic representation of the smaller-scale heterogeneities than the individual stochastic images. In this sense, our approach is able to resolve the question of how many realisations to choose for the assessment of uncertainty. In addition, it eliminates subjectivity and supports reproducible decision-making when the task is to select stochastic images for dynamic simulation.

1 Introduction

Stochastic spatial simulations are widely used to generate multiple, equally probable realisations of a spatial process and to assess the related uncertainties. Uncertainty is the result of the imperfection of our knowledge when trying to characterise any spatial phenomenon. Therefore, it is an inherent feature of geological models. It essentially stems from the fact that it is impossible to characterise the true distribution of a studied property between data locations. Thus, uncertainty is not an attribute of the studied process itself, and unlike error, it cannot be measured in an absolute way. It is rather an attribute of the interpretation of the available empirical data - it is a model-dependent property.

When assessing uncertainty, the first important decision is the scale of the assessments. Models of local uncertainty are specific to a single location; spatial or regional uncertainty modelling requires uncertainty assessment of attribute values at several locations taken together [1]. The notion of spatial uncertainty can be elaborated further by using the spatial distribution of the modelled attribute as an input for flow simulators. This results in the propagation of the input uncertainties, through the transfer functions, to uncertain response values, allowing the assessment of response uncertainty [2].

The importance of analysing, visualising and communicating the uncertainties is unquestionable. However, there is no generally accepted unified approach for this task. Traditional measures of local uncertainty use the equally probable realisations to describe the probability distribution at the grid nodes. These include the conditional variance, probability intervals, and the statistical entropy.

The conditional variance reflects the uncertainty through the node-by-node averaging of the realisations, producing a so-called E-type estimate and the corresponding variance [3]. The probability interval is also related to the expected value of the local distribution. It represents, in Z-values (original data units), the width of the interval that contains the mean value with a defined probability. Statistical entropy or information entropy was defined by Shannon [4], and in the past decades it has been frequently used in geostatistics [5, 6]. The calculation of the entropy requires the discretisation of the probability distribution into bins, from which the entropy is obtained as the negative sum of the products of the probabilities and their logarithms. The listed measures of uncertainty all characterise the probability distribution at the grid nodes in a general sense. However, they do not describe the simultaneous relationship between the probability distributions at multiple grid nodes taken together.
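The three measures above can be computed directly from the stack of realisations. The following minimal R sketch, written in the paper's R environment, does this node by node; it assumes the realisations are stored in a matrix named sims with one row per grid node and one column per realisation, and the function and argument names are illustrative rather than taken from the paper.

```r
# Minimal sketch: traditional local uncertainty measures at each grid node.
# 'sims' is assumed to be a numeric matrix (grid nodes x realisations);
# all names here are illustrative.
local_uncertainty <- function(sims, n_bins = 20) {
  etype     <- rowMeans(sims)                      # E-type estimate
  cond_var  <- apply(sims, 1, var)                 # conditional variance
  p90_width <- apply(sims, 1, function(z)          # width of the 90% probability interval
    diff(quantile(z, c(0.05, 0.95))))
  entropy   <- apply(sims, 1, function(z) {        # statistical (Shannon) entropy
    p <- tabulate(cut(z, breaks = n_bins, labels = FALSE), nbins = n_bins)
    p <- p / sum(p)
    p <- p[p > 0]
    -sum(p * log(p))                               # negative sum of p * log(p)
  })
  data.frame(etype, cond_var, p90_width, entropy)
}
```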

As it is frequently mentioned in the literature [7], another seemingly convenient way to compare the uncertainties of simulation outputs would be to evaluate every single realisation with a flow simulator. However, due to the large computation time of the post-processing, this is not possible in practice. A traditional approach to bypass this difficulty is to rank the realisations based on a static measure, such as the original oil in place, and to select realisations that represent the P10, P50 and P90 quantiles of the flow response. Some attempts have aimed at identifying these quantiles from subsets of representative realisations instead of using every single output of the simulation. An example of this in petroleum applications is described in a paper by Scheidt and Caers [7], where the realisations were assigned to subsets based on the dissimilarity distances calculated between them. The work of Armstrong et al. [8], based on stochastic optimisation, also presented an approach to scenario reduction tailored to mining applications.

The aim of this paper is to characterise uncertainty by outlining subsets within the space of output realisations. However, the objective was not to highlight unique realisations but to regard the subsets as groups that represent characteristically different scenarios within the space of the simulation outputs. The intention was to regard these scenarios as pillars within the range of the pooled outputs. These pillars would represent the most probable groups, or recurring spatial patterns, formed by the individual outcomes that can be anticipated from the realisations. To perform the subsetting, instead of the traditional ranking measures, a different method was applied. This approach utilises static connectivity measures as the basis for differentiation between the stochastic images. The use of such metrics is not unprecedented in the relevant literature; Allard [9], for example, compared the truncated Gaussian and Boolean models based on these metrics.

2 Input data

The input data, a slice of a CT image, shows a core-size sedimentary structure. While the image itself is of little importance, such CT images can be viewed as representations of larger-scale processes due to the fractal nature of the environment. Therefore, conclusions drawn from any study based on them can hold true regardless of the scale.

The image and the corresponding dataset (Fig. 1) consist of 16000 measured Hounsfield Unit (HU) values arranged on a regular 125 × 128 grid. From the complete dataset, we retained a random set of 100 data locations and the corresponding values as hard conditioning data (Fig. 1). When randomly selecting the 100 locations, we checked the agreement between the sample and the exhaustive distributions with a two-sample Kolmogorov-Smirnov test, which yielded a p-value of 0.6353 (Fig. 2). This meant that the equality of the two distributions could not be rejected at a significance level of 0.05.
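A minimal R sketch of this sub-sampling and checking step is shown below. The data frame hu and its column hu_value are illustrative names, and the seed is arbitrary; the p-value of 0.6353 refers to the paper's particular draw, so a different draw will give a different value.

```r
# Minimal sketch: draw 100 random conditioning locations and compare the
# sample distribution with the exhaustive one. 'hu' is assumed to be a
# data.frame with columns x, y, hu_value (illustrative names).
set.seed(42)                              # arbitrary seed for reproducibility
idx        <- sample(nrow(hu), 100)       # 100 random data locations
sample_set <- hu[idx, ]

# Two-sample Kolmogorov-Smirnov test of sample vs. exhaustive distribution;
# ks.test() warns about ties in the discrete HU values, but the p-value is
# still indicative.
ks.test(sample_set$hu_value, hu$hu_value)
```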

Figure 1: The exhaustive (left) and the sample data set (right).

Figure 2: Comparison of the exhaustive and sample distributions.

3 The Applied Methods

3.1 Variogram modelling

After transforming the data to a standard normal (Gaussian) distribution, the first step in the workflow (Fig. 3) was the modelling of the variogram. The aim of this step is to analyse and characterise the spatial continuity as well as to provide the required input values to the simulation system.

Figure 3: The applied workflow.

First, we calculated the experimental semivariogram as half the average of the squared differences between data values separated by a vector h [10]:

\hat{\gamma}(\mathbf{h}) = \frac{1}{2N(\mathbf{h})} \sum_{i=1}^{N(\mathbf{h})} \left[ z(\mathbf{u}_i) - z(\mathbf{u}_i + \mathbf{h}) \right]^2 \qquad (1)

where N(h) is the number of data pairs pooled for a given lag distance h. We used the obtained variogram to fit a permissible model of spatial continuity [3], from which the range, the sill and the anisotropy parameters have been fed into the simulation system.
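A sketch of this step with the gstat package [16] could look as follows. The SpatialPointsDataFrame sample_sp with a normal-score column ns, the directional tolerance and the initial parameter guesses are all assumptions made for this example, not values reported by the paper.

```r
# Minimal sketch: experimental variograms and model fitting with gstat.
library(sp)
library(gstat)

v_omni <- variogram(ns ~ 1, sample_sp)                      # omnidirectional
v_dir  <- variogram(ns ~ 1, sample_sp,
                    alpha = c(45, 135), tol.hor = 22.5)     # 45 and 135 degree directions

# Exponential model; note that gstat's 'range' is the parameter a in
# 1 - exp(-h/a), so the practical range 48.47 reported in Section 4
# corresponds to a = 48.47 / 3. Anisotropy is given as (direction, ratio).
vm <- fit.variogram(v_omni,
                    vgm(psill = 1, model = "Exp", range = 48.47 / 3,
                        anis = c(45, 0.55)))
plot(v_dir, vm)
```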

3.2 Sequential Gaussian Simulation

As the next step, we performed a sequential Gaussian simulation using the variogram model and the normal score transform of the sample data set. This type of simulation relies on the Gaussian model, which requires a parametric distribution. This implicitly assumes that the spatial variability of the studied attribute values can be fully characterised by a single covariance function. The steps of the sequential Gaussian simulation followed Deutsch [10], Isaaks [11], Goovaerts [12], and Gómez-Hernández & Srivastava [13]:

  1. Define a random path through the grid nodes

  2. Construct the conditional distribution of the random variable at the first grid node given the (n) conditioning data, then draw a random value from it

  3. Add this new value to the conditioning data set

  4. Move to the next grid node, construct the conditional distribution of the random variable given the (n + 1) conditioning data, then draw a random value from it

  5. Repeat until all grid nodes are simulated

The sequential simulation produces conditional realisations: the sample data are honoured at their locations, which is ensured by a conditional distribution at those locations characterised by a mean equal to the sample value and zero variance. In addition, to ensure reproduction of the semivariogram model, each conditional cumulative distribution function is conditioned not only on the original n data but also on all values simulated at previously visited locations.

The simulation was performed on five different grid resolutions (0.5 × 0.5, 1 × 1, 1.5 × 1.5, 2 × 2 and 2.5 × 2.5), each with 200 realisations. The use of several resolutions enabled us to study the effect of the changing number of grid nodes on the simulation outputs. At first glance it would seem that the higher the resolution, the smaller the scale of the heterogeneities that are revealed. However, the fact that some of these heterogeneities may be artificial, an effect known as ergodic fluctuation, has to be accounted for, following the paper by Goovaerts [14]. This originates from the fact that every time a value is simulated at a new node, previously simulated values within the range of correlation are used to construct the local conditional probability distribution [11, 13]. Thus, at a fixed range of correlation, a higher grid resolution means a smaller ratio of original conditioning data to simulated conditioning data. This can result in a higher chance to locally deviate from the input statistics. By using several different grid resolutions, the changing magnitude of these local deviations and their effect on the uncertainty assessment could be examined.

The optimal number of realisations is a recurring question in the field [14, 15]. On the one hand, practicality dictates using the lowest number possible. On the other hand, choosing too few realisations would prevent drawing reliable conclusions from the results. There are two reasons for this. The first is that producing a small number of outputs reveals only a very coarse outline of the local probability distributions at the grid nodes. The second is that when examining only a few realisations, the effect of the ergodic fluctuations may overpower the overall picture. Using 200 realisations enabled us to check how the reliability of the results changes when doubling the number of stochastic images from, say, 25 to 50, then to 100 and 200.

We implemented the modelling and simulation workflow (Fig. 3) in the open source R statistical software environment, using the gstat package [16], which is fundamentally based on the GSLIB framework [3]. We also used the geoR package [17] in the analysis of the variograms. We utilised the ggplot2 package [18] to generate the images.
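For the simulation step itself, a hedged sketch with gstat could look like the block below; the grid object sim_grid, the neighbourhood size nmax and the other names are assumptions for illustration, not the paper's exact settings.

```r
# Minimal sketch: conditional sequential Gaussian simulation with gstat.
# 'sample_sp' holds the normal scores (column 'ns'), 'vm' is the fitted
# variogram model and 'sim_grid' is a SpatialPixels grid at the chosen
# resolution; all names and the nmax value are illustrative.
sgs <- krige(ns ~ 1, sample_sp, newdata = sim_grid, model = vm,
             nsim = 200,   # 200 equally probable realisations
             nmax = 20)    # local search neighbourhood; with nsim > 0 gstat
                           # simulates the nodes sequentially

# The simulated normal scores are then back-transformed to Hounsfield Units,
# e.g. via an inverse normal-score table built from the sample distribution.
```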

3.3 Connectivity metrics

After the back-transform of the simulated normal score values, the next step in the workflow was to find attributes that allow discrimination between the realisations. To differentiate between the individual realisations, different attributes had to be considered than the ones included in the simulation inputs. This can be explained by the fact that the simulation algorithm itself is constructed to produce outputs that replicate the spatial continuity and the probability distribution of the conditioning data [3]. Traditional methods used to describe the spatial continuity of a field are based on two-point statistics, such as the covariance or the variogram. These metrics estimate the spatial correlation between data points separated by a vector h. However, they are not able to reflect the possibility of a connection between these locations, since they do not involve more than two locations simultaneously.

Conversely, global metrics of connectivity, also called geo-body or geo-object connectivity, are derivative properties of the realisations not defined by the conditioning data. They are capable of expressing these features as they are related to the overall structure of the simulated field. Geo-objects are groups of connected cells on a stratigraphic grid that have one or more rock property values that fall within given property ranges [19]. Based on these geo-objects, fields which otherwise show the same characteristics from a statistical point of view become distinguishable [20]. These metrics are also particularly useful from the practical point of view because the connectivity structure of the heterogeneity strongly influences subsurface flow.

Several implementations are available [21] to perform the static connectivity analysis. In this paper, we applied the method proposed by Deutsch [22]. This approach identifies the connected geo-objects appearing in the stochastic images in order to examine the geometric aspects of the connected features. The connectivity analysis (Fig. 4) consists of the following steps (a minimal sketch of the procedure is given after the list):

  1. Define an attribute cut-off (binary indicator) of net cells

  2. Scan through the field, aggregating corner- or edge-wise connected blocks of net cells

  3. Collect all the identified geo-objects and their sizes for each realization in a data file
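The sketch below illustrates steps 1 to 3 with a plain flood-fill labelling of corner- and edge-connected (8-connected) net cells. It is a simplified base-R stand-in written for this illustration, not a transcription of the Fortran program of [22], and all names are hypothetical.

```r
# Minimal sketch: identify geo-objects on one realisation stored as a matrix
# of back-transformed HU values; net cells are those below the cut-off.
label_geoobjects <- function(realisation, cutoff = 2700) {
  ind <- realisation < cutoff                       # step 1: net-cell indicator
  lab <- matrix(0L, nrow(ind), ncol(ind))
  nbr <- expand.grid(di = -1:1, dj = -1:1)
  nbr <- nbr[!(nbr$di == 0 & nbr$dj == 0), ]        # 8 neighbours (corner + edge)
  n_obj <- 0L
  for (i in seq_len(nrow(ind))) for (j in seq_len(ncol(ind))) {
    if (ind[i, j] && lab[i, j] == 0L) {             # step 2: flood fill a new geo-object
      n_obj <- n_obj + 1L
      stack <- list(c(i, j)); lab[i, j] <- n_obj
      while (length(stack) > 0) {
        cell <- stack[[length(stack)]]; stack[[length(stack)]] <- NULL
        for (k in seq_len(nrow(nbr))) {
          ni <- cell[1] + nbr$di[k]; nj <- cell[2] + nbr$dj[k]
          if (ni >= 1 && ni <= nrow(ind) && nj >= 1 && nj <= ncol(ind) &&
              ind[ni, nj] && lab[ni, nj] == 0L) {
            lab[ni, nj] <- n_obj
            stack[[length(stack) + 1L]] <- c(ni, nj)
          }
        }
      }
    }
  }
  sizes <- if (n_obj > 0) tabulate(lab[lab > 0L]) else integer(0)
  list(n_net = sum(ind),                            # step 3: per-realisation summary
       n_geoobjects = n_obj,
       largest = if (length(sizes) > 0) max(sizes) else 0L)
}
```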

Figure 4: Stages of the connectivity analysis: an individual realization (left), its indicator transform (middle) and the identified geo-objects (right).

3.4 Clustering and scenarios

The result of the connectivity analysis was a data file containing the number of geo-objects occurring in each realisation along with the sizes of these geo-objects. Based on this data file, we constructed a summary table listing the number of net cells, the number of geo-objects and the size of the largest geo-object for each realisation.

As the next step, we performed a clustering of the realisations based on their Euclidean distances. These distances were calculated in the three-dimensional feature space defined by the attributes extracted from the connectivity analysis. A crucial aspect of this process was to link probabilities to the resulting clusters. To do so, we constructed three artificial realisations from the quantiles of the connectivity attributes of the realisations. We assembled these from the values at the 0.25, 0.5 and 0.75 frequencies of the probability distributions of each connectivity attribute. The 0.25 probability of the number of net cells and of the size of the largest geo-object was paired with the 0.75 probability of the number of geo-objects, and vice versa. The reason for this pairing comes from percolation theory [23]: as the number of net cells exceeds a threshold, the geo-objects tend to unite into one large connected component instead of many small independent geo-objects, hence the inverse relationship.

We selected the stochastic images closest to these artificially constructed realisations based on the Euclidean distance matrix of their connectivity attributes, and chose these realisations as the initial cluster centres. To keep the cluster centres fixed and their probabilities known, the clustering was stopped after one iteration, before the recalculation of the cluster centres. The decision to create three clusters was made arbitrarily, as an analogue of the P10, P50 and P90 quantiles, which are often used in the petroleum industry [7].
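The following sketch illustrates the construction of the artificial centre realisations and the single assignment pass. The data frame summary_tab and its column names are illustrative, and the standardisation of the attributes before computing Euclidean distances is an assumption of this sketch, not something the paper states.

```r
# Minimal sketch: quantile-based artificial centres and one-pass clustering.
# 'summary_tab' is assumed to hold one row per realisation with columns
# n_net, n_geoobjects and largest (illustrative names).
z <- scale(summary_tab[, c("n_net", "n_geoobjects", "largest")])   # assumed standardisation

q <- apply(z, 2, quantile, probs = c(0.25, 0.50, 0.75))
# Pair the 0.25 quantiles of n_net and largest with the 0.75 quantile of
# n_geoobjects, and vice versa, following the percolation argument [23].
centres <- rbind(
  Pessimistic = c(q["25%", "n_net"], q["75%", "n_geoobjects"], q["25%", "largest"]),
  Realistic   = c(q["50%", "n_net"], q["50%", "n_geoobjects"], q["50%", "largest"]),
  Optimistic  = c(q["75%", "n_net"], q["25%", "n_geoobjects"], q["75%", "largest"]))

# Replace each artificial centre by its nearest actual realisation, then run a
# single assignment pass with the centres held fixed (no recalculation).
nearest <- apply(centres, 1, function(ct) which.min(colSums((t(z) - ct)^2)))
fixed   <- z[nearest, , drop = FALSE]
cluster <- apply(z, 1, function(r) which.min(colSums((t(fixed) - r)^2)))
```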

After generating the three clusters, we calculated the E-type estimates for each. These estimates served a double purpose. Firstly, they were used to evaluate the results of the clustering process. Secondly, they acted as visualisation tools when presenting the maps derived from them. We based the evaluation of the results on the number of net cells appearing on the E-type maps: we ranked the clusters by the number of net cells appearing on them. As a result of this process, we were able to highlight groups representing low, medium and high numbers of net cells. We named these the 'Pessimistic', 'Realistic' and 'Optimistic' scenarios, according to the picture they present of the possible outcome.
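A short sketch of the per-cluster E-type maps and of the ranking by net cells, assuming sims is the node-by-realisation matrix of back-transformed values and cluster is the assignment obtained above (both names carried over from the earlier sketches):

```r
# Minimal sketch: E-type map of each cluster and ranking by net-cell count.
etype_maps <- sapply(sort(unique(cluster)), function(k)
  rowMeans(sims[, cluster == k, drop = FALSE]))     # one E-type map per cluster

net_cells <- colSums(etype_maps < 2700)             # net cells on each E-type map
scenario  <- c("Pessimistic", "Realistic", "Optimistic")[rank(net_cells)]
```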

4 Results

To reveal the spatial continuity, we used the omnidirectional variogram of the X-Y plane and two other experimental directional variograms, which were obtained from the full image (Fig. 5). We based the initial parameters of the fitting process [16], as well as the acceptance of the resulting model parameters, on visual inspection of the experimental data and the variogram surface. The variogram fitting process resulted in an exponential model with a sill of 1 and a range of 48.47. The main continuity direction was 45°, and the ratio of anisotropy was calculated to be 0.55. The variogram models for the 45° and 135° directions can be seen in (2) and (3), where h stands for the lag distance. The omnidirectional variogram and the variogram for the 135° direction displayed a trend. However, it was observable only beyond the range of correlation and thus did not affect the results of the variogram modelling.

Figure 5: Results of the variogram fitting procedure.

\gamma(h) = 1 - \exp\left(-\frac{3h}{48.47}\right) \qquad (2)
\gamma(h) = 1 - \exp\left(-\frac{3h}{26.66}\right) \qquad (3)

The results of the sequential Gaussian simulation were 200 equally probable realisations, all honouring the sample data values at their locations and reproducing the modelled spatial continuity and the probability distribution of the sample data set. Despite these constraints, the algorithm was able to produce significantly different stochastic images (Fig. 6), owing to the differences between the random paths of the individual realisations.

Figure 6: Individual realizations from the outputs of the Gaussian simulation.

The indicator threshold for the connectivity analysis was set to 2700 Hounsfield Units, corresponding to the highest porosity category on the original CT image [24]. For the identification of connected blocks of net cells on the indicator-recoded images, we took into consideration both corner and edge connections.

The result of clustering the stochastic images based on their connectivity attributes can be seen in Fig. 7. According to the E-type maps of the identified scenarios, they exhibit prominent differences in the overall picture of the simulated spatial features. The differences between the scenarios also manifest in the number and the connectedness of the net cells. For example, in Fig. 7 some pronounced differences can be seen in the development of the low-valued elongated feature along the diagonal of the maps.

Figure 7: Results of the clustering for the 0.5 × 0.5 grid with 50 realizations.

This feature appears as one connected geometry in some clusters, while it seems to be more fragmented in others. The inner heterogeneities of the other low-valued mass in the lower right corner of the maps also appear different in the individual clusters (Fig. 7). It can also be stated that the maps of the clusters convey a more realistic representation of the structure of the heterogeneity than a single realisation highlighted (Fig. 6) from the 200 outputs.

Fig. 8 shows the ratio of net cells calculated from the E-type maps of the three different scenarios for 25, 50, 100 and 200 realisations. The y-axis value is the percentage of the size of the simulation grid, which makes the comparison possible between the different grid resolutions and the whole image. The results in Fig. 8 are grouped by the different grid resolutions. The dashed horizontal black line indicates the ratio of cells below the indicator threshold, as calculated from the whole dataset.

Figure 8: Ratio of net cells to the total number of cells calculated from the E-type maps of the three clusters vs. the number of realizations, grouped by grid resolution.

Based on Fig. 8, our estimates seem well balanced. The 'Realistic' scenario is, in fact, the closest to the true value. The trends visible on the plots suggest that the 1 × 1, 1.5 × 1.5 and 2 × 2 grid resolutions provide the most stable estimates. This is also reflected by the fact that these graphs are the least fluctuating of all. At the same time, the 0.5 × 0.5 resolution presents a clear increasing trend, and the 2.5 × 2.5 grid shows a mild decrease, as the number of realisations increases. According to Fig. 8, the most suitable resolution for the simulation in this case seems to be 1.5 × 1.5. This resolution seems optimal in a global sense, since it yields the smallest differences in the number of net cells.

Fig. 9 shows the same data but aggregated by the number of realisations. The plot reveals another trend: the number of realisations used to generate the clusters seems to have an effect on the reliability of our estimation. Scenarios derived from a smaller number of realisations (25 and 50) are more strongly influenced by the resolution of the simulation grid than scenarios extracted from a higher number of stochastic images.

Figure 9: Ratio of net cells calculated from the E-type maps of the three clusters vs. the grid resolution, grouped by the number of realizations.

The scenarios extracted from 100 and 200 realisations show about the same number of net cells in a global sense. When checking the results for the 1.5 × 1.5 resolution, which was indicated as optimal based on the previous graph, we can say that using 100 realisations is an appropriate number to choose.

Figure 10 presents the distribution of the number of stochastic images between the generated scenarios. The figure points out that these ratios are in no way balanced, particularly when fewer realisations are used to extract the scenarios. This, of course, has an impact on the smoothness of the generated E-type maps.

Figure 10: The ratio of realizations behind the identified clusters.

5 Discussion

During the modelling and simulation, an aim of the simulation algorithm was to reproduce the statistics of the whole distribution as closely as possible. The adequacy of the sample dataset was verified with a Kolmogorov-Smirnov test. The modelling of the spatial continuity was done based on the whole dataset, and the accuracy of the fitted model was checked visually. The choice of sequential Gaussian simulation was supported by the observation that the majority of net cells (cells that are valued below 2700 Hounsfield Units) are positioned in a non-structured way.

The simulation was set to produce 200 realisations. The reason for generating several realisations to explore spatial uncertainty is trivial; however, it is still not clear how to specify the necessary number of realisations. More realisations mean more detail in the local probability distributions, but at the price of a large amount of information to be handled. Highlighting a few realisations, such as the P10, P50 and P90 quantiles, is an often applied approach in the petroleum industry that seems convenient. Nevertheless, when checking individual stochastic images, it is obvious that they largely deviate from reality in terms of local heterogeneities (Fig. 1 and Fig. 6). Producing estimates from the realisations may help to overcome this problem, but when calculated from a large number of stochastic images, these tend to overly generalise and hide the important local detail.

The goal of this work was to find a way to fill the gap between the information content of the individual stochastic images and the entirety of all images. The characteristic patterns of the larger-scale heterogeneities were uncovered by applying a distance-based classification based on the derived connectivity properties. The resulting clusters were ranked in ascending order based on the global picture of their E-type estimates and the number of net cells appearing in them. From the uncertainty perspective, these results suggest that both the chosen grid resolution and the number of the stochastic images generated have an influence on the appropriateness of our estimates. According to the results, too much detail from high grid resolutions may be problematic, as may the reverse. Estimates from the scenarios are most stable when the grid resolution is the same as or slightly coarser than the resolution of the whole dataset (Fig. 8). This, of course, can be stated with certainty only in the case of this specific dataset. A more general answer should be subject to further research and is beyond the scope of this paper. The number of realisations also seems to have an effect on the resulting scenarios. Estimates derived from fewer realisations are more influenced by the grid resolution than the scenarios extracted from more stochastic images. For the scenario approach, the use of at least 100 realisations seems advisable in order to draw reliable conclusions (Fig. 9). Fig. 10 showed that the distribution of the realisations between the extracted scenarios is not balanced. Although the individual stochastic images are equally probable, the possible scenarios outlined by them have different probabilities.

6 Conclusions

The method presented in this paper bridges the gap in information content by highlighting the patterns of the larger-scale heterogeneities from the simulation outputs. By the term 'patterns of larger-scale heterogeneities' we mean the scenarios that can be extracted from the full space of output uncertainty. Apart from their ability to reflect the differences between groups of realisations, connectivity metrics are also useful from the practical point of view. These may be of high significance when deriving further reservoir geological or flow properties for petroleum or hydrogeological applications. Similarly, connectivity attributes provide a useful tool to assess the spatial patterns of soil or groundwater contaminants. They also enable the practitioner to perform the uncertainty analysis with respect to any particular interval deemed important, even when the use of the indicator approach was not justifiable for the simulation.

The viability of the methodology can be extended to cases when there are multiple indicator variables in question, for example when working with different lithology types. Constructing a different indicator variable for each enables us to repeat the presented workflow and to assess the occurrence of the different lithology types on a scenario level. The proposed method may be further extended in several ways. One approach would be to consider not only 2D but also 3D grids and to include additional attributes in the clustering process. Such additional properties would be the connectivity function or tortuosity-like properties, the use of which makes more sense when working with 3D data sets. The results coming from the presented method are similar to the conclusions one could draw using the traditional measures of uncertainty mentioned in the Introduction. For example, an E-type map and the corresponding conditional variances can also highlight areas of different levels of uncertainty. However, traditional approaches only allow us to see the entire picture together. They do not offer means to separate the stochastic images representing these different levels of uncertainty. Solving this task and selecting the realisations later used for the flow simulation is usually left to the practitioner's subjective judgment. The strength of the method described here is that it eliminates subjectivity and supports reproducible decision-making.

References

[1] Goovaerts P., Geostatistical modeling of uncertainty in soil science, Geoderma, 2001, 103, 3–26. DOI: 10.1016/S0016-7061(01)00067-2

[2] Goovaerts P., Geostatistical modeling of the spaces of local, spatial, and response uncertainty for continuous petrophysical properties. In: Coburn T.C., Yarus J.M., Chambers R.L. (Eds.), Stochastic modeling and geostatistics: Principles, methods, and case studies, volume II, AAPG, Tulsa, 2006, 1–21

[3] Deutsch C.V., Journel A.G., GSLIB: Geostatistical Software Library and User's Guide, Oxford University Press, USA, 1993

[4] Shannon C.E., A Mathematical Theory of Communication, Bell System Technical Journal, 1948, 27, 379–423. DOI: 10.1002/j.1538-7305.1948.tb01338.x

[5] Journel A.G., Deutsch C.V., Entropy and spatial disorder, Mathematical Geology, 1993, 25, 329–355. DOI: 10.1007/BF00901422

[6] Geiger J., Újhelyi J., Application of Bayes' Theorem and Entropy Sets in the Evaluation of Uncertainty. In: Theories and Applications in Geomathematics, Selected studies of the 2012 Croatian-Hungarian Geomathematical Convent, Opatija, Croatia, GeoLitera Publishing House, 2012, 15–37

[7] Scheidt C., Caers J., Representing Spatial Uncertainty Using Distances and Kernels, Mathematical Geosciences, 2009, 41(4), 397–419. DOI: 10.1007/s11004-008-9186-0

[8] Armstrong M., Ndiaye A., Razanatsimba R., Galli A., Scenario Reduction Applied to Geostatistical Simulations, Mathematical Geosciences, 2013, 45(2), 165–182. DOI: 10.1007/s11004-012-9420-7

[9] Allard D., On the Connectivity of Two Random Set Models: The Truncated Gaussian and the Boolean, Quantitative Geology and Geostatistics, Springer, 1993, 467–478. DOI: 10.1007/978-94-011-1739-5_37

[10] Deutsch C.V., Geostatistical reservoir modeling, 2nd Edition, Oxford University Press, New York, 2014

[11] Isaaks E.H., The Application of Monte Carlo Methods to the Analysis of Spatially Correlated Data, PhD Thesis, Stanford University, 1990

[12] Goovaerts P., Geostatistics for natural resources evaluation, Oxford University Press, New York, 1997

[13] Gómez-Hernández J.J., Srivastava R.M., ISIM3D: An ANSI-C three-dimensional multiple indicator conditional simulation program, Computers & Geosciences, 1990, 16(4), 395–440. DOI: 10.1016/0098-3004(90)90010-Q

[14] Goovaerts P., Impact of the simulation algorithm, magnitude of ergodic fluctuations and number of realizations on the spaces of uncertainty of flow properties, Stochastic Environmental Research and Risk Assessment (SERRA), 1999, 13(3), 161–182. DOI: 10.1007/s004770050037

[15] Neufeld C., Deutsch C.V., Why Choosing One Realization for Mine Planning is a Bad Idea, CCG Report, Centre for Computational Geostatistics, Department of Civil & Environmental Engineering, University of Alberta, 2007

[16] Pebesma E.J., Multivariable geostatistics in S: the gstat package, Computers & Geosciences, 2004, 30(7), 683–691. DOI: 10.1016/j.cageo.2004.03.012

[17] Diggle P.J., Ribeiro P.J., Model-based Geostatistics (Springer Series in Statistics), Springer-Verlag, New York, 2007. DOI: 10.1007/978-0-387-48536-2

[18] Wickham H., ggplot2: Elegant Graphics for Data Analysis, Springer-Verlag, New York, 2009. DOI: 10.1007/978-0-387-98141-3

[19] Hovadik J.M., Larue D.K., Static characterizations of reservoirs: refining the concepts of connectivity and continuity, Petroleum Geoscience, 2007, 13(3), 195–211. DOI: 10.1144/1354-079305-697

[20] Renard P., Allard D., Connectivity metrics for subsurface flow and transport, Advances in Water Resources, 2013, 51, 168–196. DOI: 10.1016/j.advwatres.2011.12.001

[21] Pardo-Igúzquiza E., Dowd P.A., CONNEC3D: a computer program for connectivity analysis of 3D random set models, Computers & Geosciences, 2003, 29(6), 775–785. DOI: 10.1016/S0098-3004(03)00028-1

[22] Deutsch C.V., Fortran programs for calculating connectivity of three-dimensional numerical models and for ranking multiple realizations, Computers & Geosciences, 1998, 24(1), 69–76. DOI: 10.1016/S0098-3004(97)00085-X

[23] Stauffer D., Aharony A., Introduction to percolation theory, Taylor & Francis, London, 1992

[24] Győry L., Kristóf G., Balogh M., Geiger J., Horváth J., I-Core numerical rock and pore model. In: Geiger J., Pál-Molnár E., Malvić T. (Eds.), Theories and applications in geomathematics, Selected studies of the 2012 Croatian-Hungarian Geomathematical Convent, Opatija, GeoLitera Publishing House, Szeged, Hungary, 2012, 49–69

Received: 2016-1-21
Accepted: 2016-7-13
Published Online: 2016-12-30
Published in Print: 2016-1-1

© 2016 Noémi Jakab

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.
