Synthetic model - Mansfield area
We will be inverting gravity and magnetic data generated for a synthetic model inspired by geological measurements collected in the Mansfield area (Victoria, Australia), as shown below. Details about the model used here can be found in Giraud et al. (2017) and Giraud et al. (2019) (geophysics and petrophysics) and Pakyuz-Charrier (2017, 2018) (geology).
Density contrast and magnetic susceptibility
The density contrast and magnetic susceptibility models used here are shown in Figure 1 (top). The corresponding geophysical measurements are shown in Figure 1 (bottom).
Structure term
The entropy used for the structure term constraint is presented in Figure 2.
Geological model
The geological reference model is shown in Figure 1. The corresponding simulated probabilistic geological model is shown in Figure 4. The geological model was derived by Pakyuz-Charrier (2018).
Modelling petrophysical measurements
The petrophysical measurements are modelled using a Gaussian mixture model, as shown in Figure 3. It presents two ambiguities: lithologies 1 and 2, and lithologies 5 and 6, respectively, have the same density contrast. The same applies to lithologies 3 and 5, and 2 and 4, respectively, for magnetic susceptibility.
Constrained potential field inversions
A step-by-step guided process from unconstrained inversion to geologically constrained joint inversion
September 2019, AEGC Perth
Clément Barrière and Jérémie Giraud
Disclaimer
All material is provided without any warranty whatsoever. While we make every effort to ensure that this Jupyter notebook is accurate and up to date, the possibility of human error remains, and the code provided here may by no means provide exact, certain answers to any modelling problem. We do not guarantee, and accept no legal liability whatsoever arising from or connected to, the accuracy or completeness of any material contained in this Jupyter notebook or obtained through usage of it. The information here is subject to change without any notice or legal procedure. You are reading the text and code provided here and using this notebook at your own risk.
In this notebook, we propose to run a series of inversions with increasingly strong constraints. The aim of this exercise is to understand the influence of constraints on inversion. These constraints range from the simple application of a depth weighting, to counteract the decay of the sensitivity with depth, to the application of spatially varying petrophysical constraints in joint inversion.
Introductory note
The Python code proposed in this practical tutorial provides a simplified version of the 3D parallel inversion platform Tomofast-x (Martin et al., 2013, Martin et al., 2018, Giraud et al., 2018, Giraud et al., 2019), which we call 'Tomoslow-2D'. While the implementation of the clustering algorithm and of the gradient constraints differs, the philosophy remains the same, and results obtained with the code shown here present very similar features. It has been developed for educational and prototyping purposes.
The aim of this Jupyter notebook is to provide a high-level understanding of the mechanics of constrained and joint inversion codes similar in philosophy to the implementation we follow. In the current implementation, the cell size is chosen to be unity (1 m × 1 m), but the problem is scalable to any dimension.
Acknowledgements
The authors acknowledge Mark Jessell's support throughout the project.
As one of the prerequisites to run this code, we first need to load libraries for visualisation and numerical computations.
Following this, we need to import modules that contain functions called by the code during inversion.
The module `Calc_Jac` contains the code necessary to calculate the Jacobian matrix of the problem considered.
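The actual `grav_jac` implementation is not reproduced here. As an illustration of what such a Jacobian computation involves, the sketch below (a simplification, not the notebook's code) approximates each 1 m × 1 m cell as a point mass and stores the vertical gravity response of a unit density contrast in every cell:

```python
import numpy as np

# Minimal sketch of a gravity Jacobian (not the actual Calc_Jac code):
# each cell is approximated as a point mass, and each entry J[i, j] holds
# the vertical gravity response at station i of a unit density in cell j.
G_CONST = 6.674e-11  # gravitational constant (m^3 kg^-1 s^-2)

def gravity_jacobian(x_obs, x_cells, z_cells, cell_area=1.0):
    """Sensitivity of g_z at surface stations to the density of each cell."""
    n_data, n_cells = len(x_obs), len(x_cells)
    J = np.zeros((n_data, n_cells))
    for i, xo in enumerate(x_obs):
        dx = x_cells - xo
        dz = z_cells  # stations assumed at z = 0, depths positive downward
        r2 = dx**2 + dz**2
        # vertical component of the point-mass attraction, per unit density
        J[i, :] = G_CONST * cell_area * dz / r2**1.5
    return J

# 10 stations over a 10 x 5 cell model with 1 m cells
J = gravity_jacobian(np.linspace(0.0, 9.0, 10),
                     x_cells=np.repeat(np.arange(10.0) + 0.5, 5),
                     z_cells=np.tile(np.arange(5.0) + 0.5, 10))
print(J.shape)  # one row per station, one column per cell
```

Note how the sensitivity decays with depth, which is the motivation for the depth weighting introduced later in the notebook.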
The module Joint_inversion
contains code to calculate depth weighting and functions to apply constraints to inversion.
The file names we define will be used and explained later in the notebook.
FORMULATION OF THE COST FUNCTION
We formulate the inverse problem in a least-squares sense as follows:

$$\theta(\mathbf{m}) = \|\mathbf{W}_d\,(\mathbf{d} - g(\mathbf{m}))\|_2^2 + \alpha_m \|\mathbf{W}_m\,(\mathbf{m} - \mathbf{m}_p)\|_2^2 + \alpha_g \|\mathbf{W}_g\, \nabla\mathbf{m}\|_2^2 + \alpha_p \|\mathbf{W}_p\,(\mathbf{m} - \mathbf{m}_{petro})\|_2^2 \qquad (1)$$

Where the different terms mean:
- $\mathbf{m}$ is the inverted model.
- $\mathbf{d}$ refers to the geophysical data.
- $\mathbf{m}_p$ stands for the prior model.
- $g(\mathbf{m})$ refers to the forward response of the respective model.
- $\|\mathbf{W}_d\,(\mathbf{d} - g(\mathbf{m}))\|_2^2$ represents the data term (Lines and Treitel, 1984).
- $\alpha_m \|\mathbf{W}_m\,(\mathbf{m} - \mathbf{m}_p)\|_2^2$ represents the model term (Hoerl and Kennard, 1970).
- $\alpha_g \|\mathbf{W}_g\, \nabla\mathbf{m}\|_2^2$ is the structure term, which stands for local gradient regularization, where $\nabla\mathbf{m}$ represents the model gradient and $\mathbf{W}_g$ represents the damping of the gradient constraints (Li and Oldenburg, 1996, 1998).
- $\alpha_p \|\mathbf{W}_p\,(\mathbf{m} - \mathbf{m}_{petro})\|_2^2$ represents the petrophysics term, or clustering constraint. Here, $\mathbf{m}_{petro}$ is derived from the petrophysical distribution $\mathcal{P}$, which is given by the following equation: $$\mathcal{P}(\mathbf{m}) = \prod_{i=1}^{N} \sum_{k=1}^{K} \omega_{k,i}\, \mathcal{N}(m_i;\, \mu_k, \sigma_k),$$ where $\omega_{k,i} = 1/K$ everywhere if no geology is available, or equals the observation probability of lithology $k$ in the $i$-th cell otherwise (Giraud et al., 2017).

Here, the matrices $\mathbf{W}_d$, $\mathbf{W}_m$ and $\mathbf{W}_p$ are spherical matrices (i.e., scalar multiples of the identity).
Solver
We solve the inverse problem using the LSQR algorithm (Paige and Saunders, 1982).
In the following part of the code, A and B refer respectively to a matrix and a vector which will be used to solve the problem in a least-squares sense using the LSQR algorithm (Paige and Saunders, 1982).
Then the code uses the `lsqr` function from the `scipy.sparse.linalg` library to solve the system of equations $A\mathbf{m} = B$. The `lsqr` function returns the solution model, $\mathbf{m}$.
For density contrast inversion, the `lsqr` function parameters are left at their default values, except for the number of iterations, which is set to 100 to achieve a good compromise between speed and accuracy. The magnetic inversion requires a smaller tolerance to get a better approximated solution. More information about the way the `lsqr` function is built is available on the website: https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.sparse.linalg.lsqr.html
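As an illustration of the `lsqr` call, the toy system below (not part of the notebook) solves a small over-determined problem with the iteration cap and the tightened tolerances discussed above:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

# Hypothetical toy system illustrating the lsqr call used in this notebook:
# an over-determined A m = B solved in the least-squares sense.
rng = np.random.default_rng(0)
A = csr_matrix(rng.normal(size=(50, 20)))
m_true = rng.normal(size=20)
B = A @ m_true

# iter_lim plays the role of the iteration cap mentioned above (100 for the
# density contrast inversion); atol/btol tighten the stopping tolerance, as
# suggested for the magnetic inversion.
result = lsqr(A, B, iter_lim=100, atol=1e-10, btol=1e-10)
m_inv = result[0]  # the solution model
print(result[2])   # number of iterations actually performed
```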
The normal equations associated with this problem, $A^T A\,\mathbf{m} = A^T B$, can be rewritten as $\mathbf{m} = (A^T A)^{-1} A^T B$. The calculation of an exact solution is limited to small models, as the cost of computing $(A^T A)^{-1}$ increases rapidly with the size of the model. For large models, the LSQR algorithm provides an approximate solution. In this notebook, the system is quite small, so an exact solution could have been calculated, but using the LSQR algorithm allows us to work on bigger models if necessary.
1. UNCONSTRAINED GRAVITY INVERSION
The simplest inversion we perform is constrained only by the model smallness term. The function to optimise is given as:

$$\theta(\mathbf{m}) = \|\mathbf{W}_d\,(\mathbf{d} - g(\mathbf{m}))\|_2^2 + \alpha_m \|\mathbf{W}_m\,(\mathbf{m} - \mathbf{m}_p)\|_2^2 \qquad (2)$$
First, the code applies a depth weighting (DW) to the Jacobian matrix using a power law (Li and Oldenburg, 1998). This balances the relative importance of the different model-cells with depth via the following equation:

$$w(z) = \frac{1}{(z + z_0)^{\beta/2}}, \qquad (3)$$

where $z$ represents the depth of the cell, $\beta$ is usually set according to the physics of the problem, and $z_0$ depends on the cell size (Li and Oldenburg, 1996, 1998).
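A minimal sketch of how a depth-weighting matrix following eq. (3) could be computed; the values of `beta` and `z0` below are illustrative, not necessarily those used by `Calc_DW`:

```python
import numpy as np

# Sketch of a depth-weighting matrix following eq. (3): w(z) = 1/(z + z0)^(beta/2).
# Cell size is unity in this notebook; beta and z0 are illustrative values here.
def depth_weighting(nz, nx, beta=2.0, z0=0.5):
    z = np.arange(nz) + 0.5              # cell-centre depths for 1 m cells
    w = 1.0 / (z + z0) ** (beta / 2.0)   # eq. (3)
    return np.tile(w[:, None], (1, nx))  # same weight for every column

W = depth_weighting(nz=5, nx=10)
print(W[:, 0])  # weights decrease with depth
```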
LSQR formulation for least-squares inversion
The LSQR algorithm approximates the solution of the following problem, where $\mathbf{x}$ is the quantity inverted for:

$$\min_{\mathbf{x}} \|A\mathbf{x} - B\|_2^2$$

We can therefore solve Equation (2) by setting:
- the matrix $A$ as $A = \begin{bmatrix} G \\ \alpha_m I \end{bmatrix}$, with $G$ the Jacobian matrix and $I$ an identity matrix multiplied by a specific weight to match the size of the vector $B$;
- the vector $B$ as $B = \begin{bmatrix} \mathbf{d} \\ \alpha_m \mathbf{m}_p \end{bmatrix}$, with $\mathbf{d}$ the data vector and $\mathbf{m}_p$ the prior model, which is initialised to 0.

Equation (2) can now be written as:

$$\min_{\mathbf{m}} \|A\mathbf{m} - B\|_2^2$$

In the following lines, the code builds these elements.
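The assembly of A and B can be sketched with `scipy.sparse` as follows; the variable names and placeholder data here are illustrative, not the notebook's own:

```python
import numpy as np
from scipy.sparse import identity, vstack, csr_matrix

# Sketch of assembling A and B for the smallness-constrained problem.
n_data, n_cells = 10, 50
G = csr_matrix(np.random.default_rng(1).normal(size=(n_data, n_cells)))
d = np.zeros(n_data)          # observed data (placeholder values)
m_prior = np.zeros(n_cells)   # prior model, initialised to 0
alpha_m = 0.1                 # smallness weight

# A = [G; alpha_m * I], B = [d; alpha_m * m_prior]
A = vstack([G, alpha_m * identity(n_cells)])
B = np.concatenate([d, alpha_m * m_prior])
print(A.shape, B.shape)
```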
Note:
Several functions used in the code are not detailed here to make the notebook more readable. These functions are:
- `set_mod` creates a Python matrix from data stored in a text file.
- `grav_jac` calculates the Jacobian matrix for the gravity inverse problem.
- `mag_jac` calculates the Jacobian matrix for the magnetic inverse problem.
- `get_dim` gets the dimensions of a matrix stored in a text file.
- `posx` generates a vector of positions along the horizontal axis used in the calculation of the Jacobian matrix.
- `Calc_DW` calculates depth weighting. It creates matrices containing the depth weighting values for the gravity and the magnetic models.
- `calc_grad` calculates the gradients of the model.
- `lsqr` solves a system of equations in a least-squares framework using the LSQR algorithm (Paige and Saunders, 1982).
Two matrices contain the values of the DW that will be used to precondition the Jacobian matrix. For the gravity and magnetic problems, depth weighting is calculated using equation (3) as follows:
$\beta = 2$ for gravity (Boulanger and Chouteau, 2001);
$\beta = 3$ for magnetic data (Li and Chouteau, 1998).
Depth weighting is then applied to the Jacobian matrix (`Gij`):
As explained above, we will run a series of inversions with increasing constraints. Under certain circumstances, these constraints might have competing influences on inversion due to the non-uniqueness of the solution and the topography of the cost function. Therefore, we need to implement different weights to strengthen or reduce the effect of a given constraint.
The first weight we add here is grav_w
for the unconstrained density contrast inversion.
The next step is to build the matrix $A$ and the vector $B$: here, the Jacobian matrix is multiplied by a weight:
After inversion, we use a number of plotting functions for quality control and visualisation from the `pyplot` library:
We also define the data misfit and the model misfit as follows:
These two values allow us to keep track of the difference between the inverted models and the true ones.
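A minimal sketch of these two quality-control quantities, assuming simple L2 norms (the notebook may normalise them differently):

```python
import numpy as np

# Minimal sketch of the two misfit measures used for quality control.
def data_misfit(d_obs, d_calc):
    """L2 norm of the difference between observed and modelled data."""
    return np.linalg.norm(d_obs - d_calc)

def model_misfit(m_true, m_inv):
    """L2 norm of the difference between the true and inverted models."""
    return np.linalg.norm(m_true - m_inv)

print(data_misfit(np.array([1.0, 2.0]), np.array([1.0, 2.0])))  # 0.0 for a perfect fit
```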
Exercise: What happens if you change `grav_w`?
2. ADDING THE STRUCTURE TERM
In this section, we utilise the information entropy (Shannon, 1948, Wellmann and Regenauer-Lieb, 2012) to adjust the strength of the smoothness term locally. Schweizer et al. (2017) define information entropy as follows: information entropy "quantifies the amount of missing information and hence, the uncertainty at a discrete location". For the $i$-th cell, it is given by (Shannon, 1948):

$$H_i = -\sum_{k=1}^{K} p_{k,i} \log p_{k,i},$$

where $p_{k,i}$ is the probability of observing lithology $k$ in the $i$-th cell.
Here, we normalise $H$ and calculate $\mathbf{W}_g$ such that $H$ reaches its maximum value where uncertainty is high and, conversely, $\mathbf{W}_g$ reaches its maximum where entropy is low. We calculate $\mathbf{W}_g$ (see eq. 1) as follows (Giraud et al., 2019):

$$\mathbf{W}_g = 1 - \frac{H}{\max(H)}$$
In the following lines, we compute the gradients of the model in the horizontal and vertical directions ($\nabla_x\mathbf{m}$ and $\nabla_z\mathbf{m}$, respectively) using a forward difference scheme. We then perform element-wise multiplication of the gradients with $\mathbf{W}_g$ to adjust the strength of the smoothness constraint applied during inversion. This process relaxes the smoothness constraint in uncertain parts of the model while maintaining it where uncertainty is lower.
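The gradient computation and its element-wise weighting can be sketched as follows; this is a simplified stand-in for `calc_grad`, not its actual code:

```python
import numpy as np

# Sketch of the forward-difference model gradients and their element-wise
# weighting by W_g (illustrative; calc_grad may differ in detail).
def weighted_gradients(m, W_g):
    """m, W_g: 2D arrays (nz x nx). Returns the weighted horizontal and
    vertical gradients of the model."""
    grad_x = np.zeros_like(m)
    grad_z = np.zeros_like(m)
    grad_x[:, :-1] = m[:, 1:] - m[:, :-1]   # forward difference, horizontal
    grad_z[:-1, :] = m[1:, :] - m[:-1, :]   # forward difference, vertical
    # where W_g is small (high-uncertainty cells), the smoothness penalty relaxes
    return W_g * grad_x, W_g * grad_z

m = np.arange(12, dtype=float).reshape(3, 4)
W_g = np.ones((3, 4))
gx, gz = weighted_gradients(m, W_g)
print(gx)
print(gz)
```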
We also define a weight (`w_entropy`) to weight this structure (i.e. smoothness) term in the cost function.
After this, we build the matrix `A` and the vector `B`. We build `A` by concatenating the Jacobian matrix with the model gradient operators multiplied by `w_entropy`, and `B` by concatenating the data vector with all-zeros vectors, indicating that the smoothness term should be minimised.
Exercise: What happens if you change `w_entropy`?
3. ADDING PETROPHYSICAL CONSTRAINTS
In this part of the code we add the petrophysical constraints term to the cost function (eq. 1):
Single domain inversions
Gravity modelling
As shown in Figure 1, lithologies [1 and 2] and [5 and 6] have the same density contrast. We can therefore model density contrasts using a mixture of 4 Gaussians. The individual Gaussians are parameterised as:
mean values of: 0, 100, 200 and 300 kg/m³
a standard deviation of 40 kg/m³.
Magnetic modelling
Using the same logic as for gravity, lithologies [2 and 4] and [3 and 5] have the same magnetic susceptibility. We can therefore model magnetic susceptibility with a mixture of 4 Gaussians. The Gaussians are parameterised as:
mean values of: 0, 0.025, 0.05 and 0.075 SI
standard deviation of 0.0001 SI (suggested value as starting point for tests in this notebook)
An example of such a Gaussian mixture model is presented in Figure 5.
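As an illustration, the 1-D mixture for magnetic susceptibility can be evaluated with `scipy.stats.norm`. Equal weights are assumed (no geological information), and the standard deviation below is an illustrative value, wider than the 0.0001 SI suggested above:

```python
import numpy as np
from scipy.stats import norm

# Sketch of the 1-D Gaussian mixture for magnetic susceptibility, with the
# centres quoted above (0, 0.025, 0.05, 0.075 SI); sigma is illustrative.
centres = np.array([0.0, 0.025, 0.05, 0.075])  # SI
sigma = 0.005                                   # illustrative value for this sketch
weights = np.full(4, 0.25)                      # 1/K everywhere without geology

def mixture_pdf(x):
    x = np.atleast_1d(x)
    return sum(w * norm.pdf(x, mu, sigma) for w, mu in zip(weights, centres))

xs = np.linspace(-0.01, 0.09, 201)
p = mixture_pdf(xs)
print(xs[np.argmax(p)])  # the densest points sit at the mixture centres
```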
Use of probabilistic geological modelling
The results of the probabilistic geological modelling we use consist of the observation probabilities of the lithologies. This is useful to adapt the weights of the different lithologies in the Gaussian mixture model according to their observation probability.
Implementation
We optimise the petrophysical constraints term using a k-nearest neighbour algorithm, in which the distance to the different clusters is calculated using the Gaussian mixture model (Mahalanobis distance). The implementation is explained below.
In what follows, we perform inversion with petrophysical constraints. We start with gravity inversion, followed by the inversion of magnetic data.
The following lines compute a first resulting model using the setup from part 1. We use this model as a prior model to compute the values of the Gaussian mixture. It is represented by the term `pdfs`, calculated by the function `PDF`.
Next, the code enters a loop described below.
For each cell of the model, we determine which of the 4 Gaussians best matches the current model value. For each model-cell, the value of the centre of the best-matching Gaussian in the mixture is stored in the vector `P_vector`. This vector is then multiplied by a weight (`weight`); when geological information is available, this value depends on the probability of the different lithologies. An identity matrix (`pfdp`) is also created to match the size of `P_vector`. In this part of the code, we define `pgrav_w` to adjust the strength of the petrophysical constraint during inversion.
Then we concatenate `A` with this identity matrix and `B` with `P_vector`, and solve the system using LSQR. At each iteration, the resulting model is used to create a new `P_vector` for the next pass of the loop.
At the end of the loop, plotting functions are used to visualise the solution. As we can observe after executing these lines, the inverted values cluster around the centres of the Gaussians of the mixture model.
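The core of the loop can be sketched as follows: with equal weights and a common standard deviation, picking the best-matching Gaussian reduces to snapping each model value to the nearest centre. This is a simplified stand-in for the notebook's implementation:

```python
import numpy as np

# Sketch of building P_vector: each model value is mapped to the centre of
# the best-matching Gaussian, which for equal weights and a common sigma is
# simply the nearest centre.
def nearest_centres(m, centres):
    idx = np.argmin(np.abs(m[:, None] - centres[None, :]), axis=1)
    return centres[idx]

centres = np.array([0.0, 100.0, 200.0, 300.0])  # density-contrast centres from above
m = np.array([12.0, 160.0, 95.0, 310.0])        # illustrative model values
P_vector = nearest_centres(m, centres)
print(P_vector)
```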
Petrophysical constraints using probabilistic geological modelling
When probabilistic geological modelling results are available, we can condition the petrophysical constraints using the observation probabilities of the different rock types.
These probabilities are obtained using the Monte Carlo Uncertainty Estimation method (MCUE). In MCUE, a series of geological models honouring geological measurements is generated. These models are then used to calculate a probabilistic model comprised of the observation probability of the different lithologies in every cell of the model (Pakyuz-Charrier et al, 2018).
When available, such probabilities can be used to weight the different Gaussians of the mixture model representing the petrophysical measurements in the area according to the lithology they correspond to (Giraud et al., 2017, 2019). In such a case, the petrophysical constraints applied vary across the model according to the probabilistic geological modelling results.
The probabilities used here are shown below.
Exercise: What happens if you change `entropy` to `True`?
4. MAGNETIC INVERSION
The inversion of magnetic data follows the same philosophy as that of the gravity data.
In terms of petrophysical properties, the centres of the Gaussians are: 0, 0.025, 0.05, 0.075 SI.
Parameters of the magnetic inversion:
- `filename_mag`: the name of the file containing the magnetic data.
- `mag_w`: weight of the magnetic data.
- `pmag_w`: weight of the petrophysical constraints for the magnetic model.
- `w_entropy`: weight of the information entropy.
- `pc`: Boolean to add the petrophysical constraints to the system; set to `False` by default.
- `entropy`: Boolean to add the entropy and the gradients of the model to the system; set to `False` by default.
- `Geol`: Boolean to add the geological information to the model; set to `False` by default.
Exercise: What happens if you change:
1. `geol` to `True`,
2. `entropy` to `True`,
3. `geol` and `entropy` to `True`?
5. JOINT INVERSION
Now that we have performed single-domain inversions using petrophysical constraints, we can combine them to perform joint inversion.
The method is very similar to single-domain inversion. The only difference is that we compute the Gaussian mixture model in the cross-plot domain (see Figure 5), via the Python function `multivariate_normal.pdf` from the `scipy.stats` library. This returns a `P_vector` that will be used in the LSQR.
More information on the `multivariate_normal.pdf` function is available at this link: http://lagrange.univ-lyon1.fr/docs/scipy/0.17.1/generated/scipy.stats.multivariate_normal.html
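A sketch of how such a cross-plot mixture can be evaluated with `multivariate_normal.pdf`; the pairing of centres and the diagonal covariance below are illustrative assumptions, not the notebook's values:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Sketch of the joint (density, susceptibility) mixture in the cross-plot
# domain; the centre pairing and the diagonal covariance are illustrative.
centres = [(0.0, 0.0), (100.0, 0.025), (200.0, 0.05), (300.0, 0.075)]
cov = np.diag([40.0**2, 0.005**2])  # assumed spherical per property

def joint_mixture_pdf(point, weights=None):
    if weights is None:
        weights = np.full(len(centres), 1.0 / len(centres))  # no geology
    return sum(w * multivariate_normal.pdf(point, mean=mu, cov=cov)
               for w, mu in zip(weights, centres))

# a point close to a cluster centre scores higher than one between clusters
print(joint_mixture_pdf([105.0, 0.024]) > joint_mixture_pdf([150.0, 0.04]))
```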
The code below offers the possibility to perform joint inversion using global petrophysical constraints (i.e., no geological information), petrophysical constraints conditioned by probabilistic geological modelling, and structural constraints using geological uncertainty information.
Note: not all parameters or combinations of constraints may work well at the first attempt.
`Joint_inversion` parameters:
- `filename_grav` and `filename_mag`: the names of the files containing the gravimetric and magnetic data.
- `grav_w` and `mag_w`: weights of the gravimetric and magnetic data, respectively.
- `pgrav_w` and `pmag_w`: weights of the petrophysical constraints for the gravimetric and magnetic models.
- `w_entropy`: weight for the structural term.
- `entropy`: Boolean to add the entropy and the gradients of the model to the system; set to `False` by default.
- `verbose`: Boolean to view the solution or not; set to `True` by default.
- `Geol`: Boolean to add the geological information to the model; set to `False` by default.
Exercise: What happens if you change:
1. `geol` to `True`,
2. `entropy` to `True`,
3. `geol` and `entropy` to `True`,
4. the prior model? How do you get a more geologically sensible model?
Boulanger, O. and Chouteau, M., 2001, Constraints in 3D gravity inversion, Geophys. Prospect., 49(2), 265–280, doi:10.1046/j.1365-2478.2001.00254.x.
Giraud, J., E. Pakyuz-Charrier, M. Jessell, M. Lindsay, R. Martin, and V. Ogarko, 2017, Uncertainty reduction through geologically conditioned petrophysical constraints in joint inversion: GEOPHYSICS, 82, ID19-ID34.
Giraud, J., Lindsay, M., Ogarko, V., Jessell, M., Martin, R. & Pakyuz-Charrier, E., 2019. Integration of geological uncertainty into geophysical inversion by means of local gradient regularization, Solid Earth, 10, 193–210.
Giraud, J., Ogarko, V., Lindsay, M., Pakyuz-Charrier, E., Jessell, M., and Martin, R., 2019, Sensitivity of constrained joint inversions to geological and petrophysical input data uncertainties with posterior geological analysis, Geophysical Journal International, Volume 218, Issue 1, July 2019, Pages 666–688, https://doi.org/10.1093/gji/ggz152
Li, Y. & Chouteau, M., 1998. Three-dimensional gravity modeling in all space, Surveys in Geophysics, 19, doi:10.1023/A:1006554408567.
Li, Y. & Oldenburg, D., 1996. 3-D inversion of magnetic data, GEOPHYSICS, 61, doi:10.1190/1.1443968.
Li, Y. & Oldenburg, D., 1998. 3-D inversion of gravity data, GEOPHYSICS, 63, doi:10.1190/1.1444302.
Martin, R., Monteiller, V., Komatitsch, D., Perrouty, S., Jessell, M., Bonvalot, S. & Lindsay, M.D., 2013. Gravity inversion using wavelet-based compression on parallel hybrid CPU/GPU systems: application to southwest Ghana, Geophys. J. Int., 195, 1594–1619.
Martin, R., Ogarko, V., Komatitsch, D. & Jessell, M., 2018. Parallel three-dimensional electrical capacitance data imaging using a nonlinear inversion algorithm and Lp norm-based model, Measurement, 128, 428–445.
Paige, C.C. & Saunders, M.A., 1982. LSQR: an algorithm for sparse linear equations and sparse least squares, ACM Trans. Math. Softw., 8, 43–71.
Pakyuz-Charrier, E., & Intrepid Geophysics. (2017). Mansfield (Victoria, Australia) area original GeoModeller model and relevant MCUE outputs [Data set]. Zenodo. http://doi.org/10.5281/zenodo.848225
Pakyuz-Charrier, E., Lindsay, M., Ogarko, V., Giraud, J., and Jessell, M.: Monte Carlo simulation for uncertainty estimation on structural data in implicit 3-D geological modeling, a guide for disturbance distribution selection and parameterization, Solid Earth, 9, 385-402, https://doi.org/10.5194/se-9-385-2018, 2018.
Schweizer, D., Blum, P. and Butscher, C.: Uncertainty assessment in 3-D geological models of increasing complexity, Solid Earth, 8(2), 515–530, doi:10.5194/se-8-515-2017, 2017.
Shannon, C. E. E.: A Mathematical Theory of Communication, Bell Syst. Tech. J., 27(3), 379–423, doi:10.1002/j.1538-7305.1948.tb01338.x, 1948.
Wellmann, J. F. and Regenauer-Lieb, K.: Uncertainties have a meaning: Information entropy as a quality measure for 3-D geological models, Tectonophysics, 526–529, 207–216, doi:10.1016/j.tecto.2011.05.001, 2012.