An approach to gravity forward calculation of 3D Tesseroid mesh models and its parallel algorithm




Abstract

The forward modeling of a 3D mesh model is the basis of gravity data inversion, and high-precision, high-efficiency forward modeling helps improve the quality of inversion interpretation. To solve the problem of high-precision, high-efficiency gravity forward modeling for a large-scale study area with a surface observation surface, this paper presents a gravity anomaly forward modeling method and parallel algorithm for a 3D Tesseroid mesh model in the spherical coordinate system. The forward modeling uses the improved Gauss-Legendre quadrature to realize high-precision gravity anomaly calculation over a large-scale surface observation surface, and uses the MATLAB task parallel algorithm based on OpenMP to realize high-efficiency computation. Tests on a 3D theoretical model and an Eastern China lithospheric model verify the validity of the proposed method, which can provide technical support for efficient large-scale gravity field simulation and 3D inversion.

Keywords: spherical coordinate system; Tesseroids; gravity forward modeling; parallel computing; surface observation

0 Introduction

Forward modeling and inversion of 3D grid models are important means of three-dimensional interpretation of gravity data. Forward modeling is the basis of inversion, and high-precision, high-efficiency forward modeling promotes high-quality, high-efficiency inversion interpretation. For small-area, small-scale study regions, the underground 3D space is usually subdivided into a regular arrangement of upright rectangular prisms, each with its own density, and forward modeling and inversion are then carried out on this prism-based 3D grid model. For large-area, large-scale study regions, however, the Earth's curvature makes the conventional 3D grid model based on upright prisms inapplicable, and a 3D grid model in the spherical coordinate system is required.

At present, forward modeling methods for gravity anomalies in the spherical coordinate system fall mainly into four categories. (1) The unit body in spherical coordinates is approximated by point, line, or surface elements [1]; the main problem with this class of methods is their very low accuracy, which can hardly meet ever-increasing accuracy requirements. (2) The global terrain is divided into combinations of regular shapes such as sector rings or spherical-cap units for which strict analytical integral formulas exist; the gravity effect of each regular body is computed in spherical coordinates and accumulated as the global terrain correction [2]. This approach is generally used for single-point observation, and as the observation point changes, the corresponding spherical coordinates and terrain model must be rebuilt. (3) The Taylor-series expansion method for Tesseroid units expands the model's integral kernel at the geometric center of the forward body; the gravity effect of each Tesseroid unit in spherical coordinates is computed and superposed [1,3-4]. The accuracy of this method depends closely on the order of the Taylor expansion and the size of the subdivided grid. (4) The Gauss-Legendre quadrature (GLQ) method for Tesseroid units applies numerical GLQ integration to the volume integral of the unit, and the final gravity effect is again obtained by superposition [5-7]. Its accuracy is generally higher than that of the Taylor-series expansion method.
However, for both the Gauss-Legendre quadrature method and the Taylor-series expansion method, forward calculation at observation points on or near the surface suffers from a proximity effect: the error increases sharply as the observation distance decreases. To address this, Wang Xiang et al. and Hao et al. improved the Gauss-Legendre quadrature method and proposed a new adaptive subdivision scheme for the unit body, which greatly improves the accuracy on the surface observation surface; based on the improved Gauss-Legendre quadrature method (GLQ2D), they then proposed, respectively, single-density-layer interface depth inversion and apparent density mapping methods [8,9].

In this paper, the forward modeling method of Wang Xiang et al. and Hao et al. [8,9] for a single-density-layer model is extended to a 3D grid model with multiple density layers, laying the foundation for large-scale numerical simulation and 3D inversion. Because the GLQ2D algorithm refines the subdivision of each Tesseroid unit, the number of subdivided cells grows and the amount of forward computation increases exponentially, resulting in low forward modeling efficiency for large-scale 3D Tesseroid mesh models.

In the field of gravity and magnetics, scholars at home and abroad have proposed fast algorithms for forward and inverse modeling of gravity anomalies, mainly including frequency-domain algorithms [10], translational-symmetry algorithms [11], and parallel computing [12-16]. Parallel computing falls into two main directions: CPU parallelism on the host side and GPU parallelism on the device side. CPU-side parallelism mainly includes OpenMP (open multi-processing) and MPI (message passing interface). OpenMP is a set of compiler directives for multi-threaded programming of shared-memory parallel systems, which lets programmers devote more energy to the parallel algorithm itself rather than to its implementation details. MPI is a message-passing interface, a language-independent communication protocol that requires programmers to manage data distribution, implement inter-process communication, and maintain synchronization manually. The mainstream parallel model on the GPU side is the NVIDIA CUDA architecture. By parallel strategy, parallel algorithms also divide into two types: task parallelism, which assigns the same batch of data to different loop bodies of a for loop for processing, and data parallelism, which processes different data with the same program. Some scholars at home and abroad have already used parallel algorithms to improve the efficiency of gravity and magnetic forward and inverse modeling to a certain extent. For example, Chen Zhaoxi et al. [12] gave a forward algorithm for gravity and gravity-gradient data based on NVIDIA CUDA parallel computing, Hou et al. [13-15] gave a 3D density inversion algorithm for gravity gradients based on multi-level hybrid parallelism, and Zhou Xue et al. [16] gave a forward algorithm for gravity and gravity gradients based on MPI and OpenMP parallel computing. All of these algorithms, however, are based on the Cartesian coordinate system.

Aiming at high-precision, high-efficiency gravity forward modeling for a large-scale study area with a surface observation surface, this paper studies a gravity anomaly forward algorithm for the 3D Tesseroid grid model in the spherical coordinate system: the GLQ2D algorithm is used to improve the accuracy of gravity anomaly calculation over the large-scale surface observation surface, and the OpenMP-based MATLAB task parallel algorithm is used to improve forward modeling efficiency. Finally, forward modeling experiments with theoretical model data and a 3D density grid model of the East China lithosphere are used to evaluate the effectiveness of the method.

1 Method principle

The gravity anomaly forward modeling method for a 3D grid model in the spherical coordinate system divides the underground 3D space into a regular arrangement of Tesseroid units, each with its own density value. The gravity anomaly of each unit at every station on the observation surface is then computed with the GLQ2D algorithm, and the gravity anomaly at any station on the observation surface is the accumulation of the anomalies of all Tesseroid units at that station.

1.1 Tesseroid unit gravity forward modeling formula

In the spherical coordinate system, the gravity anomaly of an arbitrary Tesseroid unit (Fig. 1) can be expressed by the integral formula (1) [1,6]:

Fig.1   Schematic diagram of a Tesseroid unit [8]

$$\Delta g(r,\varphi,\lambda)=G\rho\int_{\lambda_1}^{\lambda_2}\int_{\varphi_1}^{\varphi_2}\int_{r_1}^{r_2}\frac{\left(r-r'\cos\psi\right)r'^{2}\cos\varphi'}{\ell^{3}}\,\mathrm{d}r'\,\mathrm{d}\varphi'\,\mathrm{d}\lambda'\qquad(1)$$

where $\ell=\sqrt{r^{2}+r'^{2}-2rr'\cos\psi}$ is the distance between the observation point $(r,\varphi,\lambda)$ and the running integration point $(r',\varphi',\lambda')$, $\cos\psi=\sin\varphi\sin\varphi'+\cos\varphi\cos\varphi'\cos(\lambda-\lambda')$, $G$ is the gravitational constant, and $\rho$ is the residual density of the unit.

1.2 Improved Gauss-Legendre integral method

The GLQ method approximates the integral of the model's kernel function by sampling it at Legendre polynomial nodes over the integration interval and accumulating the weighted samples, from which the model's gravity anomaly is obtained. Applying GLQ to the gravity anomaly integral expression (1) in the spherical coordinate system yields the Legendre polynomial (quadrature) expression of the Tesseroid unit [5]:


$$\Delta g(r,\varphi,\lambda)\approx G\rho\,\frac{(r_2-r_1)(\varphi_2-\varphi_1)(\lambda_2-\lambda_1)}{8}\sum_{i=1}^{N_r}\sum_{j=1}^{N_\varphi}\sum_{k=1}^{N_\lambda}W_iW_jW_k\,\kappa\!\left(r'_i,\varphi'_j,\lambda'_k\right)\qquad(3)$$

where $W_i$, $W_j$, $W_k$ are the Gauss-Legendre weights, $(r'_i,\varphi'_j,\lambda'_k)$ are the quadrature nodes mapped from $[-1,1]$ to the integration intervals, and $\kappa$ is the integral kernel of formula (1).

Compared with the traditional GLQ method, the improved GLQ method of Heck and Seitz [1] first integrates the gravity anomaly integral expression in the spherical coordinate system analytically in the r direction, and Wang Xiang et al. [8] further decomposed the GLQ integral so that fewer nodes and weights are needed to calculate the model's gravity anomaly (that is, after the improvement no quadrature expansion is performed in the r direction). This both improves the accuracy of the model's gravity calculation and reduces the computation time. The improved method is called the improved Gauss-Legendre quadrature, i.e., the GLQ2D algorithm.
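The quadrature mechanism underlying GLQ2D can be illustrated with a short, self-contained sketch. The Python snippet below (illustrative only; the paper's implementation is in MATLAB, and the helper name `glq2d` is hypothetical) approximates a generic double integral by mapping Gauss-Legendre nodes from [-1, 1] onto the physical intervals and accumulating the weighted samples, which is the same operation GLQ2D applies to the (φ, λ) part of the Tesseroid kernel once the radial integral has been handled analytically:

```python
import numpy as np

def glq2d(f, ax, bx, ay, by, n=8):
    """Approximate the double integral of f over [ax,bx] x [ay,by]
    with an n-point Gauss-Legendre rule in each direction."""
    # Nodes and weights on the reference interval [-1, 1]
    t, w = np.polynomial.legendre.leggauss(n)
    # Affine map of nodes and weights to the physical intervals
    x = 0.5 * (bx - ax) * t + 0.5 * (bx + ax)
    wx = 0.5 * (bx - ax) * w
    y = 0.5 * (by - ay) * t + 0.5 * (by + ay)
    wy = 0.5 * (by - ay) * w
    # Weighted accumulation of the kernel samples
    X, Y = np.meshgrid(x, y, indexing="ij")
    return float(np.sum(wx[:, None] * wy[None, :] * f(X, Y)))

# Smooth test integrand: the integral of cos(x)cos(y) over [0, pi/2]^2 is 1
val = glq2d(lambda x, y: np.cos(x) * np.cos(y), 0.0, np.pi / 2, 0.0, np.pi / 2)
```

For smooth kernels such as (1), an 8-point rule in each direction already reaches near machine precision, which is why GLQ-type methods outperform low-order Taylor expansions of the same kernel.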

To further improve the forward accuracy on the surface observation surface, Hao et al. [9] simplified and improved the earlier adaptive subdivision scheme of Wang Xiang et al. [8] and introduced a given distance threshold W. Whether a Tesseroid unit whose radial, latitudinal, and longitudinal ranges are [r1, r2], [φ1, φ2], and [λ1, λ2], respectively, should be further subdivided is judged by discriminant formula (4).

In the formula, φ and λ are obtained from the ratio of the arc length to the perimeter of the Tesseroid unit body. The improved adaptive subdivision scheme effectively solves the accuracy loss caused by the proximity effect in near-surface observation.
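The idea of distance-driven adaptive subdivision can be sketched as follows. Since discriminant (4) is not reproduced here, this Python sketch assumes a simplified stand-in criterion, subdivide while (cell extent)/(distance to the observation point) exceeds W, on a flat (φ, λ) patch; the function name and the criterion are illustrative, not the paper's exact formula:

```python
import math

def adaptive_cells(phi1, phi2, lam1, lam2, obs_phi, obs_lam, W, max_depth=4):
    """Recursively quarter a (phi, lam) cell while its extent is large
    relative to the distance to the observation point.  The criterion
    size/distance > W is an assumed stand-in for formula (4)."""
    size = max(phi2 - phi1, lam2 - lam1)
    cphi, clam = 0.5 * (phi1 + phi2), 0.5 * (lam1 + lam2)
    dist = math.hypot(cphi - obs_phi, clam - obs_lam)
    if max_depth > 0 and (dist == 0.0 or size / dist > W):
        cells = []
        for p1, p2 in ((phi1, cphi), (cphi, phi2)):
            for l1, l2 in ((lam1, clam), (clam, lam2)):
                cells += adaptive_cells(p1, p2, l1, l2,
                                        obs_phi, obs_lam, W, max_depth - 1)
        return cells
    return [(phi1, phi2, lam1, lam2)]

# A distant observation point leaves the unit intact ...
far = adaptive_cells(0.0, 1.0, 0.0, 1.0, 10.0, 10.0, W=1.0)
# ... while a point directly above it triggers refinement near the point
near = adaptive_cells(0.0, 1.0, 0.0, 1.0, 0.5, 0.5, W=1.0)
```

This is exactly why the scheme fixes the proximity effect: cells far from the station are left coarse, while cells under the station are refined until the quadrature error is controlled.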

1.3 Gravity forward modeling process and parallel algorithm of 3D Tesseroid mesh model

The forward modeling process for a 3D mesh model in the spherical coordinate system is similar to that for a 3D mesh model in the Cartesian coordinate system. Figure 2 shows the flow chart of the serial gravity forward modeling algorithm for the 3D grid model in spherical coordinates.


Fig.2   Serial flow chart of gravity forward modeling of a 3D Tesseroid mesh model

 

The steps of the serial forward algorithm are as follows. (1) Input the data of the 3D Tesseroid grid model and the observation surface: the model data include the number of units in each direction (nx, ny, nz) and the density of each unit, and the observation surface data include the total number of stations (ns) and the longitude, latitude, and height of each station. (2) Traverse the model layer by layer over the nz depth layers; within each layer, traverse the observation surface point by point over the ns stations; for each station, traverse the nx×ny units of the layer's regular grid, first along the y axis and then along the x axis. (3) Following this traversal order, use the GLQ2D quadrature to compute the gravity effect of a single unit on a single station, superpose the effects of all units in a layer at that station, and finally superpose the effects of all layers, which yields the model's gravity anomaly at every station on the observation surface. (4) Output the forward modeling results of the model on the observation surface. Figure 3 is a schematic diagram of the multi-layer Tesseroid unit model in the spherical coordinate system.
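The layer/station/cell traversal above can be sketched compactly. In this Python sketch (illustrative; the paper's code is MATLAB), a simple point-mass kernel stands in for the GLQ2D tesseroid kernel so the loop structure stays visible; all names and the mass values are hypothetical:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def sph_to_cart(r, phi_deg, lam_deg):
    """Spherical (radius, latitude, longitude in degrees) to Cartesian."""
    p, l = math.radians(phi_deg), math.radians(lam_deg)
    return (r * math.cos(p) * math.cos(l),
            r * math.cos(p) * math.sin(l),
            r * math.sin(p))

def forward_serial(layers, stations):
    """layers: list of depth layers, each a list of (r, phi, lam, mass) cells.
    Returns the accumulated attraction magnitude at every station.
    (A point-mass kernel stands in for the GLQ2D tesseroid kernel.)"""
    out = [0.0] * len(stations)
    for layer in layers:                                  # traverse depth layers
        for i, (sr, sphi, slam) in enumerate(stations):   # traverse stations
            sx, sy, sz = sph_to_cart(sr, sphi, slam)
            for cr, cphi, clam, m in layer:               # traverse grid cells
                cx, cy, cz = sph_to_cart(cr, cphi, clam)
                d2 = (sx - cx) ** 2 + (sy - cy) ** 2 + (sz - cz) ** 2
                out[i] += G * m / d2                      # superpose cell effects
    return out

R = 6371e3
layers = [[(R - 50e3, 25.0, 105.0, 1e12)]]           # one buried cell
stations = [(R, 25.0, 105.0), (R, 35.0, 105.0)]      # above it / 10 deg away
g = forward_serial(layers, stations)
```

The nearer station sees the larger effect, as expected from the inverse-square fall-off; in the real algorithm the innermost statement is the GLQ2D quadrature of formula (3) rather than a point-mass term.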


Fig.3   3D Tesseroid mesh model

 

The parallel scheme of this paper adopts the MATLAB task parallel algorithm based on OpenMP, that is, it uses parfor to parallelize the for loop (Figure 4). MATLAB uses a client/worker mode when executing parfor loops: the client is the MATLAB instance in which the code is written and launched, and a worker is a MATLAB instance that runs the code. Each MATLAB instance is a process, one computer can run several MATLAB processes at the same time, and the physical unit corresponding to each worker is a processor or a processor core. Data can be transferred between MATLAB processes. The user writes the code on the client, and while running it the client distributes the code segments that need to be parallelized to the other MATLAB processes.


Fig.4   Schematic diagram of the MATLAB parfor task parallel algorithm

 

There are many possible parallel strategies for the gravity forward calculation of the 3D grid model in Figure 2, but their effects are basically the same. Following the principle that parallelizing the outermost loop (over model depth layers) gives the best acceleration, this paper adopts the following strategy: first specify the code segment to be parallelized, namely the loop over model layers, and implement the task-parallel design on the client; the MATLAB client then automatically assigns different layers of the model to different workers, each worker computes the gravity effect of a single layer, and the per-layer results are finally superposed to obtain the model's forward result.
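The parfor-over-layers strategy can be mimicked in a few lines. This Python sketch uses a thread pool in place of MATLAB workers for portability (the kernel here is a toy inverse-square term, and all names are illustrative); the key property demonstrated is that distributing whole depth layers across workers and superposing afterwards reproduces the serial result exactly:

```python
from concurrent.futures import ThreadPoolExecutor

def layer_effect(layer, stations):
    """Gravity effect of one depth layer at every station (toy kernel:
    inverse-square of a scalar offset, standing in for GLQ2D)."""
    return [sum(m / (1.0 + (s - c) ** 2) for c, m in layer) for s in stations]

def forward_parallel(layers, stations, workers=4):
    """parfor-style task parallelism: each worker handles whole depth
    layers; per-layer results are superposed afterwards."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        partials = list(ex.map(lambda L: layer_effect(L, stations), layers))
    # Superpose the layer contributions station by station
    return [sum(col) for col in zip(*partials)]

def forward_serial(layers, stations):
    partials = [layer_effect(L, stations) for L in layers]
    return [sum(col) for col in zip(*partials)]

layers = [[(d + k * 0.1, 1.0) for k in range(5)] for d in range(8)]
stations = [0.0, 0.5, 1.0, 1.5]
gp = forward_parallel(layers, stations)
gs = forward_serial(layers, stations)
```

Because Executor.map preserves input order, the per-station sums are accumulated in the same order as in the serial loop, which is why the parallel and serial results in Section 2 can agree exactly.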

2 Slab model test

2.1 Model parameters

This series of models is a simple simulation of a subduction zone. The longitude and latitude ranges are 100°E~110.2°E and 20°N~30.2°N, and the depth range is 0~200 km. The slab simulates a convergent plate boundary: the vertical distance from its top to the observation surface is 30 km, and it extends to 200 km depth along a 60° eastward dip, with a residual density of 0.08 g/cm³.

Three models of different sizes are designed: (1) a 51×51×20 model, with 0.2°×0.2° grid spacing in longitude and latitude and a 10 km step in depth; (2) a 51×51×40 model, with 0.2°×0.2° grid spacing and a 5 km depth step; (3) a 101×101×40 model (Fig. 5), with 0.1°×0.1° grid spacing and a 5 km depth step. The observation surface height is uniformly set to 0 m for all three models.


Fig.5   Three-dimensional model of the subduction zone



2.2 Model forward modeling and effect evaluation

The CPU-based parallel computing in this paper was run on a cluster. Cluster CPU: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80 GHz; parallel software: MATLAB R2018a; number of parallel computing units (CPU cores used): 12. Figure 6 compares the GLQ2D serial forward result, the parallel forward result, and the results along the 24°N profile of the model. The comparison shows that the forward results of the parallel and serial algorithms are completely consistent.


Fig.6   Comparison of GLQ2D serial and parallel forward modeling results
Note: the white dotted box marks the position of the subduction slab projected onto the observation plane along the dip direction; the black dotted line marks the position of the comparison profile

 

Comparing Fig. 6a and b, both anomaly results are high in the middle and low on both sides, and the anomaly falls off more slowly on the right side of the slab than on the left. The forward curves along the 24°N profile (Fig. 6c) show the same pattern: the gravity anomaly is high in the middle and low on both sides, the anomaly on the left side of the slab decreases faster than on the right, and the serial and parallel gravity anomalies coincide exactly. This verifies that, for this data model, parallel computing changes neither the results nor the accuracy.

Tables 1 and 2 compare the gravity forward computation efficiency of the subduction-zone model with different numbers of parallel cores and with models of different sizes. For convenience, the speedup ratio is introduced here: the serial running time divided by the parallel running time [17].
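The speedup ratio defined above is a one-line computation; the snippet below checks it against the 12-core entry for the 101×101×40 model reported in Table 1 (the parallel-efficiency figure is an auxiliary measure added here, not a value from the paper):

```python
def speedup(t_serial, t_parallel):
    """Speedup ratio: serial running time divided by parallel running time."""
    return t_serial / t_parallel

# 12-core entry of Table 1 for the 101x101x40 model
s = speedup(11432.970, 1193.787)

# Parallel efficiency (speedup per core); 12 cores were used
eff = s / 12
```

An efficiency close to 1 would mean ideal scaling; the observed value of roughly 0.8 reflects the serial fraction and scheduling overhead of the parfor loop.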


Table 1   Comparison of gravity forward modeling efficiency with different numbers of parallel cores

Model size    Cores    Time/s       Speedup ratio
101×101×40    serial   11432.970    --
              2        5871.609     1.947
              4        2882.673     3.966
              6        2033.522     5.622
              8        1704.662     6.707
              10       1441.482     7.931
              12       1193.787     9.577


 

Table 2   Comparison of gravity forward modeling efficiency for models of different sizes

Model size    Parallel time/s    Serial time/s    Speedup ratio
51×51×20      86.243             529.506          6.159
51×51×40      147.773            1234.136         8.372
101×101×40    1193.787           11432.970        9.580


 

It can be seen from Tables 1 and 2 that parallel computing has a clear and stable speed advantage over serial computing: as the number of parallel cores increases, the speedup ratio keeps growing, reaching a maximum of about 9.58 with 12-core parallel computing. Therefore, the remaining forward calculations in this paper use 12-core parallel computing.

For a parallel system (including but not limited to algorithms and programs), if its performance (e.g., speedup) grows proportionally with the number of processing units, the system is called scalable [17]. Maintaining the scalability of the MATLAB gravity anomaly forward parallel program in the spherical coordinate system is essential. Comparing the running times and speedup ratios for different core counts (Table 1) and for different model sizes (Table 2) shows that the parallel strategy is stable and feasible, i.e., it has good scalability; moreover, as the amount of data increases, the speedup ratio keeps increasing and tends toward the number of processing units.

3 Three-dimensional lithosphere model test in East China

The study area of this forward modeling experiment is eastern China, with a longitude range of 99.75°E~122.25°E and a latitude range of 20.75°N~45.25°N. Eastern China is an important region in China for geological structure and geodynamics research and for resource and energy exploration, and numerical simulation of its regional geophysical fields and imaging of its crust-mantle structure are of great scientific significance. The 3D shear-wave velocity structure model of the lithosphere in the study area was extracted from Shen et al. [18]. The East China lithospheric 3D density model has a grid size of 111×121×95, with 0.2°×0.2° grid spacing in longitude and latitude and a 2 km step in depth. The minimum, maximum, and mean densities are 2.26, 3.45, and 3.144 g/cm³, respectively. For this model, calculation in the traditional Cartesian coordinate system would produce obvious errors because of the Earth's curvature, so calculation in the spherical coordinate system is required; however, the serial forward algorithm in spherical coordinates takes a long time for this model (about 15 h), so this paper uses the parallel algorithm to improve forward modeling efficiency.


Fig.7   Three-dimensional density model of the East China lithosphere

 

The method of this paper was used to carry out forward calculation on the 3D model of the East China lithosphere, and the 30°N profile of the model was extracted for comparison. Figure 8 shows the GLQ2D serial forward result and the 12-core parallel forward result for the East China model, together with the comparison along the 30°N profile. The comparison shows that the parallel and serial forward results are completely consistent. The theoretical gravity anomaly of the lithospheric density model in the study area varies relatively smoothly overall, with amplitudes ranging from -450 to 650 mGal. Central and eastern Inner Mongolia, northern Hebei, and the Qinghai-Tibet Plateau show broad lows; the Yangtze Craton shows a broad high; the Ordos Block and the Songliao Basin show medium-scale highs; and the southeast coastal area shows anomalies with overall amplitudes between -100 and 200 mGal.


Fig.8   Comparison of forward modeling results of the East China lithospheric density model
Note: the black dotted line marks the position of the comparison profile

 

For anomaly comparison, this paper collects the actual EGM2008 Bouguer gravity anomaly data [20] of the study area (Fig. 8d), gridded at 0.2°×0.2°. The actual EGM2008 Bouguer anomaly in the study area is generally higher in the east and lower in the west, with amplitudes ranging from -350 to 200 mGal. The overall anomaly pattern of the forward result in this paper differs considerably from the actual Bouguer anomaly, for two main reasons: (1) the forward model used here is the 3D density model of the East China lithosphere within the 0-200 km depth range, and neither the surrounding region nor the medium below 200 km depth is included; (2) the 3D density model was converted, via a velocity-density empirical formula [19], from the 3D lithospheric shear-wave velocity model obtained by ambient-noise imaging [18]. That model is not constrained by gravity data, and the empirical formula cannot accurately represent the actual velocity-density relationship in East China or the real crust-mantle density distribution of the study area, which leads to the anomaly differences described above.

Table 3 lists the running times for the East China model. It can be seen from Table 3 that parallel computing again has a clear and stable speed advantage over serial computing. The average speedup ratio obtained from the mean running times is 11.206, considerably larger than the 9.580 obtained for the theoretical model. This further verifies the scalability of the parallel system: as the amount of computation grows, the parallel speedup ratio increases and tends toward the number of processing units.

Table 3   Comparison of gravity forward modeling efficiency for the East China lithosphere model

Model size    Parallel time/s    Serial time/s    Speedup ratio
111×121×95    4841.807           54255.090        11.206




4 Conclusion

Aiming at high-precision, high-efficiency gravity forward modeling over a large-scale surface observation surface, this paper proposes a parallel computing scheme based on the improved Gauss-Legendre quadrature method and verifies its effectiveness with a theoretical model and 3D model data of the East China lithosphere. The data experiments show that the method achieves high-precision, high-efficiency forward modeling of gravity anomalies at large scale on a surface observation surface, and that the OpenMP-based MATLAB task parallel algorithm is scalable and stable: with the amount of data fixed, increasing the number of parallel units increases the speedup ratio, and as the amount of data grows the speedup ratio tends toward the number of processing units. The method lays a technical foundation for efficient large-scale gravity field simulation and 3D inversion.


 
