Hybrid of differential quadrature and sub-gradients methods for solving the system of Eikonal equations

Abstract: Many important natural phenomena of wave propagation are modeled by Eikonal equations, and a variety of new methods are needed to solve them. The differential quadrature method (DQM) is an effective numerical method for solving systems of differential equations: it can achieve accurate numerical results using few grid points and therefore requires relatively little computational effort. In this paper, we focus on the implementation of non-smooth Eikonal optimization using a hybrid of the polynomial differential quadrature (PDQ) or Fourier differential quadrature (FDQ) method and the sub-gradient idea. Our goal is to develop a new formulation of the Eikonal equation system for wave propagation problems, to exploit the efficiency and accuracy of new grid points to reduce errors, to compare errors across various physical problems, especially wave propagation equations, and to achieve convergence. We explore the accuracy and stability of the Eikonal equation system through two-dimensional and three-dimensional numerical examples and a comprehensive use of three types of grid points. This article aims to provide a new and innovative solution of the non-smooth Eikonal equation system; the proposed method is considerably more efficient and effective than traditional numerical methods such as plain DQ.


Introduction
Eikonal differential equations are among the most important geometric equations and are widely used in many fields of mathematical science and engineering. One of their most important applications is seismic waves, which are studied by geophysicists called seismologists. Seismic wave fields are recorded by seismometers or, in water, by hydrophones. Seismic waves are energy waves that pass through the Earth's layers; earthquakes, volcanic eruptions, large-scale landslides, and large man-made explosions all produce such low-frequency sound energy. All of these phenomena are modeled, in some form, by the equations of this paper. Many of them involve low-amplitude waves, called ambient disturbances; see [1][2][3][4] and references therein.
For example, the propagation of a wave in a moving fluid has been extensively investigated through 2D and 3D numerical examples in [1], and the existence and maximal stability of solutions of the geometric equation have been studied in [2]. The fast sweeping method (FSM) for solving the factored Eikonal equation, together with three criteria for evaluating the accuracy and convergence of the inverted velocity, has been discussed in [3]. The time-dependent Eikonal equation, posed with various initial and boundary conditions, has been treated by meshless methods in [4].
In this paper, we consider the system of Eikonal equations in the form below, where Ω is a domain in R^d and Γ ⊂ Ω is the subset on which the boundary condition is imposed. In (1), f_1(x), f_2(x), g_1(x), g_2(x), α_1, α_2, β_1, β_2 are known, and u(x), v(x) ∈ C^n(Ω) are unknown functions. The space C^n(Ω), n ≥ 1, consists of functions that are n-times continuously differentiable over Ω.
We assume that in (3) the objective function is not smooth; problem (3) is called non-smooth Eikonal optimization (NSEO) [5,6]. The subgradient method, one of the simplest methods used in solving partial differential equations, serves to minimize a nondifferentiable convex function. It is well known that the subgradient method is slow; nevertheless, with suitable modifications and algorithms, and in combination with other methods, it can be more successful than existing nonsmooth optimization methods on a variety of problems.
In this paper we use a combination of two methods, subgradient and differential quadrature, to solve the system of Eikonal equations, because both are easy to implement and have relatively low computational cost; as a result, we can quickly find solutions and develop methods for nonsmooth problems. Different versions of the subgradient method have been developed for various problems. A new subgradient-based method for the non-smooth convex optimization problem has been introduced and its convergence proved; the proximal bundle method has also been compared with the proposed algorithm [7]. Subgradient methods are a powerful tool that can quickly find near-optimal dual solutions: dual subgradient methods for convex optimization have been applied to a nonlinear multicommodity flow problem in [8], and their convergence has been generalized. Algorithms directly affect convergence speed, computational cost, and solution accuracy, so an efficient algorithm is needed for our problem. A new algorithm for minimizing non-convex, nonsmooth functions has been implemented and compared with two versions of the subgradient method [9]. Subgradient methods are mostly used to solve nonsmooth convex optimization problems, such as bifurcation/buckling analysis of beams, which is a nonsmooth problem; standard nonsmooth benchmark problems are also handled acceptably by this approach [10]. The combination of the subgradient method with a splitting scheme for a nonlinear operator, applied to a fused-lasso-like problem, has been analyzed as a fixed-point subgradient splitting method and its convergence proved [11]. Riemannian subgradient methods for minimizing weakly convex functions on the Stiefel manifold have been researched extensively [12].
Various solution approaches for mixed integer linear optimization, such as subgradient optimization, Lagrangian duals for mixed-binary optimization, recovery of primal solutions, and cutting planes, have been analyzed extensively in [13]. A nonsmooth equilibrium problem has been treated with a subgradient method in finite-dimensional space, and an inexact subgradient algorithm together with its convergence has been analyzed [14]. A dual subgradient method with averaging has been proposed for convex optimization problems and investigated for optimal resource allocation in a multi-agent environment [15]. Subgradient methods can also be used to generate primal-dual solutions and to develop solutions for nonsmooth convex optimization problems [16]. For bilevel equilibrium problems, a computational subgradient method has been investigated, its convergence theorem proved, and an algorithm with low computational cost introduced [17]. For equilibrium problems and nonexpansive mappings, a projection-based subgradient algorithm that does not require the Lipschitz condition on the bifunctions has been used [18]. From combinations of subgradient methods, two strongly convergent algorithms have been proposed, their convergence speeds compared, and their numerical results analyzed [19]. For nonlinear split feasibility problems, an algorithm combining subgradient methods has been investigated [20]. Recent advances show that the subgradient method is a powerful, useful, and efficient numerical tool in science, physics, and engineering that plays a vital role in real applications.
Examples include estimation in sensor networks, distributed control of multi-agent systems, resource allocation, game theory, equilibrium problems, cooperative control of multi-agent systems, equilibrium problems in network systems, and machine learning [21][22][23], where these methods have been used extensively. Accordingly, combinations of subgradient methods for different problems have been proposed and their algorithms developed [24]. Many mathematical models in science and engineering lead to the system of equations (1). Nonlinear systems of this type arise in mechanics, materials science, engineering, and other sciences. One such model, an Eikonal equation system with Dirichlet boundary conditions for the 2-D problem, is treated with the FEM in [25]. A numerical method based on elliptic solvers is presented in [26], and the special class of systems arising in the optimal control of a random evolutionary process is studied in [27]. Perturbations of density, velocity, energy, and pressure, energy flux, the wave equation, and acoustic waves are among the applications of hyperbolic systems researched extensively in [28]. In model (1), suppose two waves u and v propagate through a moving fluid; then f_1 and f_2 represent their respective propagation velocities. In many applications, waves have very high frequencies or, equivalently, very short wavelengths. To describe such waves we need a resolution significantly finer than a single wavelength. How, then, can the problem of two waves propagating simultaneously at different speeds be computed? Although analytical methods for solving the system (1) exist, they are not easy to use. Optical phenomena, for example, are described by a wave equation, and if we use 10 points per wavelength to describe each wave, storing the data becomes difficult [29].
Beyond the difficulty of solving the system (1), its applications are also important, which motivates us to develop the Eikonal equation system further.
In this paper, we implement the differential quadrature (DQ) method [30] for the numerical solution of the system (1), considering both the polynomial differential quadrature (PDQ) and Fourier differential quadrature (FDQ) variants. In this method, the choice of grid points plays a very important role in the accuracy of the DQM results and the stability of the problem, so the choice of grid points depends on the problem at hand [31]. The DQM has been applied to many problems. In [32], the DQ method is used for the deflection, buckling, and free-vibration analysis of beams and plates with different boundary conditions; a DQ method based on modified cubic B-splines is used to obtain numerical solutions of the nonlinear Schrödinger equation via a system of ordinary differential equations [33], and a modified-cubic-B-spline DQ method for linear and nonlinear PDEs in higher dimensions is studied in [34]. In [35], the RBF-DQ method is applied to a class of Lane-Emden type equations, and in [36] linear and nonlinear second-order boundary value problems are solved by the LRBF-DQ method. Characteristics of the time-domain DQ method are studied in [37]. For two- and three-dimensional systems, B-spline functions have been used to develop a new DQ method [38], and the chaotic Lorenz, Chen, Genesio, and Rössler systems have been studied by the DQ method in [39].
In Section 2, we present the DQ method for first- and higher-order derivatives with respect to x and y to obtain the weighting coefficients. To solve the system (3), the PDQ and FDQ methods are formulated using Lagrange interpolation as test functions for the weighting coefficients.
In the same section, we propose three types of grid points, uniform and non-uniform, for the convergence and accuracy of the numerical results, and we show that the choice of grid points depends on the type of problem.
In Section 3, we solve (3) using a hybrid of the sub-gradient idea and the PDQ or FDQ methods, and we give two classes of algorithms. We then present numerical examples of (3) for 2-D and 3-D problems, using the weighting coefficients of the first-order derivatives of the proposed methods.
In Section 5, to measure accuracy, we consider the L_2 and L_∞ norms and the root mean square error for different problems solved with the PDQ and FDQ methods. Each of these problems can serve as a model of wave propagation, an important physical phenomenon, and is specified either by the true solution of the system or by a velocity field. We consider five examples of two-dimensional and three-dimensional systems and, for each problem, report errors and orders of convergence in tables. To compare the accuracy of the errors, we also show error plots of the PDQ and FDQ methods in the figures. Section 6 summarizes the results.

An overview of DQM
DQM is a numerical method well suited to the numerical solution of ordinary and partial differential equations because it combines high accuracy with low computational cost. One particular advantage of the DQ method is its ease of use, so it has been implemented extensively for a variety of problems. The main idea of the method stems from ordinary integral quadrature [31,40]. The DQ method was introduced by Bellman in 1972, extended by Quan and Chang in 1989, and given its general formulation by Shu in 1991 [31]. We consider the first-order derivative approximations for the 2D problem in the standard DQ form

u_x^(1)(x_i, y_j) = Σ_{k=1}^{Nx} Q^(1)_{ik} u(x_k, y_j),   u_y^(1)(x_i, y_j) = Σ_{k=1}^{Ny} Q^(1)_{jk} u(x_i, y_k),

where u_x^(1), u_y^(1) are the first-order derivatives and Q^(1)_{ik}, Q^(1)_{jk} are the weighting coefficients of the derivatives of the function u(x, y) with respect to x and y, respectively. Here Nx and Ny are the numbers of grid points in the x and y directions on the domain a = x_1 < x_2 < ... < x_N = b. High-order derivatives can be written similarly.

Polynomial differential quadrature (PDQ) method
To calculate the weighting coefficients, we use the Lagrange interpolation polynomials as basis functions [30,40]. Using Equation (8), the weighting coefficients of the first-order derivatives can be obtained from Shu's explicit formulas

Q^(1)_{ij} = M^(1)(x_i) / ((x_i - x_j) M^(1)(x_j)) for i ≠ j,   Q^(1)_{ii} = -Σ_{j≠i} Q^(1)_{ij},

where M^(1)(x_i) = Π_{j≠i} (x_i - x_j), and the weighting coefficients of the second-order derivatives follow from the recurrence

Q^(2)_{ij} = 2 Q^(1)_{ij} (Q^(1)_{ii} - 1/(x_i - x_j)) for i ≠ j,   Q^(2)_{ii} = -Σ_{j≠i} Q^(2)_{ij}.

Here x_i are the coordinates of the grid points, i = 1, 2, ..., N. For higher-order derivatives, refer to [31] for more information.
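As an illustration, Shu's explicit formulas above can be sketched in NumPy (a minimal sketch; the function name `pdq_weights` and the array layout are our own assumptions, not from the paper):

```python
import numpy as np

def pdq_weights(x):
    """First- and second-order PDQ weighting coefficients from Shu's
    explicit Lagrange-interpolation formulas. `x` is a 1-D array of
    distinct grid points; returns the matrices (Q1, Q2)."""
    x = np.asarray(x, dtype=float)
    diff = x[:, None] - x[None, :]          # x_i - x_j
    np.fill_diagonal(diff, 1.0)             # avoid division by zero on diagonal
    Mp = diff.prod(axis=1)                  # M^(1)(x_i) = prod_{j != i} (x_i - x_j)
    Q1 = Mp[:, None] / (diff * Mp[None, :]) # off-diagonal first-order weights
    np.fill_diagonal(Q1, 0.0)
    np.fill_diagonal(Q1, -Q1.sum(axis=1))   # Q1_ii = -sum_{j != i} Q1_ij
    # Shu's recurrence: Q2_ij = 2*Q1_ij*(Q1_ii - 1/(x_i - x_j)), i != j
    Q2 = 2.0 * Q1 * (np.diag(Q1)[:, None] - 1.0 / diff)
    np.fill_diagonal(Q2, 0.0)
    np.fill_diagonal(Q2, -Q2.sum(axis=1))
    return Q1, Q2
```

Applying `Q1` to samples of a function then approximates its derivative at the grid points, exactly so for polynomials of degree at most N-1.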

Superior selection of grid points
In the DQ method, grid points are a key element of the numerical solution of differential equations because they affect the convergence and accuracy of the results; the choice of grid points depends on the type of problem [31,41]. Tests of the DQ method on various partial differential equation problems show that non-uniform grid points generally outperform uniform grid points and give highly accurate solutions. We therefore focus on three families of grid points, for i = 1, 2, ..., N: grid points (13) are the uniform (equally spaced) points x_i = a + (i-1)(b-a)/(N-1), and grid points (14) are the non-uniform (unequally spaced) Chebyshev-Gauss-Lobatto points x_i = (a+b)/2 - ((b-a)/2) cos((i-1)π/(N-1)), used in most of the literature [30,38,41,43]. Inspired by the difference between the two Chebyshev-Gauss-Lobatto variants in [42], we define the new grid points (15). By substituting (11) and (12) into (3) with one class of grid points, (13), (14), or (15), we obtain a minimization problem; our goal in this article is to solve this optimization problem by DQ. We also consider the following differential quadrature method for combination with (3).
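For concreteness, grid families (13) and (14) can be generated as below (a sketch; the function names are ours, and the paper's new family (15) is omitted here since its defining formula is given in the text):

```python
import numpy as np

def uniform_points(a, b, n):
    """Grid family (13): equally spaced points x_i = a + (i-1)(b-a)/(n-1)."""
    return np.linspace(a, b, n)

def cgl_points(a, b, n):
    """Grid family (14): Chebyshev-Gauss-Lobatto points mapped to [a, b],
    x_i = (a+b)/2 - ((b-a)/2) cos((i-1) pi / (n-1)); clustered near the ends."""
    i = np.arange(n)
    return 0.5 * (a + b) - 0.5 * (b - a) * np.cos(np.pi * i / (n - 1))
```

The endpoint clustering of (14) is what typically improves DQ accuracy over the uniform family.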

Fourier differential quadrature (FDQ) method
In the FDQ method, the weighting coefficients are obtained from the Fourier series expansion; that is, the function is approximated by its Fourier series [30,41,43]. In (16) we take the phase φ = kπx/L, where L is the length of the interval (the physical domain). Using Lagrange trigonometric interpolation polynomials, the weighting coefficients of the first- and second-order derivatives can be obtained as in (17) and (18); see [31] for more information. By substituting (17) and (18) into (3) with one class of grid points, (13), (14), or (15), we obtain a minimization problem with FDQ; our goal in this article is to solve this optimization problem by FDQ.
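A closely related, well-known special case is the trigonometric differentiation matrix on equally spaced periodic points. We sketch it below as an illustration of Fourier-based weighting coefficients (this standard spectral formula is our stand-in for intuition, not necessarily the paper's exact coefficients (17)-(18)):

```python
import numpy as np

def fourier_diff_matrix(n, L=2 * np.pi):
    """First-order differentiation matrix for trigonometric (Fourier)
    interpolation on n equally spaced periodic points in [0, L), n even.
    Standard formula: D_ij = 0.5 * (-1)^(i-j) * cot(pi*(i-j)/n), scaled
    from the canonical period 2*pi to period L."""
    assert n % 2 == 0, "this classical formula assumes an even point count"
    i = np.arange(n)
    d = i[:, None] - i[None, :]             # index differences i - j
    D = np.zeros((n, n))
    off = d != 0
    D[off] = 0.5 * (-1.0) ** d[off] / np.tan(np.pi * d[off] / n)
    return D * (2 * np.pi / L)              # rescale to the physical domain
```

The matrix differentiates exactly any trigonometric polynomial whose modes are resolved by the grid.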

Implementation of the methods based on sub-gradients [5], [6]
In this section, we implement the FDQ and PDQ methods on the system (3). Let A = |∇u| and B = |∇v|, and assume α_1 β_2 - α_2 β_1 ≠ 0. We discretize the nonlinear system for (x, y) ∈ Ω ⊂ R^2, where α_1(x_i, y_j), α_2(x_i, y_j), β_1(x_i, y_j), β_2(x_i, y_j) and the functions of (1) are evaluated at the grid points; the discretization can be written as in (20). Similarly, for (x, y, z) ∈ Ω ⊂ R^3, with α_1(x_i, y_j, z_k) and the other data evaluated likewise, we obtain (21), and we present the discrete forms of A and B accordingly. In the systems (20) and (21), Q^(1)_{i,k}, Q^(1)_{j,k}, Q^(1)_{κ,k} are the weighting coefficients of the first-order derivatives with respect to x, y, and z, respectively. In the PDQ method these are calculated using (11), and in the FDQ method they are obtained from (17) and (19). Here Nx, Ny, and Nz are the numbers of grid points in the x, y, and z directions on the domain [a, b].

Sub-gradient algorithms for PDQ or FDQ at grid points
The subgradient (related to the subderivative and subdifferential) of a convex functional is an idea that generalizes or approximates the derivative of a convex functional at nondifferentiable points. The definition is as follows: g(u, v) is a subgradient of J(u, v) at (u, v) if, for all (U, V),

J(U, V) ≥ J(u, v) + g(u, v)^T ((U, V) - (u, v)).

To minimize J, the subgradient method uses the iteration

(u^(k+1), v^(k+1)) = (u^(k), v^(k)) - α_k g^(k),

where k is the iteration number, (u^(k), v^(k)) is the kth iterate, g^(k) = (g_1^(k), g_2^(k))^T is any subgradient at (u^(k), v^(k)), and α_k > 0 is the kth step size. Thus, at each iteration of the subgradient method we take a step in the direction of a negative subgradient. As noted above, when J is differentiable, g^(k) simply reduces to ∇J(u^(k), v^(k)). It is also important to note that the subgradient method is not a descent method: the new iterate is not always the best iterate. We therefore keep track of the best solution found so far, i.e. the one with the smallest function value, by setting after each step

J_best^(k) = min{J(u^(1), v^(1)), J(u^(2), v^(2)), ..., J(u^(k), v^(k))},

which gives the best objective value found in k iterations. Since this value is non-increasing, it has a limit (which can be -∞). Two classes of algorithms for the sub-gradient method are provided below:
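The iteration and best-point bookkeeping above can be sketched generically as follows (a minimal sketch with our own names; the illustrative objective and the diminishing step-size rule are assumptions, not the paper's):

```python
import numpy as np

def subgradient_min(J, subgrad, x0, steps=2000, alpha=lambda k: 1.0 / (k + 1)):
    """Subgradient iteration x^{k+1} = x^k - alpha_k * g^k with best-point
    tracking: the method is not a descent method, so we record the smallest
    objective value seen so far."""
    x = np.asarray(x0, dtype=float)
    x_best, J_best = x.copy(), J(x)
    for k in range(steps):
        x = x - alpha(k) * subgrad(x)   # step along a negative subgradient
        Jx = J(x)
        if Jx < J_best:                 # keep the best iterate found so far
            x_best, J_best = x.copy(), Jx
    return x_best, J_best

# Illustrative nonsmooth objective: J(x) = |x1 - 3| + |x2 + 1|, minimum 0.
J = lambda x: abs(x[0] - 3) + abs(x[1] + 1)
g = lambda x: np.array([np.sign(x[0] - 3), np.sign(x[1] + 1)])  # a subgradient
```

With the diminishing step size α_k = 1/(k+1), the best recorded objective value approaches the minimum even though individual iterates oscillate.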
Step 5 - If the condition of Step 4 is satisfied, the optimal solution is found.
Step 5 - If the condition of Step 4 is satisfied, the optimal solution is found.

Numerical results
In this study, we apply the FDQ and PDQ sub-gradient algorithms to the Eikonal equation system and consider five examples to develop the equation system. Our goal is to solve the system (1) and to measure its errors, so we use the following three criteria for accuracy and convergence:

L_2 = sqrt( Σ_{i=1}^{N} |u_i^(exact) - u_i^(app)|^2 ),   L_∞ = max_i |u_i^(exact) - u_i^(app)|,   RMS = sqrt( (1/N) Σ_{i=1}^{N} |u_i^(exact) - u_i^(app)|^2 ),

where N is the number of grid points and u_i^(exact), u_i^(app) denote the exact and approximate solutions, respectively. The order of convergence is calculated by

order = log( E(N_1) / E(N_2) ) / log( N_2 / N_1 ),

where E is the L_2, L_∞, or RMS error obtained for N_1 and N_2, and N_1, N_2 indicate the numbers of grid points.
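These criteria can be computed as below (a sketch; the function names and the unnormalized L_2 convention are our assumptions):

```python
import numpy as np

def error_norms(u_exact, u_app):
    """L2, L-infinity, and root-mean-square errors between the exact and
    approximate solutions sampled at N grid points."""
    e = np.abs(np.asarray(u_exact) - np.asarray(u_app)).ravel()
    return {"L2": np.sqrt(np.sum(e**2)),
            "Linf": np.max(e),
            "RMS": np.sqrt(np.mean(e**2))}

def convergence_order(E1, N1, E2, N2):
    """Observed order from errors E1, E2 at N1 < N2 grid points:
    order = log(E1/E2) / log(N2/N1)."""
    return np.log(E1 / E2) / np.log(N2 / N1)
```

For example, an error that drops by a factor of four when the point count doubles corresponds to second-order convergence.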

Remark 1.
For investigating the stability of the methods, recall that a small perturbation in the initial data may generate a large perturbation in the solution. We therefore apply noisy data to demonstrate the stability of the proposed methods.

Example 1. Here α_1 = x^2 y, α_2 = xy^2, β_1 = sin(x, y), β_2 = cos(x, y), Γ is the point (0, 0), and the true solution of the system is u(x, y) = e^(x^2+y^2), v(x, y) = e^(-(x^2+y^2)). We use the grid points (14) in this example. To measure the accuracy of the solutions, the L_2 and L_∞ errors and the convergence order are presented in Table 1; error plots of the FDQ and PDQ algorithms are shown in Figure 1.

Example 2. Here α_1 = y sin(x), α_2 = y cos(x), β_1 = x sin(y), β_2 = x cos(y), and the true solution is u(x, y) = e^(sin(x) sin(y)), v(x, y) = e^(cos(x) cos(y)). In this example, we use the new grid points (15) in the FDQ and PDQ algorithms. The L_2 and L_∞ errors and the convergence of the methods are shown in Table 2, and the corresponding error plots are presented in Figure 2.

Example 3. In this example, Γ consists of three separate points. Error plots for the u and v functions are shown in Figure 3; the L_∞ and RMS errors and the convergence order are reported in Table 3.

Example 4. The true solution of (1) is v(x, y) = sin(2πx) sin(2πy) + cos(x^2 + y^2).
Here Γ is the point (0, 0). We apply the FDQ and PDQ algorithms to the equation system and, for high accuracy, use the grid points (14). The convergence of the methods is compared in Figure 4; the numerical errors and the convergence order are given in Table 4, and the two-dimensional and three-dimensional numerical solutions are shown in Figure 6.

Example 5. We use both methods on the system of equations with the grid points (14). The RMS and L_∞ error accuracies and the convergence order are given in Table 5; error plots are shown in Figure 5.

Conclusion
In this paper, we have proposed a hybrid of the sub-gradient idea with the PDQ and FDQ methods for solving the system (1) and developing the Eikonal equation system. We used three families of grid points (uniform and non-uniform) on different problems to study the accuracy and convergence of the equation system (1), and we showed that the choice of grid points for reducing errors and achieving accuracy and convergence depends on the type of problem. In particular, we implemented the new grid points in the second example and showed that this new grid-point scheme is also effective and useful for solving the system of nonlinear equations. In this study, we found that for solving the system (1) the FDQ method requires less computational time than the PDQ method, while the PDQ method attains higher accuracy than the FDQ method and reduces errors sufficiently. We have reported the error norms, the root mean square error, the orders of convergence, and the accuracy of the presented methods in the tables, together with the error plots; as a result, we achieved convergence of the problems and acceptable numerical results. Directions for future research are as follows:
1. One of the biggest challenges in sub-gradient optimization is step-size selection. In this article, our step sizes were found by trial and error; an accurate treatment requires a convergence theory, which will be explored in future articles.
2. The method requires a modified scheme for nonconvex problems, which we will address in future work.
3. The error analysis of this method and its limits should be established using functional analysis.
4. Applied benchmarks in vehicle design and solid mechanics could offer a new perspective on the usefulness of this method.
5. We observe that the results of this method are not suitable for stiff systems, and the method needs to be modified for them.