A new generalized shift-splitting method for nonsymmetric saddle point problems

Recently, Huang and Huang [Journal of Computational and Applied Mathematics, 328 (2018) 381–399] proposed a modified generalized shift-splitting preconditioned (MGSSP) method for solving large sparse saddle point problems, and gave the corresponding theoretical analysis and numerical experiments. In this paper, based on the MGSSP method, we generalize the MGSSP algorithm and present a new generalized shift-splitting preconditioned (NGSSP) method for nonsymmetric saddle point problems. Moreover, by a similar theoretical analysis, we analyze the convergence conditions of the matrix splitting iteration method induced by the NGSSP preconditioned saddle point matrices. Finally, one example is provided to confirm the effectiveness. MSC: 65F10, 65F15, 65F50


Introduction
Consider the following $2\times 2$ block saddle point problem

$$\mathcal{A}u \equiv \begin{pmatrix} A & B \\ -B^{T} & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f \\ -g \end{pmatrix} \equiv b, \qquad (1)$$

where $A \in \mathbb{R}^{n\times n}$ is symmetric positive definite, $B \in \mathbb{R}^{n\times m}$ with $m \le n$ has full column rank, $B^{T} \in \mathbb{R}^{m\times n}$ is the transpose of $B$, and $f \in \mathbb{R}^{n}$, $g \in \mathbb{R}^{m}$ are two given vectors. Problems of this form appear in many different applications of scientific computing, such as constrained optimization, 1 the finite element method for solving the Navier-Stokes equations, [2][3][4] constrained least squares problems and generalized least squares problems, [5][6][7][8][9][10][11][12][13][14][15][16][17] and so on. In recent years, there has been a surge of interest in saddle point problems of the form (1), and a large number of stationary iterative methods have been proposed. For example, Santos et al. 6 studied preconditioned iterative methods for solving the singular augmented system with A = I. Golub et al. 18 presented SOR-like algorithms for solving linear systems (1). Darvishi and Hessari 19 studied the SSOR method for solving augmented systems. Bai and Wang 1,[20][21][22] presented the GSOR method, the parameterized Uzawa (PU) method and the inexact parameterized Uzawa (PIU) method for solving linear systems (1). Zhang and Lu 23 presented the generalized symmetric SOR method for augmented systems. Peng and Li 24 studied unsymmetric block overrelaxation-type methods for saddle point problems. Bai and Golub [25][26][27][28][29][30][31] presented splitting iteration methods such as the Hermitian and skew-Hermitian splitting (HSS) iteration scheme and its preconditioned variants, Krylov subspace methods such as the preconditioned conjugate gradient (PCG), preconditioned MINRES (PMINRES) and restrictively preconditioned conjugate gradient (RPCG) iteration schemes, and preconditioning techniques related to Krylov subspace methods such as HSS, block-diagonal, block-triangular, and constraint preconditioners, and so on.
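To make the setting concrete, the following short Python sketch (our illustration, not code from the paper) assembles a small random instance of the nonsymmetric saddle point system (1); the sizes n and m, the random data, and the sign convention of the right-hand side are illustrative assumptions only.

```python
import numpy as np

# Illustrative sizes; the paper's matrices come from a Maxwell finite element discretization.
n, m = 8, 4
rng = np.random.default_rng(0)

M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive definite (1,1) block
B = rng.standard_normal((n, m))      # assumed to have full column rank

# Nonsymmetric saddle point matrix of (1): [[A, B], [-B^T, 0]]
A_sp = np.block([[A, B], [-B.T, np.zeros((m, m))]])

f = rng.standard_normal(n)
g = rng.standard_normal(m)
b = np.concatenate([f, -g])          # right-hand side (sign convention assumed)

u_exact = np.linalg.solve(A_sp, b)   # reference solution for later comparison
print(np.linalg.norm(A_sp @ u_exact - b))
```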
Recently, based on a modified generalized shift-splitting of the saddle point matrix $\mathcal{A}$, Huang and Huang 32 constructed a modified generalized shift-splitting preconditioned (MGSSP) method for solving large sparse saddle point problems. Their theoretical analysis and numerical examples show that the corresponding iteration method unconditionally converges to the unique solution of the saddle point problem.
For large, sparse or structured matrices, iterative methods are an attractive option. In particular, Krylov subspace methods apply techniques that involve orthogonal projections onto subspaces of the form $\mathcal{K}(A, b) = \operatorname{span}\{b, Ab, A^{2}b, \ldots, A^{n-1}b, \ldots\}$. The conjugate gradient (CG) method, the minimum residual (MINRES) method and the generalized minimal residual (GMRES) method are common Krylov subspace methods. The CG method is used for symmetric positive definite matrices, MINRES for symmetric and possibly indefinite matrices, and GMRES for unsymmetric matrices. 33 In this paper, on the basis of the special shift-splitting of Huang and Huang, 32 we generalize the MGSSP algorithm and present the new generalized shift-splitting preconditioned (NGSSP) method for nonsymmetric saddle point problems. Moreover, by a similar theoretical analysis, we analyze the convergence conditions of the corresponding matrix splitting iteration method of the NGSSP preconditioned saddle point matrices. Finally, one example is provided to confirm the effectiveness.
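As a brief illustration of these notions (not part of the original paper), the sketch below builds the first few Krylov basis vectors $b, Ab, A^{2}b, \ldots$ explicitly and then solves the same toy system with SciPy's GMRES, the method appropriate for unsymmetric matrices.

```python
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(1)
n = 20
A = rng.standard_normal((n, n)) + n * np.eye(n)   # generic unsymmetric, well conditioned
b = rng.standard_normal(n)

# Columns spanning the Krylov subspace K_k(A, b) = span{b, Ab, ..., A^{k-1} b}
k = 5
K = np.empty((n, k))
K[:, 0] = b
for j in range(1, k):
    K[:, j] = A @ K[:, j - 1]
print("dim K_k(A, b) =", np.linalg.matrix_rank(K))

# GMRES extracts its k-th iterate from this subspace; suitable for unsymmetric A.
x, info = gmres(A, b, atol=1e-10)
print("GMRES flag:", info, " residual:", np.linalg.norm(A @ x - b))
```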

New generalized shift-splitting preconditioned (NGSSP) method
Recently, for the coefficient matrix of the augmented system (1), Huang and Huang 32 constructed a modified generalized shift-splitting of $\mathcal{A}$ depending on two constants $\alpha>0$ and $\beta>0$, where $I_n$ and $I_m$ denote identity matrices. Based on the iteration methods studied in Huang and Huang, 32 we establish the following new generalized shift-splitting of the saddle point matrix $\mathcal{A}$:

$$\mathcal{A} = \begin{pmatrix} \alpha I_n + \gamma A & \gamma B \\ -\gamma B^{T} & \beta I_m \end{pmatrix} - \begin{pmatrix} \alpha I_n + (\gamma-1)A & (\gamma-1)B \\ -(\gamma-1)B^{T} & \beta I_m \end{pmatrix} =: P_{\mathrm{NGSSP}} - R_{\mathrm{NGSSP}},$$

where $\alpha>0$, $\beta>0$, $\gamma>0$ are three constant numbers and $I_n$ and $I_m$ are identity matrices. By this special splitting, the following new generalized shift-splitting preconditioned (NGSSP) method can be defined for solving the saddle point problem (1).

New generalized shift-splitting preconditioned (NGSSP) method: Given an initial vector $u^{0} \in \mathbb{R}^{m+n}$ and three relaxation parameters $\alpha>0$, $\beta>0$, $\gamma>0$, for $k = 0, 1, 2, \ldots$ until the iteration sequence $\{u^{k}\}$ converges, compute

$$P_{\mathrm{NGSSP}}\, u^{k+1} = R_{\mathrm{NGSSP}}\, u^{k} + b. \qquad (4)$$

It is easy to see that the iteration matrix of the NGSSP iteration is

$$G_{\mathrm{NGSSP}} = P_{\mathrm{NGSSP}}^{-1} R_{\mathrm{NGSSP}} = \begin{pmatrix} \alpha I_n + \gamma A & \gamma B \\ -\gamma B^{T} & \beta I_m \end{pmatrix}^{-1} \begin{pmatrix} \alpha I_n + (\gamma-1)A & (\gamma-1)B \\ -(\gamma-1)B^{T} & \beta I_m \end{pmatrix}. \qquad (5)$$

If we use a Krylov subspace method such as the GMRES (generalized minimal residual) method or its restarted variant to approximate the solution of this system of linear equations, then $P_{\mathrm{NGSSP}}$ can serve as a preconditioner. We call $P_{\mathrm{NGSSP}}$ the NGSSP preconditioner for the generalized saddle point matrix $\mathcal{A}$.
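The following minimal dense sketch, assuming the splitting written above, forms $P_{\mathrm{NGSSP}}$ and $R_{\mathrm{NGSSP}}$ and runs the stationary scheme (4) on a small random problem; all data and parameter values (alpha, beta, gamma) are illustrative assumptions, and in practice $P_{\mathrm{NGSSP}}$ would never be formed or inverted densely.

```python
import numpy as np

def ngssp_splitting(A, B, alpha, beta, gamma):
    """Dense P_NGSSP and R_NGSSP with A_sp = P - R (sketch of the splitting above)."""
    n, m = B.shape
    P = np.block([[alpha * np.eye(n) + gamma * A, gamma * B],
                  [-gamma * B.T, beta * np.eye(m)]])
    R = np.block([[alpha * np.eye(n) + (gamma - 1) * A, (gamma - 1) * B],
                  [-(gamma - 1) * B.T, beta * np.eye(m)]])
    return P, R

def ngssp_iteration(A, B, b, alpha, beta, gamma, maxit=500, tol=1e-8):
    """Stationary NGSSP iteration (4): solve P u^{k+1} = R u^k + b."""
    n, m = B.shape
    A_sp = np.block([[A, B], [-B.T, np.zeros((m, m))]])
    P, R = ngssp_splitting(A, B, alpha, beta, gamma)
    u = np.zeros(n + m)
    for k in range(1, maxit + 1):
        u = np.linalg.solve(P, R @ u + b)
        if np.linalg.norm(b - A_sp @ u) <= tol * np.linalg.norm(b):
            break
    rel_res = np.linalg.norm(b - A_sp @ u) / np.linalg.norm(b)
    return u, k, rel_res

# Illustrative use on a random problem with an arbitrary parameter guess.
rng = np.random.default_rng(0)
n, m = 8, 4
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
B = rng.standard_normal((n, m))
b = rng.standard_normal(n + m)
u, iters, rel_res = ngssp_iteration(A, B, b, alpha=0.1, beta=0.1, gamma=1.5)
print("NGSSP iterations:", iters, " relative residual:", rel_res)
```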
In every iteration of the NGSSP iteration (4), or of the preconditioned Krylov subspace method, a residual equation of the form

$$P_{\mathrm{NGSSP}}\, z = r \qquad (7)$$

needs to be solved for a given vector $r$. Hence, analogous to Algorithm 2.1 in Cao, 34 we can derive the following algorithmic version of the NGSSP iteration method.
Algorithm 2.1. For a given vector $r = [r_{1}^{T}, r_{2}^{T}]^{T}$, the vector $z = [z_{1}^{T}, z_{2}^{T}]^{T}$ in (7) can be computed by the following steps:
Step 1: Compute $t_{1} = \beta r_{1} - \gamma B r_{2}$;
Step 2: Solve $(\alpha\beta I_{n} + \beta\gamma A + \gamma^{2} BB^{T})\, z_{1} = t_{1}$;
Step 3: Compute $z_{2} = \frac{1}{\beta}\,(r_{2} + \gamma B^{T} z_{1})$.
Concerning the new generalized shift-splitting preconditioned (NGSSP) method, note that when $\gamma = 2$ the NGSSP method reduces to the MGSSP method, so the NGSSP method is a generalization of the existing iterative algorithm.
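A direct transcription of Algorithm 2.1 into Python (a dense illustrative sketch; in practice the symmetric positive definite system in Step 2 would be handled by a sparse Cholesky factorization or an inner iteration rather than a dense solve):

```python
import numpy as np

def apply_ngssp_inv(A, B, alpha, beta, gamma, r):
    """Apply z = P_NGSSP^{-1} r via Steps 1-3 of Algorithm 2.1."""
    n, m = B.shape
    r1, r2 = r[:n], r[n:]
    # Step 1: t1 = beta*r1 - gamma*B*r2
    t1 = beta * r1 - gamma * (B @ r2)
    # Step 2: solve (alpha*beta*I_n + beta*gamma*A + gamma^2*B*B^T) z1 = t1
    M1 = alpha * beta * np.eye(n) + beta * gamma * A + gamma**2 * (B @ B.T)
    z1 = np.linalg.solve(M1, t1)
    # Step 3: back-substitute z2 = (r2 + gamma*B^T*z1) / beta
    z2 = (r2 + gamma * (B.T @ z1)) / beta
    return np.concatenate([z1, z2])
```

Wrapped in a scipy.sparse.linalg.LinearOperator, this routine can be passed as the preconditioner M to GMRES or BiCGSTAB, which is how the NGSSP preconditioner would be used in practice.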

Convergence of the NGSSP method
Now, we turn to study the convergence of the NGSSP iteration for solving the saddle point problem (1). It is well known that the iteration method (4) is convergent for every initial guess if and only if $\rho(G)<1$, where $\rho(G)$ denotes the spectral radius of $G$. In Huang and Huang, 32 the spectral properties of the iteration matrix and of the preconditioned matrix $P_{\mathrm{MGSSP}}^{-1}\mathcal{A}$ were established for the MGSSP method. In this section, we analyze the spectral properties of the iteration matrix $P_{\mathrm{NGSSP}}^{-1} R_{\mathrm{NGSSP}}$ and show that the NGSSP iterative method is convergent under certain conditions.

Lemma 3.1. Let $A \in \mathbb{R}^{n\times n}$ be symmetric positive definite, let $B \in \mathbb{R}^{n\times m}$ have full column rank, and let $G_{\mathrm{NGSSP}}$ be defined as in (5). Assume $\lambda$ is an eigenvalue of the NGSSP iteration matrix $G_{\mathrm{NGSSP}} = P_{\mathrm{NGSSP}}^{-1} R_{\mathrm{NGSSP}}$ and $(x^{T}, y^{T})^{T}$ is a corresponding eigenvector. Then $\lambda$ satisfies the quadratic equation

$$(\alpha\beta + \gamma s\beta + \gamma^{2}u)\,\lambda^{2} - \big[2\alpha\beta + (2\gamma-1)s\beta + (2\gamma^{2}-2\gamma)u\big]\,\lambda + \alpha\beta + (\gamma-1)s\beta + (\gamma-1)^{2}u = 0,$$

where $s = \frac{x^{*}Ax}{x^{*}x} > 0$ and $u = \frac{x^{*}BB^{T}x}{x^{*}x} \ge 0$.

Proof. Similar to the proofs of Lemmas 2.1 and 2.2 in Cao et al., 35 the above lemma can be obtained directly.
Theorem 3.4. Let $\lambda$ be an eigenvalue of the NGSSP iteration matrix $G_{\mathrm{NGSSP}}$, and let $s$ and $u$ be defined as in Lemma 3.1. If the iteration parameters $\alpha>0$, $\beta>0$, $\gamma>0$ satisfy $\frac{\beta}{\alpha} \ge \frac{4u}{s^{2}}$, then all eigenvalues of the NGSSP iteration matrix $G_{\mathrm{NGSSP}}$ are positive real.
Proof. By some algebra on the quadratic equation in Lemma 3.1, it is easy to obtain the explicit formula for the eigenvalues:

$$\lambda_{\pm}(G_{\mathrm{NGSSP}}) = \frac{2\alpha\beta + (2\gamma-1)s\beta + (2\gamma^{2}-2\gamma)u \pm \sqrt{(s\beta)^{2} - 4u\alpha\beta}}{2(\alpha\beta + \gamma s\beta + \gamma^{2}u)},$$

where $\lambda_{+}$ corresponds to the $+$ sign and $\lambda_{-}$ to the $-$ sign. Hence, when $(s\beta)^{2} - 4u\alpha\beta \ge 0$, that is, $\frac{\beta}{\alpha} \ge \frac{4u}{s^{2}}$, the discriminant is nonnegative; since the sum and the product of $\lambda_{+}$ and $\lambda_{-}$ are both positive, all eigenvalues of the NGSSP iteration matrix $G_{\mathrm{NGSSP}}$ are positive real.
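The statement can be checked numerically. The sketch below (illustrative random data, not the paper's test matrices) chooses beta large enough that $\beta/\alpha \ge 4u/s^{2}$ holds uniformly, using the bounds $u \le \lambda_{\max}(BB^{T})$ and $s \ge \lambda_{\min}(A)$, and then verifies that the spectrum of $G_{\mathrm{NGSSP}}$ is real, positive and of modulus less than one.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 10, 4
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                      # symmetric positive definite
B = rng.standard_normal((n, m))                  # full column rank (generically)

alpha, gamma = 1.0, 1.5
# Uniform sufficient version of the hypothesis beta/alpha >= 4u/s^2:
# for every eigenvector, u <= lambda_max(B B^T) and s >= lambda_min(A).
u_max = np.linalg.eigvalsh(B @ B.T).max()
s_min = np.linalg.eigvalsh(A).min()
beta = 1.1 * alpha * 4.0 * u_max / s_min**2      # safely above the bound

P = np.block([[alpha * np.eye(n) + gamma * A, gamma * B],
              [-gamma * B.T, beta * np.eye(m)]])
R = np.block([[alpha * np.eye(n) + (gamma - 1) * A, (gamma - 1) * B],
              [-(gamma - 1) * B.T, beta * np.eye(m)]])

eig = np.linalg.eigvals(np.linalg.solve(P, R))   # spectrum of G_NGSSP
print("max |Im(lambda)|:", np.abs(eig.imag).max())   # ~0: all eigenvalues real
print("min Re(lambda):  ", eig.real.min())           # > 0: all eigenvalues positive
print("spectral radius: ", np.abs(eig).max())        # < 1: the iteration converges
```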
Remark 3.1. On the one hand, the NGSSP method is a generalization of the MGSSP method. On the other hand, when appropriate parameters are selected, the NGSSP method can converge faster than the MGSSP method.

Numerical examples
In this section, we verify the above conclusions through numerical experiments. MATLAB 7.1 was used to conduct the numerical experiments, and the test matrices were generated from a mixed finite element discretization of the two-dimensional time-harmonic Maxwell equations. In all of our runs we use zero initial guesses and stop iterating once the relative residual has been reduced by at least six orders of magnitude, i.e. when $\| b - \mathcal{A}x_{k}\|_{2} \le 10^{-6}\|b\|_{2}$.

Example 1. To further evaluate the effectiveness of the new preconditioned matrix $P_{\mathrm{NGSSP}}^{-1}\mathcal{A}$ combined with Krylov subspace methods, we consider a numerical example based on the two-dimensional time-harmonic Maxwell equations with constant coefficients: find the vector field $u$ and the Lagrange multiplier $p$ satisfying the corresponding curl-curl saddle point system. Here, $\Omega \subset \mathbb{R}^{3}$ is a simply connected polyhedral domain with a connected boundary $\partial\Omega$, and $n$ denotes the outward unit normal on $\partial\Omega$. The datum $f$ is a given generic source. Finite element discretization using Nédélec elements of the first kind for the approximation of the vector field and standard nodal elements for the multiplier yields the above $2\times 2$ block saddle point problem (1).
For simplicity, we take the generic source f = 1 and a finite element subdivision of the domain, and we report the sparsity information of the resulting matrices on different grids, where nz(A) denotes the number of nonzero elements of the matrix A.
Since the new preconditioner has three parameters, we test different values in the numerical experiments. The experiments show the spectra of the NGSSP preconditioned matrix $P_{\mathrm{NGSSP}}^{-1}\mathcal{A}$ and the MGSSP preconditioned matrix $P_{\mathrm{MGSSP}}^{-1}\mathcal{A}$ for different parameter choices, which coincide with the theoretical analysis.
In Figures 2, 4 and 6 we display the eigenvalues of the iteration matrix $P_{\mathrm{NGSSP}}^{-1}R_{\mathrm{NGSSP}}$ for different parameter choices. In Tables 2 and 3 we report the iteration counts for the preconditioned matrices $P_{\mathrm{NGSSP}}^{-1}\mathcal{A}$ and $P_{\mathrm{MGSSP}}^{-1}\mathcal{A}$ with different parameters when applied to the BICGSTAB and GMRES Krylov subspace iterative methods on three meshes. Here It BSTAB($P_{\mathrm{NGSSP}}^{-1}\mathcal{A}$) and Res BSTAB($P_{\mathrm{NGSSP}}^{-1}\mathcal{A}$) denote the iteration number and the relative residual of the preconditioned matrix $P_{\mathrm{NGSSP}}^{-1}\mathcal{A}$ with the BICGSTAB method, respectively; It GMRES($P_{\mathrm{NGSSP}}^{-1}\mathcal{A}$) and Res GMRES($P_{\mathrm{NGSSP}}^{-1}\mathcal{A}$) denote the corresponding quantities for the GMRES method; and It BSTAB($P_{\mathrm{MGSSP}}^{-1}\mathcal{A}$), Res BSTAB($P_{\mathrm{MGSSP}}^{-1}\mathcal{A}$), It GMRES($P_{\mathrm{MGSSP}}^{-1}\mathcal{A}$) and Res GMRES($P_{\mathrm{MGSSP}}^{-1}\mathcal{A}$) are defined analogously.
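The Maxwell finite element matrices used in Tables 2 and 3 are not reproduced here; the sketch below only indicates, on a small random saddle point system, how such iteration counts can be collected with SciPy's GMRES and BiCGSTAB when $P_{\mathrm{NGSSP}}$ is applied as a preconditioner and the stopping rule $\|b - \mathcal{A}u_{k}\|_{2} \le 10^{-6}\|b\|_{2}$ is used (the tolerance keyword is rtol in recent SciPy, tol in older releases; in a real code the preconditioner solve would use Algorithm 2.1 instead of a dense factorization).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres, bicgstab

rng = np.random.default_rng(3)
n, m = 60, 20
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
B = rng.standard_normal((n, m))
A_sp = np.block([[A, B], [-B.T, np.zeros((m, m))]])
b = rng.standard_normal(n + m)

alpha, beta, gamma = 0.01, 0.01, 1.5             # illustrative parameter choice
P = np.block([[alpha * np.eye(n) + gamma * A, gamma * B],
              [-gamma * B.T, beta * np.eye(m)]])
prec = LinearOperator((n + m, n + m), matvec=lambda r: np.linalg.solve(P, r))

counts = {"GMRES": 0, "BiCGSTAB": 0}

def counter(name):
    def cb(_):
        counts[name] += 1
    return cb

xg, _ = gmres(A_sp, b, M=prec, rtol=1e-6, atol=0.0,
              callback=counter("GMRES"), callback_type="pr_norm")
xb, _ = bicgstab(A_sp, b, M=prec, rtol=1e-6, atol=0.0,
                 callback=counter("BiCGSTAB"))

for name, x in (("GMRES", xg), ("BiCGSTAB", xb)):
    res = np.linalg.norm(b - A_sp @ x) / np.linalg.norm(b)
    print(f"{name}: iterations = {counts[name]}, relative residual = {res:.2e}")
```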
Remark 4.1. Figures 2 to 7 show that the distribution of the eigenvalues of the iteration matrix confirms the above theoretical analysis.

Remark 4.2. From Tables 2 and 3, it is easy to see that the preconditioners $P_{\mathrm{NGSSP}}$ and $P_{\mathrm{MGSSP}}$ perform well when the parameters satisfy the convergence conditions. In actual numerical simulations, the convergence speed can be improved by selecting appropriate parameters.

Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by the