
Iterative Methods for Linear Equations


This post covers several methods for solving the linear system \(Ax = b\), where A is a large sparse matrix, often available only as an operator, as in discretized partial differential equations. Such problems are too large for direct methods like LU decomposition, which run out of memory, so iterative solvers are needed.

Stationary methods

Stationary methods use the iteration

\[x_{k+1}=-M^{-1}(A-M)x_k+M^{-1}b\tag{1} \]

Here M is a matrix that is easier to invert than A. It is easy to verify that the solution of \(Ax=b\) is a fixed point of this iteration: rewriting (1) as \(x_{k+1}=x_k-M^{-1}(Ax_k-b)\) makes the correction term vanish when \(Ax_k=b\). Three common choices of M are listed below (a small code sketch follows the list):

  1. The diagonal \(D\) of \(A\) (Jacobi iteration):

\[x^{(k+1)}_i={1\over a_{ii}}\left(-\sum_{j\ne i}a_{ij}x^{(k)}_j+b_i\right)\tag{2} \]

  2. The lower triangular part \(D+L\) of \(A\) (Gauss-Seidel iteration):

\[x^{(k+1)}_i={1\over a_{ii}}\left(-\sum_{j<i}a_{ij}x^{(k+1)}_j- \sum_{j>i}a_{ij}x^{(k)}_j+b_i\right)\tag{3} \]

  3. A combination of the two, \({1\over \omega}D+L\) (SOR iteration):

\[x_i^{(k+1)}=(1-\omega)x_i^{(k)}+{\omega\over a_{ii}}\left(b_i-\sum_{j<i}a_{ij}x_j^{(k+1)}-\sum_{j>i}a_{ij}x_j^{(k)}\right)\tag{4} \]
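
As an illustration, here is a minimal NumPy sketch of the Jacobi iteration (2) and the SOR iteration (4). The function names, dense storage, and fixed iteration count are my own choices for demonstration; in practice one would exploit the sparsity of A.

import numpy as np

def jacobi(A, b, x, n):
    # Jacobi iteration (2): every component is updated from the previous iterate.
    D = np.diag(A)              # diagonal of A as a vector
    R = A - np.diag(D)          # off-diagonal part of A
    for _ in range(n):
        x = (b - R @ x) / D
    return x

def sor(A, b, x, n, omega=1.0):
    # SOR iteration (4); omega = 1 reduces to Gauss-Seidel (3).
    # x should be a NumPy float array; it is updated in place component by component.
    for _ in range(n):
        for i in range(len(b)):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x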

Conjugate gradient (CG)

When \(A\) is symmetric positive definite, the problem is equivalent to minimizing \(\phi(x)={1\over 2}x^TAx-b^Tx\). Starting from a point \(x\) and searching along a direction \(p\) for the minimum of \(\phi(x+ap)\) gives

\[a=\arg\min\phi(x+ap)={p^Tr \over p^TAp }, \quad\text{where }r=b-Ax\tag{5} \]
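
Spelling out the minimization behind (5): setting the derivative of \(\phi(x+ap)\) with respect to \(a\) to zero gives

\[{d\over da}\phi(x+ap)=p^TA(x+ap)-p^Tb=a\,p^TAp-p^Tr=0\ \Longrightarrow\ a={p^Tr\over p^TAp}. \]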

Taking \(x+ap\) as the new starting point and repeating the search along a different direction keeps reducing the residual \(r\) until the solution is found. What remains is the choice of search directions. Since \(\nabla\phi(x)=-r\), i.e. \(r\) is the direction of steepest descent of \(\phi\), one could simply take \(r\) as the search direction at every step, but there is a better choice. Let \(P_{i-1}=[p_0, \cdots, p_{i-1}]\in\mathbb{R}^{n\times i}\) collect the first \(i\) search directions, so that \(x_i=x_0+P_{i-1}y_{i-1}\), where \(y_{i-1}\) collects the step lengths \(a\) of the previous iterations. Then in the \(i\)-th iteration,

\[\begin{align} \phi(x_i+ap_i)&=\phi(x_0+P_{i-1}y_{i-1}+ap_i)\\ &=\phi(x_0+P_{i-1}y_{i-1})+ap_i^TA(x_0+P_{i-1}y_{i-1})+{1\over2}a^2p_i^TAp_i-ab^Tp_i\\ &=\phi(x_0+P_{i-1}y_{i-1})-ap_i^Tr_0+ay_{i-1}^TP_{i-1}^TAp_i+{a^2\over2}p_i^TAp_i \tag{6} \end{align} \]

If \(P_{i-1}^TAp_i=0\), i.e. \(\forall j<i,\ p_j^TAp_i=0\), the expression reduces to

\[\phi(x_i+ap_i)=\phi(x_i)-ap_i^Tr_0+{a^2\over2}p_i^TAp_i\tag{7} \]

Note that \(a\) and \(y_{i-1}\) are now decoupled, so

\[\underset{y\in\mathbb{R}^{i+1}}{\arg\min}\space\phi(x_0+P_iy)=[y_{i-1},a]^T \tag{8} \]

\(\mathcal{K}_i=\left<p_0,\cdots,p_i\right>\) ,其中尖括号表示向量张成的空间,\(\mathcal{K}_i+x_0=\{x_0+x|x\in\mathcal{K}_i\}\) 。则按这种方法迭代的 \(x_i\) 满足

\[x_i=\underset{x\in x_0+\mathcal{K}_{i-1}}{\arg\min}\space\phi(x)\tag{9} \]

Accordingly, we choose \(p_i\) to be \(r_i\) minus its projection onto \(A\mathcal{K}_{i-1}=\{Ax\mid x\in\mathcal{K}_{i-1}\}\). Writing \((A\mathcal{K}_{i-1})^\perp=\{x\mid\forall x'\in\mathcal{K}_{i-1},\ x^TAx'=0\}\), we have

\[r_i=\gamma_ip_i+z_i,\quad\text{where }z_i\in A\mathcal{K}_{i-1},\space p_i\in(A\mathcal{K}_{i-1})^\perp\tag{10} \]

At this point we already have an algorithm (a sketch is given below), but every step needs a Gram-Schmidt process against all previous directions to compute \(p_i\). We now show that \(p_i\) is in fact a linear combination of \(r_i\) and \(p_{i-1}\), which removes this cost. First we prove a few propositions.
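
A minimal sketch of this unoptimized version, under the assumption that making each new direction A-orthogonal to all stored directions by explicit Gram-Schmidt is acceptable (the function name and storage scheme are my own; the inner loop is exactly the cost the propositions remove):

import numpy as np

def CG_naive(A, b, x, n):
    # Unoptimized CG: A-orthogonalize the residual against all previous directions,
    # then take the optimal step (5) along the resulting direction.
    x = np.array(x, dtype=float)
    P = []                                  # all previous search directions
    for _ in range(n):
        r = b - A @ x                       # current residual
        Ar = A @ r
        p = r.copy()
        for q in P:                         # Gram-Schmidt in the A-inner product
            p = p - (q @ Ar) / (q @ (A @ q)) * q
        a = (p @ r) / (p @ (A @ p))         # step length from (5)
        x = x + a * p
        P.append(p)
    return x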

Proposition 1: \(\mathcal{K}_k=\left<r_0,Ar_0,\cdots,A^kr_0\right>\)

Proof: By induction. The case \(k=0\) is immediate. Suppose the proposition holds for \(i-1\). Noting that

\[r_i-r_0=-AP_{i-1}y_{i-1}\in A\mathcal{K}_{i-1}=\left<Ar_0,\cdots,A^ir_0\right>\tag{11} \]

\(z_i\in A\mathcal{K}_{i-1}\), 则

\[\begin{align} p_i={1\over\gamma_i}[r_0-(z_i+r_0-r_i)]\in\left<r_0,Ar_0,\cdots,A^ir_0\right>\\ \mathcal{K}_i=\left<\mathcal{K}_{i-1},p_i\right>=\left<r_0,\cdots,A^ir_0\right>\quad\quad\quad\quad\square\tag{12} \end{align} \]

Proposition 2: \(P_{i-1}^Tr_i=0\)

Proof: As shown above, \(x_i=x_0+P_{i-1}y_{i-1}\), where

\[\begin{align} y_{i-1}&=\underset{y}{\arg\min}\,\phi(x_0+P_{i-1}y)\\ &=\underset{y}{\arg\min}\left\{\phi(x_0)+x_0^TAP_{i-1}y+{1\over2}y^T P_{i-1}^TAP_{i-1}y-b^TP_{i-1}y\right\}\\ &=(P_{i-1}^TAP_{i-1})^{-1}P_{i-1}^Tr_0\tag{13}\\ P_{i-1}^Tr_i&=P_{i-1}^T(r_0-AP_{i-1}y_{i-1})=0\qquad\qquad\square \end{align} \]

Proposition 3: \(\forall j\ne i,\space r_i^Tr_j=0\)

Proof: By Proposition 2, \(\forall x\in\mathcal{K}_{i-1},\space x^Tr_i=0\), so it suffices to show \(r_i\in \mathcal{K}_i\). By induction: the case \(i=1\) is immediate; if the claim holds for \(i-1\), then \(r_{i-1}\in\mathcal{K}_{i-1}\subset\mathcal{K}_i\) and \(r_i=r_{i-1}-a_{i-1}Ap_{i-1}\); by Proposition 1, \(A\mathcal{K}_{i-1}\subset\mathcal{K}_i\), so \(Ap_{i-1}\in\mathcal{K}_i\) and hence \(r_i\in\mathcal{K}_i\qquad\square\)

By Proposition 3, \(\mathcal{K}_i=\left<r_0,\cdots,r_i\right>\), and \(r_0,\cdots,r_i\) form an orthogonal basis of it.

Proposition 4: \(p_i\in\left<r_i,p_{i-1}\right>\)

Proof: First, \(z_i\) satisfies

\[z_i=\underset{z\in A\mathcal{K}_{i-1}}{\arg\min}\space\left\|r_i-z\right\|^2\tag{14} \]

We may write \(z_i=AP_{i-1}y_{i-1}\) with \(y_{i-1}\in\mathbb{R}^i\) (reusing the symbol \(y_{i-1}\) locally), so that

\[y_{i-1}=\arg\min \left\|r_i-AP_{i-1}y_{i-1}\right\|^2\tag{15} \]

\(y_{i-1}=[w_{i-1},\space\beta_{i-1}]^T\),又 \(r_{i-1}-r_i=Ap_{i-1}\),易得

\[\begin{align} \left\|r_i-z_i\right\|^2&=\left\|\left(1+{\beta_{i-1}\over a_{i-1}}\right)r_i-{\beta_{i-1}\over a_{i-1}}r_{i-1}-AP_{i-2}w_{i-1}\right\|^2\\ &=\left(1+{\beta_{i-1}\over a_{i-1}}\right)^2\left\|r_i\right\|^2+ \left({\beta_{i-1}\over a_{i-1}}\right)^2 \left\|r_{i-1}+AP_{i-2}\left({a_{i-1}\over\beta_{i-1}}w_{i-1}\right)\right\|^2\tag{16} \end{align} \]

Comparing the second term of the last expression with \(\left\|r_{i-1}-z_{i-1}\right\|^2\), which \(z_{i-1}\) minimizes by (14), gives

\[w_{i-1}=-{\beta_{i-1}\over a_{i-1}}y_{i-2}\tag{17} \]

\[AP_{i-2}w_{i-1}=-{\beta_{i-1}\over a_{i-1}}(r_{i-1}-\gamma_{i-1}p_{i-1}) \]

Substituting back into \(\gamma_ip_i=r_i-z_i\) gives

\[\begin{aligned} \gamma_ip_i&=r_{i-1}-a_{i-1}Ap_{i-1}+{\beta_{i-1}\over a_{i-1}}(r_{i-1}-\gamma_{i-1}p_{i-1})-\beta_{i-1}Ap_{i-1}\\ &=\left(1+{\beta_{i-1}\over a_{i-1}}\right)r_i-{\beta_{i-1}\over a_{i-1}}\gamma_{i-1}p_{i-1}\qquad\qquad \qquad\square \end{aligned} \]

Final implementation

import numpy as np

def CG(A, b, x, n):
    # Conjugate gradient for symmetric positive definite A.
    # x is the initial guess (updated in place when it is a float array); n is the iteration count.
    r = b - A @ x                 # initial residual
    for i in range(n):
        s = r @ r
        if i == 0:
            p = r.copy()          # first direction: steepest descent (copy, so the in-place
                                  # residual update below does not overwrite the stored direction)
        else:
            p = r + s / _s * _p   # new direction: r plus a multiple of the previous one (cf. Proposition 4)
        _s, _p = s, p             # keep r^T r and the direction for the next iteration
        q = A @ p
        a = s / (q @ p)           # optimal step length from (5)
        x += a * p
        r -= a * q                # update the residual without an extra matrix-vector product
        print(np.abs(r).max())    # monitor the infinity norm of the residual
    return x
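
A quick usage sketch on a small SPD test system (the random matrix, sizes, and iteration count below are my own choices, purely to show the call):

# Hypothetical test problem: a random symmetric positive definite matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((100, 100))
A = B @ B.T + 100 * np.eye(100)
b = rng.standard_normal(100)
x = CG(A, b, np.zeros(100), 30)
print(np.abs(b - A @ x).max())   # final residual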

BiCG

Let \(\mathcal{K}_m=\left<r_0,\cdots,A^{m-1}r_0\right>\) and \(\mathcal{L}_m=\left<r_0,\cdots,\left(A^{T}\right)^{m-1}r_0\right>\). Let \(V_m=[v_1,\cdots,v_m]\) be a basis of \(\mathcal{K}_m\) and \(W_m=[w_1,\cdots,w_m]\) a basis of \(\mathcal{L}_m\), with the first \(j\) columns spanning \(\mathcal{K}_j\) and \(\mathcal{L}_j\) respectively. We further require that \(V_m\) and \(W_m\) satisfy \(W_m^TV_m=I_m\). Then \(W_m^TAV_m\) and \(V_m^TA^TW_m\) are both upper Hessenberg, and since one is the transpose of the other, \(T_m=W_m^TAV_m\) is tridiagonal,

\[T_m= \begin{bmatrix} \alpha_1 & \delta_2 \\ \beta_2 & \alpha_2 & \delta_3 \\ & \beta_3 & \alpha_3 & \ddots \\ & & \ddots & \ddots & \delta_m \\ & & & \beta_m & \alpha_m \end{bmatrix}\tag{18} \]

Then we compute the LU decomposition \(T_m=L_mU_m\), with the diagonal entries of \(L_m\) equal to 1,

\[T_m=\begin{bmatrix} 1 \\ \lambda_2 & 1 \\ & \lambda_3 & 1\\ & & \ddots & \ddots \\ & & & \lambda_m & 1 \end{bmatrix}\begin{bmatrix} \eta_1 & \delta_2 \\ & \eta_2 & \delta_3 \\ & & \eta_3 & \ddots \\ & & & \ddots & \delta_m\\ & & & & \eta_m \end{bmatrix}\tag{19} \]

where

\[\begin{aligned} \eta_k&=\alpha_k-{\beta_k\delta_k\over\eta_{k-1}},\quad\eta_1=\alpha_1\\ \lambda_k&={\beta_k\over\eta_{k-1}} \end{aligned}\tag{20} \]
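
As a small check of (20), here is a sketch computing the factors of a tridiagonal matrix from its three diagonals (the array names and 0-based storage are my own conventions):

def tridiag_lu(alpha, beta, delta):
    # alpha[k] stores alpha_{k+1}; beta[k] and delta[k] store beta_{k+2} and delta_{k+2}.
    m = len(alpha)
    eta = np.empty(m)
    lam = np.empty(m - 1)
    eta[0] = alpha[0]                                   # eta_1 = alpha_1
    for k in range(1, m):
        lam[k - 1] = beta[k - 1] / eta[k - 1]           # lambda_k = beta_k / eta_{k-1}
        eta[k] = alpha[k] - lam[k - 1] * delta[k - 1]   # eta_k = alpha_k - beta_k delta_k / eta_{k-1}
    return eta, lam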

Now consider finding \(x_m\in x_0+\mathcal{K}_m\) such that \(r_m=b-Ax_m\) is orthogonal to \(\mathcal{L}_m\). It follows immediately that \(r_m\in \mathcal{K}_{m+1}\) and \(r_m\perp\mathcal{L}_m\), hence \(r_m=kv_{m+1}\) for some scalar \(k\). Let \(x_m=x_0+V_my_m\); then \(W_m^Tr_m=W_m^T(r_0-AV_my_m)=0\), i.e. \(y_m=T_m^{-1}(\beta e_1)\), where \(\beta\) is defined by \(r_0=\beta v_1\), and so \(x_m=x_0+V_mU_m^{-1}L_m^{-1}(\beta e_1)\). Let \(P_m=V_mU_m^{-1}\) and \(Z_m=L_m^{-1}(\beta e_1)\); then, using

\[U_m^{-1}=\begin{bmatrix} U_{m-1}^{-1} & -\delta_m\eta_m^{-1}U_{m-1}^{-1}e_{m-1} \\ & \eta_m^{-1} \end{bmatrix}\\ L_m^{-1}=\begin{bmatrix} L_{m-1}^{-1} \\ -\lambda_me_{m-1}^TL_{m-1}^{-1} & 1 \end{bmatrix} \]

we can deduce that

\[P_m=[p_1,\cdots,p_m] \quad\text{where}\quad p_m = -{\delta_m\over\eta_m}\underbrace{V_{m-1}U_{m-1}^{-1}e_{m-1}}_{p_{m-1}}+{1\over\eta_m}v_m \\ Z_m=[z_1,\cdots,z_m]^T\quad\text{where}\quad z_m=-\lambda_mz_{m-1} \]

Thus \(x_m=x_{m-1}+z_mp_m\). Now define \(P^*_m=W_mL_m^{-T}\); then \((P^*_m)^TAP_m=L_m^{-1}W_m^TAV_mU_m^{-1}=L_m^{-1}T_mU_m^{-1}=I_m\). We can now derive the BiCG algorithm from the following relations

\[r_m^Tr'_n=p'^T_mAp_n=0\qquad \text{if}\quad m\ne n \]

where, writing primes for the dual (shadow) quantities generated with \(A^T\),

\[r_n=r_{n-1}-z_nAp_n\\ r_n'=r_{n-1}'-z'_nA^Tp'_n\\ p_n=r_{n-1} + \beta_np_{n-1}\\ p_n'=r_{n-1}'+\beta_n'p'_{n-1} \]

then

\[z_n=z_n'={r_{n-1}^Tr_{n-1}'\over p_n'^TAp_n}\\ \beta_n=\beta'_n={r'^T_{n-1}r_{n-1}\over r'^T_{n-2}r_{n-2} } \]

The resulting code:

def BCG(A, b, x, n):
    # Biconjugate gradient for a general (possibly nonsymmetric) matrix A.
    r1 = b - A @ x                   # residual
    r2 = np.array(r1)                # shadow residual, initialized as a copy of r1
    for i in range(n):
        s = r1 @ r2
        if i == 0:
            p1 = r1
            p2 = r2
        else:
            p1 = r1 + s / _s * _p1   # direction update with beta_n = s / _s
            p2 = r2 + s / _s * _p2   # shadow direction update
        _s, _p1, _p2 = s, p1, p2
        q1 = A   @ p1
        q2 = A.T @ p2
        a = s / (p2 @ q1)            # step length z_n
        x  = x + a * p1
        r1 = r1 - a * q1
        r2 = r2 - a * q2
        print(np.abs(r1).max())      # monitor the infinity norm of the residual
    return x
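
And a similar usage sketch for BCG on a nonsymmetric system (again a made-up, diagonally dominant test matrix of my own choosing):

# Hypothetical nonsymmetric test problem.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 100)) + 100 * np.eye(100)
b = rng.standard_normal(100)
x = BCG(A, b, np.zeros(100), 30)
print(np.abs(b - A @ x).max())   # final residual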
