$$
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Self-defined math definitions
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Math symbol commands
\newcommand{\intd}{\,{\rm d}} % Symbol 'd' used in integration, such as 'dx'
\newcommand{\diff}{{\rm d}} % Symbol 'd' used in differentiation
\newcommand{\Diff}{{\rm D}} % Symbol 'D' used in differentiation
\newcommand{\pdiff}{\partial} % Partial derivative
\newcommand{\DD}[2]{\frac{\diff}{\diff #2}\left( #1 \right)}
\newcommand{\Dd}[2]{\frac{\diff #1}{\diff #2}}
\newcommand{\PD}[2]{\frac{\pdiff}{\pdiff #2}\left( #1 \right)}
\newcommand{\Pd}[2]{\frac{\pdiff #1}{\pdiff #2}}
\newcommand{\rme}{{\rm e}} % Exponential e
\newcommand{\rmi}{{\rm i}} % Imaginary unit i
\newcommand{\rmj}{{\rm j}} % Imaginary unit j
\newcommand{\vect}[1]{\boldsymbol{#1}} % Vector typeset in bold and italic
\newcommand{\phs}[1]{\dot{#1}} % Scalar phasor
\newcommand{\phsvect}[1]{\boldsymbol{\dot{#1}}} % Vector phasor
\newcommand{\normvect}{\vect{n}} % Normal vector: n
\newcommand{\dform}[1]{\overset{\rightharpoonup}{\boldsymbol{#1}}} % Vector for differential form
\newcommand{\cochain}[1]{\overset{\rightharpoonup}{#1}} % Vector for cochain
\newcommand{\bigabs}[1]{\bigg\lvert#1\bigg\rvert} % Absolute value (single big vertical bar)
\newcommand{\Abs}[1]{\big\lvert#1\big\rvert} % Absolute value (single big vertical bar)
\newcommand{\abs}[1]{\lvert#1\rvert} % Absolute value (single vertical bar)
\newcommand{\bignorm}[1]{\bigg\lVert#1\bigg\rVert} % Norm (double big vertical bar)
\newcommand{\Norm}[1]{\big\lVert#1\big\rVert} % Norm (double big vertical bar)
\newcommand{\norm}[1]{\lVert#1\rVert} % Norm (double vertical bar)
\newcommand{\ouset}[3]{\overset{#3}{\underset{#2}{#1}}} % Over and under set
% Super/subscript for column index of a matrix, which is used in tensor analysis.
\newcommand{\cscript}[1]{\;\; #1}
% Star symbol used as prefix in front of a paragraph with no indent
\newcommand{\prefstar}{\noindent$\ast$ }
% Big vertical line restricting the function.
% Example: $u(x)\restrict_{\Omega_0}$
\newcommand{\restrict}{\big\vert}
% Math operators which are typeset in Roman font
\DeclareMathOperator{\sgn}{sgn} % Sign function
\DeclareMathOperator{\erf}{erf} % Error function
\DeclareMathOperator{\Bd}{Bd} % Boundary of a set, used in topology
\DeclareMathOperator{\Int}{Int} % Interior of a set, used in topology
\DeclareMathOperator{\rank}{rank} % Rank of a matrix
\DeclareMathOperator{\divergence}{div} % Divergence
\DeclareMathOperator{\curl}{curl} % Curl
\DeclareMathOperator{\grad}{grad} % Gradient
\DeclareMathOperator{\tr}{tr} % Trace
\DeclareMathOperator{\span}{span} % Span
$$


Adjoint operators $T_K$ and $T_{K^{*}}$ in BEM

In our last article, we introduced the four integral operators appearing in the boundary integral equations of BEM. Among them, the two compact operators \(T_K\) and \(T_{K^{*}}\) belong to Fredholm integral equations of the second kind and become strongly singular when the model geometry contains sharp corners. This article will show that

  1. \(T_K\) and \(T_{K^{*}}\) are a pair of adjoint operators in the variational formulation of the boundary integral equations;
  2. when they are represented as matrices via Galerkin discretization, one is the conjugate transpose of the other.

Definition of dual operators

First, let's review the definition of the dual operator, on which the definition of the adjoint operator relies.

Definition (Dual operators) Let \(X\) and \(Y\) be locally convex spaces and \(X_s'\) and \(Y_s'\) be their strong dual spaces. Let \(T\) be a linear operator from \(D(T) \subseteq X\) into \(Y\). Consider the pairs \(\{y', x'\} \in Y_s' \times X_s'\) that satisfy

\[\langle Tx, y' \rangle = \langle x, x' \rangle \quad \text{for all $x \in D(T)$}. \]

The element \(x'\) is uniquely determined by \(y'\) if and only if \(D(T)\) is dense in \(X\). In that case, the mapping \(T': y' \mapsto x'\) is a linear operator, and it is called the dual operator of \(T\).

When finite dimensional spaces are considered, i.e. \(X = \mathbb{C}^n\), \(Y = \mathbb{C}^m\), \(X_s' = \mathbb{C}^n\) and \(Y_s' = \mathbb{C}^m\), the elements of these spaces are represented as column vectors \(x_h, x_h' \in \mathbb{C}^n\), \(y_h' \in \mathbb{C}^m\), and the linear operators as matrices \(T_h \in \mathbb{C}^{m \times n}\) and \(T_h' \in \mathbb{C}^{n \times m}\). The application of a vector in the dual space to one in the original space can then be written as a matrix product as below, where the superscript \(^T\) denotes matrix or vector transpose:

\[\begin{aligned} \langle T_h x_h, y_h' \rangle &= y_h'^{T} T_h x_h \\ \langle x_h, T_h' y_h' \rangle &= (T_h' y_h')^T x_h = y_h'^T T_h'^T x_h \end{aligned}. \]

Because the above two terms are equal, we have \(T_h' = T_h^T\), i.e. in the finite dimensional case, the dual operator is the transpose of the original operator.
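As a quick numerical sanity check, the following NumPy sketch (a minimal illustration with an arbitrary random complex matrix) verifies this transpose relation using the bilinear pairing defined above:

```python
import numpy as np

# Minimal sketch: over C^n and C^m with the bilinear duality pairing
# <x, x'> = x'^T x (no conjugation), the dual of a matrix T_h is its
# plain transpose T_h^T.
rng = np.random.default_rng(0)
m, n = 4, 3
T_h = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
x_h = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y_h = rng.standard_normal(m) + 1j * rng.standard_normal(m)   # plays the role of y_h'

lhs = y_h @ (T_h @ x_h)        # <T_h x_h, y_h'> = y_h'^T T_h x_h
rhs = (T_h.T @ y_h) @ x_h      # <x_h, T_h' y_h'> with T_h' = T_h^T
assert np.isclose(lhs, rhs)
```

Note that no complex conjugation appears here: the duality pairing is bilinear, not sesquilinear.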

Definition of adjoint operators

Definition (Adjoint operators) Let \(X\), \(Y\) be Hilbert spaces and \(T\) a linear operator defined on \(D(T) \subseteq X\) into \(Y\). Assume \(D(T)\) is dense in \(X\), so that the dual operator exists, and let \(T'\) be the dual operator of \(T\), which satisfies

\[ \langle Tx ,y' \rangle = \langle x, T'y' \rangle \quad (\forall x \in D(T), y' \in D(T')). \]

Let \(J_X\) be the one-to-one norm-preserving conjugate linear correspondence \(X_s' \ni f \leftrightarrow x_f \in X\), and let \(J_Y\) be defined similarly as the correspondence \(Y_s' \ni g \leftrightarrow y_g \in Y\). Over the complex field, in the finite dimensional coordinate representation, the mappings \(J_X\) and \(J_Y\) along with their inverses \(J_X^{-1}\) and \(J_Y^{-1}\) act as componentwise complex conjugation.

Then we have

\[ \langle Tx, y' \rangle = y'(Tx) = (Tx, J_Y y') \; \text{and} \; \langle x, T'y' \rangle = (T'y')(x) = (x, J_X T' y') \]

and

\[ (Tx, J_Y y') = (x, J_X T' y'). \]

Let \(y = J_Y(y')\), hence \(y' = J_Y^{-1}(y)\) and

\[\begin{equation} (Tx, y) = (x, J_X T' J_Y^{-1} y). \label{eq:adjoint-operator-condition} \end{equation} \]

When \(Y = X\), let \(T^{*} = J_X T' J_Y^{-1} = J_X T' J_X^{-1}\), and we call it the adjoint operator of \(T\).

Remark

  1. Adjoint operators require the spaces to be Hilbert spaces, while dual operators only require the spaces to be locally convex.

  2. The angle brackets \(\langle \cdot, \cdot \rangle\) represent the application of the second component, which lies in the strong dual space, to the first component, which lies in the original space.

  3. The parentheses \((\cdot, \cdot)\) represent the inner product in the original space, whose second component is uniquely determined from an element of the strong dual space via the norm-preserving map \(J_X\) or \(J_Y\). This is ensured by a corollary of the famous Riesz representation theorem. Both are given below for reference, followed by a small finite dimensional illustration.

    Theorem (Riesz' representation theorem). Let \(X\) be a Hilbert space and \(f\) be a bounded linear functional on \(X\). Then there exists a uniquely determined vector \(y_f\) of \(X\) such that

\[\begin{equation} f(x) = (x, y_f) \quad \text{for all $x \in X$, and $\norm{f} = \norm{y_{f}}$}. \end{equation} \]

Conversely, any vector \(y \in X\) defines a bounded linear functional \(f_y\) on \(X\) by \(f_y(x) = (x, y)\), with \(\norm{f_y} = \norm{y}\).

Corollary Let \(X\) be a Hilbert space and \(X'\) be its dual space. Then there exists a norm-preserving bijective mapping between \(X\) and \(X'\).
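As a small finite dimensional illustration (a sketch assuming the inner product \((x, y) = \sum_i x_i \overline{y_i}\), linear in the first slot as used later in this article), the Riesz representer of the functional \(f(x) = \sum_i c_i x_i\) on \(\mathbb{C}^n\) is \(y_f = \overline{c}\):

```python
import numpy as np

# Finite dimensional illustration of the Riesz representation: with the
# inner product (x, y) = sum_i x_i * conj(y_i) (linear in the first slot),
# the functional f(x) = c^T x is represented by y_f = conj(c).
rng = np.random.default_rng(1)
n = 5
c = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

y_f = np.conj(c)                                            # Riesz representer of f
assert np.isclose(c @ x, np.conj(y_f) @ x)                  # f(x) = (x, y_f)
assert np.isclose(np.linalg.norm(c), np.linalg.norm(y_f))   # ||f|| = ||y_f||
```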

In the finite dimensional case, Equation \eqref{eq:adjoint-operator-condition} can be written as:

\[\begin{equation} \begin{aligned} (T_h x_h, y_h) &= (x_h, J_X T_h' J_Y^{-1} y_h) = (x_h, \overline{T_h' \overline{y_h}}) \\ &= (x_h, \overline{T_h'} y_h) = (x_h, \overline{T_h^T} y_h) = (x_h, T_h^* y_h) \end{aligned} \label{eq:adjoint-operator-finte-dimension-condition} \end{equation}. \]

Here we use an overline to represent complex conjugation. It can be seen that, in the finite dimensional case, the adjoint operator is the Hermitian conjugate (conjugate transpose) of the original operator.
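The following minimal sketch builds \(J_X T_h' J_Y^{-1}\) explicitly as conjugation, application of the transpose, and conjugation again, and confirms that it coincides with the Hermitian transpose and satisfies \((T_h x_h, y_h) = (x_h, T_h^{*} y_h)\):

```python
import numpy as np

# Minimal sketch: with the inner product (x, y) = y^H x, the composition
# J_X T_h' J_Y^{-1} (conjugate, apply the transpose, conjugate again)
# equals the Hermitian transpose T_h^H and satisfies (T_h x, y) = (x, T_h^* y).
rng = np.random.default_rng(2)
n = 4
T_h = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

T_star = lambda v: np.conj(T_h.T @ np.conj(v))    # J_X T_h' J_Y^{-1} applied to v
assert np.allclose(T_star(y), T_h.conj().T @ y)   # equals the Hermitian transpose
lhs = np.conj(y) @ (T_h @ x)                      # (T_h x, y) = y^H T_h x
rhs = np.conj(T_star(y)) @ x                      # (x, T_h^* y) = (T_h^* y)^H x
assert np.isclose(lhs, rhs)
```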

Summary of dual and adjoint operators

In the previous two sections, we presented the definitions of dual and adjoint operators. In the finite dimensional case, the dual operator corresponds to the matrix transpose, while the adjoint operator corresponds to the Hermitian transpose. The relationships between the original spaces, the strong dual spaces, and the operators connecting their elements can be illustrated by the commutative diagram below.

\[\require{AMScd} \begin{CD} X @>T>> Y \\ @AJ_XAA @AAJ_YA \\ X_s' @<<{T'}< Y_s' \end{CD} \]

The adjoint operator \(T^{*}\) is obtained by following the path \(Y \rightarrow Y_s' \rightarrow X_s' \rightarrow X\).

\(T_K\) and \(T_{K^{*}}\) in the variational formulation of boundary integral equations

In our previous article, the boundary integral equations were obtained in matrix form as

\[\begin{equation} \begin{pmatrix} \gamma_0[u] \\ \gamma_1[t] \end{pmatrix} = \begin{pmatrix} \frac{1}{2}I - T_K & V \\ D & \frac{1}{2}I + T_{K^{*}} \end{pmatrix} \begin{pmatrix} \gamma_0[u] \\ \gamma_1[t] \end{pmatrix} \quad (x \in \Gamma). \label{eq:boundary-integral-equations-in-matrix-form} \end{equation} \]

We should note that these two equations hold for all \(x\) on \(\Gamma\). If we use the first row in Equation \eqref{eq:boundary-integral-equations-in-matrix-form} to match the Dirichlet data on \(\Gamma_D\) and the second row to match the Neumann data on \(\Gamma_N\), it is natural to split the Dirichlet trace \(\gamma_0[u]\) into two parts: the known data \(g_D = \gamma_0[u]\big\vert_{\Gamma_D}\) on \(\Gamma_D\) and the unknown data \(\varphi_N = \gamma_0[u]\big\vert_{\Gamma_N}\) on \(\Gamma_N\); likewise, the Neumann trace \(\gamma_1[t]\) comprises the known data \(0\) on \(\Gamma_N\) and the unknown data \(t_D = \gamma_1[t]\big\vert_{\Gamma_D}\) on \(\Gamma_D\). In addition, we need to remember that the operator \(\frac{1}{2}I\) in the first row comes from the jump of the double layer potential when the evaluation point approaches the Dirichlet boundary \(\Gamma_D\), so it applies only to \(g_D\), not to \(\varphi_N\). Similarly, the operator \(\frac{1}{2}I\) in the second row applies only to the datum \(0\) on \(\Gamma_N\), not to \(t_D\).

In this way, Equation \eqref{eq:boundary-integral-equations-in-matrix-form} becomes

\[\begin{equation} \begin{pmatrix} -V & T_K \\ T_{K^{*}} & D \end{pmatrix} \begin{pmatrix} t_D \\ \varphi_N \end{pmatrix}= \begin{pmatrix} -\frac{1}{2}g_D - T_K(g_D) \\ -D(g_D) \end{pmatrix}. \end{equation} \]
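For clarity, the intermediate algebra for the first row is as follows: matching the first row of Equation \eqref{eq:boundary-integral-equations-in-matrix-form} on \(\Gamma_D\) and substituting the split traces gives

\[ g_D = \frac{1}{2}g_D - T_K(g_D) - T_K(\varphi_N) + V(t_D), \]

which rearranges to the first row of the system just obtained, \(-V(t_D) + T_K(\varphi_N) = -\frac{1}{2}g_D - T_K(g_D)\). Matching the second row on \(\Gamma_N\) gives \(0 = D(g_D) + D(\varphi_N) + T_{K^{*}}(t_D)\), which is the second row.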

Now it is easy to see that the compact operator \(T_K\) maps a function defined on \(\Gamma_N\) to one defined on \(\Gamma_D\), while \(T_{K^{*}}\) maps a function defined on \(\Gamma_D\) to one defined on \(\Gamma_N\). This already hints at their adjoint relationship. Before showing this property in detail, however, we clarify the spaces on which \(T_K\) and \(T_{K^{*}}\) operate, for which the following two trace theorems are presented.

Theorem (Dirichlet trace theorem) Let \(\Omega\) be a bounded Lipschitz domain in \(\mathbb{R}^3\). Provided \(1/p < s \leq 1\), the Dirichlet trace operator \(\gamma_0\) defined on \(C^{\infty}(\bar{\Omega})\) has a unique continuous extension as a linear operator from \(W^{s,p}(\Omega)\) onto \(W^{s-1/p,p}(\pdiff\Omega)\). Specifically, when \(s = 1\) and \(p = 2\), we have \(\gamma_{0}: H^1(\Omega) \rightarrow H^{1/2}(\pdiff\Omega)\).

Theorem (Neumann trace theorem) Let \(\Omega\) be a bounded Lipschitz domain in \(\mathbb{R}^3\) with unit outward normal \(\vect{n}\). Then the Neumann trace operator \(\gamma_1\) defined on \((C^{\infty}(\bar{\Omega}))^3\) can be extended by continuity to a continuous linear map from \(H(\divergence; \Omega)\) onto \(H^{-1/2}(\pdiff\Omega)\).

If we require \(\nabla u \in H(\divergence; \Omega)\), so that the Neumann trace \(t = \pdiff_{\vect{n}}u\) is well defined, which implies a finite excitation source charge in the domain, i.e. \(-\triangle u = f\) is square integrable; and \(u \in H^1(\Omega)\), which implies finite electric field energy in the domain, i.e. \(-\nabla u = \boldsymbol{E}\) is square integrable, then according to the above trace theorems we have \(\varphi_N \in H^{1/2}(\Gamma_N)\), \(t_D \in H^{-1/2}(\Gamma_D)\) and

\[\begin{equation} \begin{aligned} & T_K: H^{1/2}(\Gamma_N) \rightarrow H^{1/2}(\Gamma_D) \\ & T_{K^{*}}: H^{-1/2}(\Gamma_D) \rightarrow H^{-1/2}(\Gamma_N) \end{aligned}, \end{equation} \]

where \(H^{-1/2}\) is the dual space of \(H^{1/2}\).

By selecting test functions \(\psi \in H^{-1/2}(\Gamma_D)\) and \(\xi \in H^{1/2}(\Gamma_N)\), we can obtain the variational formulation of the boundary integral equations:

\[\begin{equation} \begin{aligned} \langle -V(t_D), \psi \rangle + \langle T_K(\varphi_N), \psi \rangle &= \langle -\frac{1}{2}g_D, \psi \rangle + \langle -T_K(g_D), \psi \rangle \\ \langle \xi, T_{K^{*}}(t_D) \rangle + \langle \xi, D(\varphi_N) \rangle &= \langle \xi, -D(g_D) \rangle \end{aligned}, \label{eq:variational-formulation-of-bie} \end{equation} \]

where the angle brackets \(\langle \cdot, \cdot \rangle\) represent applying the element in the dual space \(H^{-1/2}\) to the one in the original space \(H^{1/2}\). If we use \(x\) to represent the coordinate on \(\Gamma_D\) and \(y\) that on \(\Gamma_N\), the two bilinear forms related to \(T_K\) and \(T_{K^{*}}\) on the left hand side of Equation \eqref{eq:variational-formulation-of-bie} can be expanded as below:

\[\begin{equation} \begin{aligned} \langle (T_K \varphi_N(y))(x), \psi(x) \rangle_{\Gamma_D(x)} &= \int_{\Gamma_D(x)} \overline{\psi(x)} \left[ \int_{\Gamma_N(y)} K(x, y) \varphi_N(y) \intd o(y) \right] \intd o(x) \\ &= \int_{\Gamma_D(x)} \int_{\Gamma_N(y)} \overline{\psi(x)} K(x, y) \varphi_N(y) \intd o(y) \intd o(x) \\ \langle \xi(y), (T_{K^{*}} t_D(x))(y) \rangle_{\Gamma_N(y)} &= \int_{\Gamma_N(y)} \left[ \int_{\Gamma_D(x)} \overline{K^{*}(y, x)} \overline{t_D(x)} \intd o(x) \right] \xi(y) \intd o(y) \\ &= \int_{\Gamma_N(y)} \int_{\Gamma_D(x)} \overline{K(x, y)} \overline{t_D(x)} \xi(y) \intd o(x) \intd o(y) \\ &= \int_{\Gamma_D(x)} \int_{\Gamma_N(y)} \overline{t_D(x)} K(x, y) \xi(y) \intd o(y) \intd o(x) \end{aligned}, \end{equation} \]

where the overline represents complex conjugation when the adopted scalar field is \(\mathbb{C}\). Here we note that because the integral kernel \(K(x, y)\) depends only on the geometric coordinates, it is real valued. In addition, we have also used the property \(K^{*}(y, x) = K(x, y)\), which was described in the previous article. Finally, by replacing \(\varphi_N(y)\) with \(\xi(y)\) and \(t_D(x)\) with \(\psi(x)\) in the above, we can show that \(T_K\) and \(T_{K^{*}}\) are a pair of adjoint operators, because

\[\begin{equation} \langle (T_K \xi(y))(x), \psi(x) \rangle_{\Gamma_D(x)} = \langle \xi(y), (T_{K^{*}} \psi(x))(y) \rangle_{\Gamma_N(y)}. \end{equation} \]
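To make this identity concrete, the following quadrature sketch models \(\Gamma_D\) and \(\Gamma_N\) as two disjoint straight segments and uses the placeholder real kernel \(K(x, y) = 1/\abs{x - y}\) (not the actual double layer kernel) together with arbitrary complex densities; both sides of the identity reduce to the same double sum:

```python
import numpy as np

# Quadrature sketch of the adjoint identity. Gamma_D and Gamma_N are modelled
# as two disjoint straight segments in the plane, the kernel K(x, y) = 1/|x - y|
# is a real-valued placeholder, and psi, xi are arbitrary complex densities.
nx, ny = 80, 100
x = np.stack([np.linspace(0.0, 1.0, nx), np.zeros(nx)], axis=1)   # points on Gamma_D
y = np.stack([np.zeros(ny), np.linspace(1.0, 2.0, ny)], axis=1)   # points on Gamma_N
wx, wy = 1.0 / nx, 1.0 / ny                                       # quadrature weights

K = 1.0 / np.linalg.norm(x[:, None, :] - y[None, :, :], axis=2)   # K(x_i, y_j), real
psi = np.exp(2j * np.pi * x[:, 0])                                # test function on Gamma_D
xi = np.cos(3.0 * y[:, 1]) + 1j * y[:, 1]                         # density on Gamma_N

TK_xi = wy * (K @ xi)                         # (T_K xi)(x_i): integral over Gamma_N
lhs = wx * np.sum(np.conj(psi) * TK_xi)       # <T_K xi, psi>_{Gamma_D}
conj_TKs_psi = wx * (K.T @ np.conj(psi))      # conj((T_K* psi)(y_j)); K is real
rhs = wy * np.sum(conj_TKs_psi * xi)          # <xi, T_K* psi>_{Gamma_N}
assert np.isclose(lhs, rhs)
```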

Matrix formulation of \(T_K\) and \(T_{K^{*}}\) in Galerkin discretization

By applying Galerkin discretization to the variational formulation of the boundary integral equations in \eqref{eq:variational-formulation-of-bie}, we pass from infinite dimensional spaces to finite dimensional ones, and the matrix representations of the integral operators \(T_K\) and \(T_{K^{*}}\) can be obtained. This section introduces the procedure of this discretization.

Let \(\varphi_h(y) = \sum_{i=1}^n a_i \varphi_i(y)\) be a finite dimensional approximation of \(\varphi_N(y)\) and \(\psi_h(x) = \sum_{j=1}^m b_j p_j(x)\) be that of \(t_D(x)\), where \(\{ \varphi_i(y) \}_{i=1}^n\) and \(\{ p_j(x) \}_{j=1}^m\) are bases of finite dimensional subspaces of \(H^{1/2}(\Gamma_N)\) and \(H^{-1/2}(\Gamma_D)\) respectively. Then the bilinear forms related to \(T_K\) and \(T_{K^{*}}\) in Equation \eqref{eq:variational-formulation-of-bie} can be expanded as

\[\begin{equation} \begin{aligned} \langle T_K \varphi_h, \psi_{h} \rangle_{\Gamma_D(x)} &= \left\langle T_K \left( \sum_{j=1}^n a_j \varphi_j(y) \right), \sum_{i=1}^m b_i p_i(x) \right\rangle_{\Gamma_D(x)} \\ &= \sum_{i,j} \langle T_K \varphi_j(y), p_i(x) \rangle_{\Gamma_D(x)} a_j \bar{b}_i \\ \langle \varphi_h, T_{K^{*}}\psi_h \rangle_{\Gamma_N(y)} &= \left\langle \sum_{i=1}^n a_i\varphi_i(y), T_{K^{*}} \left( \sum_{j=1}^m b_j p_j(x) \right) \right\rangle_{\Gamma_N(y)} \\ &= \sum_{i, j} \langle \varphi_i(y), T_{K^{*}} p_j(x) \rangle_{\Gamma_N(y)} a_i \bar{b}_j \end{aligned}. \end{equation} \]

Note that here we adopt the convention that the subscript \(i\) is assigned to the test function while \(j\) is assigned to the basis function. Then let

\[\tilde{\varphi}_h = (a_1, \cdots, a_n)^T,\; \tilde{\psi}_h = (b_1, \cdots, b_m)^T, \]

\[\left(\widetilde{T}_{K}\right)_{ij} = \langle T_K \varphi_j(y), p_i(x) \rangle_{\Gamma_D(x)},\; \left(\widetilde{T}_{K^{*}}\right)_{ij} = \overline{\langle \varphi_i(y), T_{K^{*}} p_j(x) \rangle_{\Gamma_N(y)}} \]

and the matrix formulation can be obtained as

\[\begin{equation} \begin{aligned} \langle T_K\varphi_h, \psi_h \rangle_{\Gamma_D(x)} &= \overline{\tilde{\psi}}_h^{T} \widetilde{T}_K \tilde{\varphi}_h \\ \langle \varphi_h, T_{K^{*}} \psi_h \rangle_{\Gamma_N(y)} &= \overline{\left( \widetilde{T}_{K^{*}} \tilde{\psi}_h \right)}^T \tilde{\varphi}_h = \overline{\tilde{\psi}}_h^{T} \overline{\widetilde{T}}_{K^{*}}^T \tilde{\varphi}_h \end{aligned}. \end{equation} \]

Because the adjointness shown above makes the two bilinear forms equal for all coefficient vectors \(\tilde{\varphi}_h\) and \(\tilde{\psi}_h\), we have \(\widetilde{T}_K = \overline{\widetilde{T}}_{K^{*}}^T\), i.e. the matrix representation of \(T_{K^{*}}\) is the conjugate transpose of that of \(T_K\).
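The relation can also be verified by direct assembly. The sketch below makes simplifying assumptions: piecewise constant bases on two flat segments, one-point quadrature per element, and the same placeholder real kernel as before; it assembles \(\widetilde{T}_K\) and \(\widetilde{T}_{K^{*}}\) from the entry definitions above and checks the conjugate transpose relation:

```python
import numpy as np

# Galerkin sketch under simplifying assumptions: piecewise constant bases on
# two flat segments, one-point quadrature per element, and a placeholder real
# kernel. Assemble the matrices defined above and check the relation.
m, n = 6, 8                                   # m elements p_i on Gamma_D, n elements phi_j on Gamma_N
hx, hy = 1.0 / m, 1.0 / n                     # element sizes
xc = np.stack([(np.arange(m) + 0.5) * hx, np.zeros(m)], axis=1)        # midpoints on Gamma_D
yc = np.stack([np.zeros(n), 1.0 + (np.arange(n) + 0.5) * hy], axis=1)  # midpoints on Gamma_N

K = 1.0 / np.linalg.norm(xc[:, None, :] - yc[None, :, :], axis=2)      # K(x_i, y_j), real

TK_mat = hx * hy * K          # (T_K~)_{ij}  = <T_K phi_j, p_i>_{Gamma_D}, shape (m, n)
TKs_mat = hx * hy * K.T       # (T_K*~)_{ij} = conj(<phi_i, T_K* p_j>_{Gamma_N}), shape (n, m)

assert np.allclose(TKs_mat, TK_mat.conj().T)  # matrix of T_K* = conjugate transpose of that of T_K
```

With a real kernel the conjugation is of course trivial, so the check mainly exercises the index bookkeeping; the conjugate transpose relation itself follows from the adjointness argument above.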

Summary

This article clarifies:

  1. Based on the standard definitions in functional analysis, the integral operators \(T_K\) and \(T_{K^{*}}\) obtained from the representation formula are a pair of adjoint operators, which is shown in the variational formulation after applying test functions to the boundary integral equations;
  2. In the finite dimensional approximation of the boundary integral equations using Galerkin discretization, matrix representations can be obtained for the two operators \(T_K\) and \(T_{K^{*}}\). Over the complex scalar field, these finite dimensional versions of the adjoint operators are related by conjugate transposition: one matrix is the conjugate transpose of the other.