$$
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Self-defined math definitions
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Math symbol commands
\newcommand{\intd}{\,{\rm d}} % Symbol 'd' used in integration, such as 'dx'
\newcommand{\diff}{{\rm d}} % Symbol 'd' used in differentiation
\newcommand{\Diff}{{\rm D}} % Symbol 'D' used in differentiation
\newcommand{\pdiff}{\partial} % Partial derivative
\newcommand{\DD}[2]{\frac{\diff}{\diff #2}\left( #1 \right)}
\newcommand{\Dd}[2]{\frac{\diff #1}{\diff #2}}
\newcommand{\PD}[2]{\frac{\pdiff}{\pdiff #2}\left( #1 \right)}
\newcommand{\Pd}[2]{\frac{\pdiff #1}{\pdiff #2}}
\newcommand{\rme}{{\rm e}} % Exponential e
\newcommand{\rmi}{{\rm i}} % Imaginary unit i
\newcommand{\rmj}{{\rm j}} % Imaginary unit j
\newcommand{\vect}[1]{\boldsymbol{#1}} % Vector typeset in bold and italic
\newcommand{\phs}[1]{\dot{#1}} % Scalar phasor
\newcommand{\phsvect}[1]{\boldsymbol{\dot{#1}}} % Vector phasor
\newcommand{\normvect}{\vect{n}} % Normal vector: n
\newcommand{\dform}[1]{\overset{\rightharpoonup}{\boldsymbol{#1}}} % Vector for differential form
\newcommand{\cochain}[1]{\overset{\rightharpoonup}{#1}} % Vector for cochain
\newcommand{\bigabs}[1]{\bigg\lvert#1\bigg\rvert} % Absolute value (big vertical bars)
\newcommand{\Abs}[1]{\big\lvert#1\big\rvert} % Absolute value (medium vertical bars)
\newcommand{\abs}[1]{\lvert#1\rvert} % Absolute value (single vertical bars)
\newcommand{\bignorm}[1]{\bigg\lVert#1\bigg\rVert} % Norm (big double vertical bars)
\newcommand{\Norm}[1]{\big\lVert#1\big\rVert} % Norm (medium double vertical bars)
\newcommand{\norm}[1]{\lVert#1\rVert} % Norm (double vertical bars)
\newcommand{\ouset}[3]{\overset{#3}{\underset{#2}{#1}}} % Over- and underset
% Super/subscript for the column index of a matrix, used in tensor analysis
\newcommand{\cscript}[1]{\;\; #1}
% Star symbol used as a prefix in front of a paragraph with no indent
\newcommand{\prefstar}{\noindent$\ast$ }
% Big vertical line restricting a function, e.g. $u(x)\restrict_{\Omega_0}$
\newcommand{\restrict}{\big\vert}
% Math operators typeset in Roman font
\DeclareMathOperator{\sgn}{sgn} % Sign function
\DeclareMathOperator{\erf}{erf} % Error function
\DeclareMathOperator{\Bd}{Bd} % Boundary of a set, used in topology
\DeclareMathOperator{\Int}{Int} % Interior of a set, used in topology
\DeclareMathOperator{\rank}{rank} % Rank of a matrix
\DeclareMathOperator{\divergence}{div} % Divergence
\DeclareMathOperator{\curl}{curl} % Curl
\DeclareMathOperator{\grad}{grad} % Gradient
\DeclareMathOperator{\tr}{tr} % Trace
\DeclareMathOperator{\span}{span} % Span
$$


James Munkres Topology: Theorem 20.3 and metric equivalence

Proof of Theorem 20.3

Theorem 20.3 The topologies on \(\mathbb{R}^n\) induced by the euclidean metric \(d\) and the square metric \(\rho\) are the same as the product topology on \(\mathbb{R}^n\).

Proof: a) Prove that the two metrics mutually bound each other.

Because
\[
\rho(\vect{x}, \vect{y}) = \max_{1 \leq i \leq n} \abs{x_i - y_i} = \left( \max_{1 \leq i \leq n} (x_i - y_i)^2 \right)^{\frac{1}{2}}
\]
and the scalar function \(f(x) = x^{\frac{1}{2}}\) is increasing for \(x \geq 0\), from
\[
\max_{1 \leq i \leq n} (x_i - y_i)^2 \leq \sum_{i=1}^n (x_i - y_i)^2,
\]
we have
\[
\left( \max_{1 \leq i \leq n} (x_i - y_i)^2 \right)^{\frac{1}{2}} \leq \left( \sum_{i=1}^n (x_i - y_i)^2 \right)^{\frac{1}{2}}.
\]
Hence,
\[
\rho(\vect{x}, \vect{y}) \leq d(\vect{x}, \vect{y}).
\]
Meanwhile,
\[
\left( \sum_{i=1}^n (x_i - y_i)^2 \right)^{\frac{1}{2}} \leq \left( n \max_{1 \leq i \leq n} (x_i - y_i)^2 \right)^{\frac{1}{2}} = \left( n \left( \max_{1 \leq i \leq n} \abs{x_i - y_i} \right)^2 \right)^{\frac{1}{2}}.
\]
Therefore,
\[
d(\vect{x}, \vect{y}) \leq \sqrt{n} \rho(\vect{x}, \vect{y}).
\]
Summarizing the above, we have
\[
\rho(\vect{x}, \vect{y}) \leq d(\vect{x}, \vect{y}) \leq \sqrt{n} \rho(\vect{x}, \vect{y})
\]
and its equivalent form
\[
\frac{1}{\sqrt{n}} d(\vect{x}, \vect{y}) \leq \rho(\vect{x}, \vect{y}) \leq d(\vect{x}, \vect{y}).
\]
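This two-sided bound is easy to spot-check numerically. Below is a minimal sketch (assuming NumPy is available; the function names `d` and `rho` simply mirror the notation of the proof) that samples random point pairs in \(\mathbb{R}^n\) and verifies \(\rho \leq d \leq \sqrt{n}\,\rho\):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

def d(x, y):
    """Euclidean metric on R^n."""
    return np.sqrt(np.sum((x - y) ** 2))

def rho(x, y):
    """Square (maximum) metric on R^n."""
    return np.max(np.abs(x - y))

for _ in range(1000):
    x, y = rng.normal(size=n), rng.normal(size=n)
    # Two-sided bound from part a); small tolerance guards against round-off.
    assert rho(x, y) <= d(x, y) <= np.sqrt(n) * rho(x, y) + 1e-12

print("rho <= d <= sqrt(n) * rho holds on all samples")
```

Such a check can only falsify the bound on samples, never prove it; the proof above is what establishes it for all points.
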
b) Prove the two metrics generate the same topology.

For all \(\vect{x} \in \mathbb{R}^n\) and \(\varepsilon > 0\), because \(d(\vect{x}, \vect{y}) \leq \sqrt{n} \rho(\vect{x}, \vect{y})\), whenever \(\sqrt{n} \rho(\vect{x}, \vect{y}) < \varepsilon\) we also have \(d(\vect{x}, \vect{y}) < \varepsilon\). This means the open ball \(B_{\rho}(\vect{x}, \frac{\varepsilon}{\sqrt{n}})\) in the topology induced by \(\rho\) is contained in the open ball \(B_d(\vect{x}, \varepsilon)\) in the topology induced by \(d\). By Lemma 20.2, the square metric topology is finer than the euclidean metric topology.

Meanwhile, because \(\rho(\vect{x}, \vect{y}) \leq d(\vect{x}, \vect{y})\), whenever \(d(\vect{x}, \vect{y}) < \varepsilon\) we also have \(\rho(\vect{x}, \vect{y}) < \varepsilon\). Hence the open ball \(B_d(\vect{x}, \varepsilon)\) is contained in the open ball \(B_{\rho}(\vect{x}, \varepsilon)\), which proves that the euclidean metric topology is finer than the square metric topology.

Therefore, the two metrics generate the same topology.
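The two ball inclusions used in this argument can also be illustrated by a small Monte-Carlo check (a sketch under the assumption that NumPy is available; an illustration, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps = 4, 0.5
x = rng.normal(size=n)

for _ in range(1000):
    # A point y of B_rho(x, eps/sqrt(n)): every coordinate differs from x_i
    # by less than eps/sqrt(n), so rho(x, y) < eps/sqrt(n).
    y = x + rng.uniform(-eps / np.sqrt(n), eps / np.sqrt(n), size=n)
    assert np.linalg.norm(y - x) < eps          # hence y lies in B_d(x, eps)

    # A point z of B_d(x, eps): a random direction scaled to length < eps.
    direction = rng.normal(size=n)
    z = x + direction / np.linalg.norm(direction) * rng.uniform(0, eps)
    assert np.max(np.abs(z - x)) < eps          # hence z lies in B_rho(x, eps)

print("both ball inclusions hold on all samples")
```
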

Comment It can be seen that for a given open ball radius, the larger the metric, the smaller the open ball it generates, in the sense of set inclusion.

c) Prove the topology induced by \(\rho\) is the same as the product topology on \(\mathbb{R}^n\).

Let \(\vect{B} = \prod_{i=1}^n (a_i, b_i)\) be a basis element for \(\mathbb{R}^n\) with the product topology. For all \(\vect{x} \in \vect{B}\) and each \(i \in \{1, \cdots, n\}\), there exists an \(\varepsilon_i > 0\) such that \(x_i \in (x_i - \varepsilon_i, x_i + \varepsilon_i) \subset (a_i, b_i)\). Letting \(\varepsilon = \min_{1 \leq i \leq n} \varepsilon_i\), we have \(x_i \in (x_i - \varepsilon, x_i + \varepsilon) \subset (a_i, b_i)\). Because \(B_{\rho}(\vect{x}, \varepsilon) = \prod_{i=1}^n (x_i - \varepsilon, x_i + \varepsilon)\), we have \(\vect{x} \in B_{\rho}(\vect{x}, \varepsilon) \subset \vect{B}\). Hence, the square metric topology is finer than the product topology on \(\mathbb{R}^n\).

On the other hand, let \(B_{\rho}(\vect{x}, \varepsilon)\) be an arbitrary open ball in \(\mathbb{R}^n\) with the square metric topology; it is itself a basis element for the product topology. Therefore, the product topology is finer than the square metric topology.

Finally, the two metrics generate the same topology as the product topology on \(\mathbb{R}^n\).

Comment It should be noted that although \(B_{\rho}(\vect{x}, \varepsilon) = \prod_{i=1}^n (x_i - \varepsilon, x_i + \varepsilon)\), we do not have \(B_{\bar{\rho}}(\vect{x}, \varepsilon) = \prod_{i=1}^{\infty} (x_i - \varepsilon, x_i + \varepsilon)\), where \(\bar{\rho}\) is the uniform metric on \(\mathbb{R}^{\omega}\). This point has been mentioned in this post.

Remark This theorem can be generalized as follows.

If two metrics \(d_1\) and \(d_2\) on a space \(X\) mutually bound each other, i.e. there exist positive constants \(C_1\) and \(C_2\) such that for all \(x\) and \(y\) in \(X\), \(C_1 d_1(x, y) \leq d_2(x, y) \leq C_2 d_1(x, y)\), then the two metrics induce the same topology on \(X\).

Two such metrics are said to be equivalent in a topological sense, and this “equivalence” can be understood as follows. We have already seen in this post that in a topological space, the concept of convergence is defined by using a collection of nested open sets as rulers for measuring “distance”, even before any metric is established. The equivalence of two metrics in the topological sense simply means that the convergence behaviors in the topologies induced by these two metrics are the same.
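As a concrete instance of this criterion, the taxicab metric \(d_1(\vect{x}, \vect{y}) = \sum_{i=1}^n \abs{x_i - y_i}\) and the euclidean metric \(d_2 = d\) on \(\mathbb{R}^n\) bound each other with \(C_1 = \frac{1}{\sqrt{n}}\) and \(C_2 = 1\), so they induce the same topology. The sketch below (assuming NumPy; `mutually_bounded` is a hypothetical helper written for this post, not a library function) checks the bounds on random samples:

```python
import numpy as np

def mutually_bounded(d1, d2, C1, C2, pairs, tol=1e-12):
    """Check C1*d1(x, y) <= d2(x, y) <= C2*d1(x, y) on the given sample pairs."""
    return all(C1 * d1(x, y) <= d2(x, y) + tol and d2(x, y) <= C2 * d1(x, y) + tol
               for x, y in pairs)

rng = np.random.default_rng(2)
n = 6
pairs = [(rng.normal(size=n), rng.normal(size=n)) for _ in range(500)]

d_taxi = lambda x, y: np.sum(np.abs(x - y))          # taxicab metric d_1
d_eucl = lambda x, y: np.sqrt(np.sum((x - y) ** 2))  # euclidean metric d_2

# (1/sqrt(n)) * d_taxi <= d_eucl <= 1 * d_taxi, so C1 = 1/sqrt(n), C2 = 1.
print(mutually_bounded(d_taxi, d_eucl, 1 / np.sqrt(n), 1.0, pairs))  # True
```

Note that the constants \(C_1\) and \(C_2\) must be uniform over all point pairs, which is exactly what the quantifier order in the remark requires.
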

Examples of equivalent metrics

In linear algebra, we have already witnessed examples of equivalent metrics, namely those induced by the corresponding norms on vectors or matrices.

For all \(\vect{x} \in \mathbb{R}^n\), the following is a list of commonly adopted vector norms:

  1. 1-norm: \(\norm{\vect{x}}_1 = \sum_{i = 1}^n \abs{x_i}\).
  2. 2-norm: \(\norm{\vect{x}}_2 = \left( \sum_{i=1}^n \abs{x_i}^2 \right)^{\frac{1}{2}}\).
  3. \(\infty\)-norm: \(\norm{\vect{x}}_{\infty} = \max_{1 \leq i \leq n} \abs{x_i}\).

It is easy to prove that these norms are equivalent as shown below, which implies the equivalence of the metrics they induce and also of the induced topologies on \(\mathbb{R}^n\).
\[
\begin{align*}
\norm{\vect{x}}_{\infty} \leq & \norm{\vect{x}}_1 \leq n \norm{\vect{x}}_{\infty} \\
\norm{\vect{x}}_{\infty} \leq & \norm{\vect{x}}_2 \leq \sqrt{n} \norm{\vect{x}}_{\infty} \\
\frac{1}{\sqrt{n}} \norm{\vect{x}}_2 \leq & \norm{\vect{x}}_1 \leq n \norm{\vect{x}}_2
\end{align*}.
\]
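These three chains of inequalities can be spot-checked numerically. The sketch below (assuming NumPy) evaluates the three norms with `numpy.linalg.norm` on random vectors:

```python
import numpy as np

rng = np.random.default_rng(3)
n, tol = 8, 1e-12

for _ in range(1000):
    x = rng.normal(size=n)
    n1 = np.linalg.norm(x, 1)        # 1-norm
    n2 = np.linalg.norm(x, 2)        # 2-norm
    ninf = np.linalg.norm(x, np.inf) # infinity-norm
    assert ninf <= n1 + tol and n1 <= n * ninf + tol
    assert ninf <= n2 + tol and n2 <= np.sqrt(n) * ninf + tol
    assert n2 / np.sqrt(n) <= n1 + tol and n1 <= n * n2 + tol

print("all three equivalence chains hold on all samples")
```
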
Based on the definition of vector norms, the corresponding norms for matrices, which are treated as linear operators on the vector space, can also be induced. For all \(A \in \mathbb{R}^{n \times n}\), commonly used matrix norms are the following (a numerical check is given after the list):

  1. 1-norm: \(\norm{A}_1 = \sup_{\vect{x} \in \mathbb{R}^n, \vect{x} \neq \vect{0}} \frac{\norm{A \vect{x}}_1}{\norm{\vect{x}}_1} = \max_{1 \leq j \leq n} \sum_{i=1}^n \abs{a_{ij}}\), which is the maximum column sum;
  2. 2-norm: \(\norm{A}_2 = \sup_{\vect{x} \in \mathbb{R}^n, \vect{x} \neq \vect{0}} \frac{\norm{A \vect{x}}_2}{\norm{\vect{x}}_2} = \sqrt{\rho(A^T A)}\), where \(\rho\) represents the spectral radius, i.e. the largest eigenvalue of \(A^T A\) (all of its eigenvalues are nonnegative);
  3. \(\infty\)-norm: \(\norm{A}_{\infty} = \sup_{\vect{x} \in \mathbb{R}^n, \vect{x} \neq \vect{0}} \frac{\norm{A \vect{x}}_{\infty}}{\norm{\vect{x}}_{\infty}} = \max_{1 \leq i \leq n} \sum_{j=1}^n \abs{a_{ij}}\), which is the maximum row sum.
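
The closed-form expressions in the list above can be compared against NumPy's built-in induced norms; `np.linalg.norm(A, ord)` with `ord` equal to `1`, `2`, or `np.inf` computes exactly these operator norms. A minimal sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
A = rng.normal(size=(n, n))

max_col_sum = np.max(np.sum(np.abs(A), axis=0))               # candidate ||A||_1
max_row_sum = np.max(np.sum(np.abs(A), axis=1))               # candidate ||A||_inf
sqrt_spectral = np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))  # candidate ||A||_2

assert np.isclose(np.linalg.norm(A, 1), max_col_sum)
assert np.isclose(np.linalg.norm(A, np.inf), max_row_sum)
assert np.isclose(np.linalg.norm(A, 2), sqrt_spectral)
print("closed forms agree with numpy's induced norms")
```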

The equivalence of these matrix norms can be directly derived from the equivalence of vector norms. For example, because \(\norm{A\vect{x}}_1 \leq n \norm{A\vect{x}}_2\) and \(\norm{\vect{x}}_1 \geq \frac{1}{\sqrt{n}} \norm{\vect{x}}_2\), we have
\[
\frac{\norm{A\vect{x}}_1}{\norm{\vect{x}}_1} \leq \frac{n \norm{A\vect{x}}_2}{\frac{1}{\sqrt{n}}\norm{\vect{x}}_2} = n\sqrt{n}\frac{\norm{A\vect{x}}_2}{\norm{\vect{x}}_2}.
\]
From \(\norm{A\vect{x}}_1 \geq \frac{1}{\sqrt{n}} \norm{A\vect{x}}_2\) and \(\norm{\vect{x}}_1 \leq n \norm{\vect{x}}_2\), we have
\[
\frac{1}{n\sqrt{n}}\frac{\norm{A\vect{x}}_2}{\norm{\vect{x}}_2} \leq \frac{\norm{A\vect{x}}_1}{\norm{\vect{x}}_1}.
\]
Taking the supremum over all nonzero \(\vect{x}\) on both sides of these two inequalities, we obtain
\[
\frac{1}{n\sqrt{n}} \norm{A}_2 \leq \norm{A}_1 \leq n\sqrt{n} \norm{A}_2.
\]
Similarly, we also have
\[
\begin{align*}
\frac{1}{n} \norm{A}_{\infty} \leq & \norm{A}_1 \leq n \norm{A}_{\infty} \\
\frac{1}{\sqrt{n}} \norm{A}_{\infty} \leq & \norm{A}_2 \leq \sqrt{n} \norm{A}_{\infty}
\end{align*}.
\]
The equivalence of matrix norms implies the equivalence of their induced metrics and topologies on \(\mathbb{R}^{n \times n}\).
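These bounds, with the loose constants derived above, can likewise be spot-checked on random matrices (a sketch assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(5)
n, tol = 5, 1e-12

for _ in range(200):
    A = rng.normal(size=(n, n))
    m1 = np.linalg.norm(A, 1)
    m2 = np.linalg.norm(A, 2)
    minf = np.linalg.norm(A, np.inf)
    # The three equivalence chains derived in the text.
    assert m2 / (n * np.sqrt(n)) <= m1 + tol and m1 <= n * np.sqrt(n) * m2 + tol
    assert minf / n <= m1 + tol and m1 <= n * minf + tol
    assert minf / np.sqrt(n) <= m2 + tol and m2 <= np.sqrt(n) * minf + tol

print("all matrix-norm equivalence bounds hold on all samples")
```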
