Preface: Definitions and Layout
Depending on whether the variable being differentiated and the variable we differentiate with respect to are scalars, vectors, or matrices, there are 9 possible definitions of matrix derivatives; let us discuss the commonly used ones.
The derivative of a vector with respect to a scalar is simply each component of the vector differentiated with respect to the scalar, with the results arranged back into a vector. The same idea applies to scalar-by-vector, vector-by-vector, vector-by-matrix, matrix-by-vector, and matrix-by-matrix derivatives. In short, vector/matrix differentiation is nothing but multivariable calculus: the variables, the function values, and the scalar derivatives are merely arranged into vectors and matrices for more concise notation and computation. One point that must be agreed upon is how the componentwise derivatives are arranged. There are two layout conventions, numerator layout and denominator layout, and either choice is correct; we only need to derive a self-consistent set of rules (chain rule, differential definition, and so on) for whichever layout we pick.
The two layout conventions
In numerator layout, the shape of the derivative follows the numerator.
In denominator layout, the shape of the derivative follows the denominator.
For example, when a scalar \(y\) is differentiated with respect to a matrix \(X\): in denominator layout the result has the same dimensions \(m\times n\) as \(X\); in numerator layout the result is \(n\times m\).
In practice, a mixed-layout convention is usually adopted:
- For a vector or matrix differentiated with respect to a scalar, use numerator layout (uncommon).
- For a scalar differentiated with respect to a vector or matrix, use denominator layout (the most common case).
- For a (column) vector differentiated with respect to a (column) vector there is some disagreement; below we take the numerator-layout Jacobian matrix as the standard.
\[\begin{gathered}\frac{\partial\mathbf{y}}{\partial\mathbf{x}}=\begin{pmatrix}\frac{\partial y_1}{\partial x_1}&\frac{\partial y_1}{\partial x_2}&\ldots&\frac{\partial y_1}{\partial x_n}\\\frac{\partial y_2}{\partial x_1}&\frac{\partial y_2}{\partial x_2}&\ldots&\frac{\partial y_2}{\partial x_n}\\\vdots&\vdots&\ddots&\vdots\\\frac{\partial y_m}{\partial x_1}&\frac{\partial y_m}{\partial x_2}&\ldots&\frac{\partial y_m}{\partial x_n}\end{pmatrix}\end{gathered}
\]
Concretely: differentiate each component of \(\mathbf{y}\) with respect to the vector \(\mathbf{x}\) (a scalar-by-vector derivative), transpose each resulting gradient, and stack them as rows; this gives the desired Jacobian matrix.
Some references define the Jacobian as \(\frac{\partial\mathbf{y}}{\partial\mathbf{x}^\mathbf{T}}\); the mathematical meaning is the same as above.
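As a quick sanity check on this layout, the Jacobian can be compared against finite differences; the map \(f\) below is a hypothetical example chosen only for illustration (a NumPy sketch):

```python
import numpy as np

# Numerically check the numerator-layout Jacobian J[i, j] = dy_i / dx_j
# for a small example map y = f(x) (chosen here just for illustration).
def f(x):
    return np.array([x[0] ** 2, x[0] * x[1], np.sin(x[1])])

def analytic_jacobian(x):
    return np.array([
        [2 * x[0], 0.0],
        [x[1], x[0]],
        [0.0, np.cos(x[1])],
    ])

def numeric_jacobian(f, x, eps=1e-6):
    # Central differences, one input coordinate at a time.
    y0 = f(x)
    J = np.zeros((y0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

x = np.array([0.7, -1.3])
err = np.abs(numeric_jacobian(f, x) - analytic_jacobian(x)).max()
print(err)  # the m-by-n arrangement matches the definition above
```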
Derivative of a Scalar with Respect to a Matrix
Definition
Recall how the differential of a scalar function is written. For a single-variable function,
\[df=f^{\prime}(x)dx
\]
For a multivariable function, \(df\) is the total change in the function caused by changes in all of its variables:
\[df=\sum_{i=1}^n\frac{\partial f}{\partial x_i}dx_i
\]
Collecting the variables into a column vector \(\boldsymbol{x}\), this can be written compactly as
\[df=\sum_{i=1}^n\frac{\partial f}{\partial x_i}dx_i=\frac{\partial f}{\partial\boldsymbol{x}}^Td\boldsymbol{x}
\]
where \(\frac{\partial f}{\partial\boldsymbol{x}}^T\) is the gradient of \(f\) written as a row vector; the differential \(df\) is the inner product of the derivative \(\frac{\partial f}{\partial\boldsymbol{x}}\) and the differential vector \(d\boldsymbol{x}\).
- When the variable is a matrix \(X\), \(df\) can be expressed with the help of the matrix trace:
\[df=\sum_{i=1}^m\sum_{j=1}^n\frac{\partial f}{\partial X_{ij}}dX_{ij}=\mathrm{tr}\left(\frac{\partial f}{\partial X}^TdX\right)
\]
where the differential \(df\) is the inner product of the derivative \(\frac{\partial f}{\partial X}\) and the differential matrix \(dX\). This also naturally yields the denominator-layout definition of the derivative of a scalar with respect to a matrix: differentiate \(f\) with respect to each entry of \(X\) in place, so that \(\frac{\partial f}{\partial X}\) has the same shape as \(X\).
Appendix: proof of the identity above.
\[Proof.\begin{aligned}
\mathrm{d}f(\boldsymbol{X})& =\frac{\partial f}{\partial x_{11}}\mathrm{d}x_{11}+\frac{\partial f}{\partial x_{12}}\mathrm{d}x_{12}+\cdots+\frac{\partial f}{\partial x_{1n}}\mathrm{d}x_{1n} \\
&+\frac{\partial f}{\partial x_{21}}\mathrm{d}x_{21}+\frac{\partial f}{\partial x_{22}}\mathrm{d}x_{22}+\cdots+\frac{\partial f}{\partial x_{2n}}\mathrm{d}x_{2n} \\
&+\ldots \\
&+\frac{\partial f}{\partial x_{m1}}\mathrm{d}x_{m1}+\frac{\partial f}{\partial x_{m2}}\mathrm{d}x_{m2}+\cdots+\frac{\partial f}{\partial x_{mn}}\mathrm{d}x_{mn} \\
df=&tr(\begin{bmatrix}\frac{\partial f}{\partial x_{11}}&\frac{\partial f}{\partial x_{21}}&\cdots&\frac{\partial f}{\partial x_{m1}}\\\frac{\partial f}{\partial x_{12}}&\frac{\partial f}{\partial x_{22}}&\cdots&\frac{\partial f}{\partial x_{m2}}\\\vdots&\vdots&\vdots&\vdots\\\frac{\partial f}{\partial x_{1n}}&\frac{\partial f}{\partial x_{2n}}&\cdots&\frac{\partial f}{\partial x_{mn}}\end{bmatrix}_{n\times m}\begin{bmatrix}\mathrm{d}x_{11}&\mathrm{d}x_{12}&\cdots&\mathrm{d}x_{1n}\\\mathrm{d}x_{21}&\mathrm{d}x_{22}&\cdots&\mathrm{d}x_{2n}\\\vdots&\vdots&\vdots&\vdots\\\mathrm{d}x_{m1}&\mathrm{d}x_{m2}&\cdots&\mathrm{d}x_{mn}\end{bmatrix}_{m\times n})
\end{aligned}\]
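A numerical spot-check of \(df=\mathrm{tr}\left(\frac{\partial f}{\partial X}^TdX\right)\); the test function \(f(X)=\sum_{i,j}X_{ij}^3\) is an arbitrary illustrative choice, with elementwise derivative matrix \(3X\odot X\):

```python
import numpy as np

# Check df = tr((df/dX)^T dX) for the example f(X) = sum(X**3)
# (elementwise cube; its derivative matrix is 3*X**2, same shape as X).
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))
G = 3 * X ** 2                      # df/dX in denominator layout
dX = 1e-6 * rng.standard_normal((3, 4))

df_true = np.sum((X + dX) ** 3) - np.sum(X ** 3)
df_lin = np.trace(G.T @ dX)         # first-order prediction from the trace formula
err = abs(df_true - df_lin)
print(err)
```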
Properties of the Matrix Differential
- 1. Sum/difference: \(d(X\pm Y)=dX\pm dY\); matrix product: \(d(XY)=(dX)Y+XdY\); transpose: \(d(X^T)=(dX)^T\); trace: \(d\operatorname{tr}(X)=\operatorname{tr}(dX)\).
\[Proof.\ \mathrm{omitted.}
\]
- 2. Inverse: \(dX^{-1}=-X^{-1}dXX^{-1}\). This follows by differentiating both sides of \(XX^{-1}=I\).
- 3. Determinant: \(d|X|=\operatorname{tr}(X^{*}dX)\), where \(X^{*}\) is the adjugate of \(X\); when \(X\) is invertible this can also be written \(d|X|=|X|\operatorname{tr}(X^{-1}dX)\). This can be proved via the Laplace expansion.
- 4. Elementwise product: \(d(X\odot Y)=dX\odot Y+X\odot dY\), where \(\odot\) denotes the elementwise product of same-sized matrices \(X,Y\).
- 5. Elementwise function: \(d\sigma(X)=\sigma^{\prime}(X)\odot dX\), where \(\sigma(X)=[\sigma(X_{ij})]\) applies a scalar function to each entry. For example,
\[\left.\begin{aligned}X=\begin{bmatrix}X_{11}&X_{12}\\X_{21}&X_{22}\end{bmatrix},d\sin(X)=\begin{bmatrix}\cos X_{11}dX_{11}&\cos X_{12}dX_{12}\\\cos X_{21}dX_{21}&\cos X_{22}dX_{22}\end{bmatrix}=\cos(X)\odot dX\end{aligned}\right.
\]
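The elementwise rule can likewise be spot-checked numerically (the \(\sin\) example above, sketched in NumPy):

```python
import numpy as np

# Check d(sin(X)) = cos(X) ⊙ dX to first order on a random 2x2 matrix.
rng = np.random.default_rng(1)
X = rng.standard_normal((2, 2))
dX = 1e-7 * rng.standard_normal((2, 2))

lhs = np.sin(X + dX) - np.sin(X)    # actual change in sigma(X)
rhs = np.cos(X) * dX                # sigma'(X) ⊙ dX
err = np.abs(lhs - rhs).max()
print(err)
```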
Trace Tricks
- 1. A scalar equals its own trace: \(a=\operatorname{tr}(a)\)
- 2. Transpose: \(\operatorname{tr}(A^T)=\operatorname{tr}(A)\)
- 3. Linearity: \(\operatorname{tr}(A\pm B)=\operatorname{tr}(A)\pm\operatorname{tr}(B)\)
- 4. Cyclic exchange in a matrix product: \(\operatorname{tr}(AB)=\operatorname{tr}(BA)\), where \(A\) and \(B^T\) have the same shape. Both sides equal \(\sum_{i,j}A_{ij}B_{ji}\)
- 5. Exchanging a matrix product with an elementwise product: \(\operatorname{tr}(A^T(B\odot C))=\operatorname{tr}((A\odot B)^TC)\), where \(A,B,C\) have the same shape. Both sides equal \(\sum_{i,j}A_{ij}B_{ij}C_{ij}\)

If a scalar function \(f\) is built from a matrix \(X\) by addition/subtraction, multiplication, inverse, determinant, elementwise functions, and so on, then differentiate \(f\) using the corresponding rules, wrap \(df\) in a trace using the tricks above, move every other factor to the left of \(dX\), and read off the derivative by matching against \(df=\mathrm{tr}\left(\frac{\partial f}{\partial X}^TdX\right)\). Note: properties 4 and 5 are frequently used to reorder factors inside the trace.
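Tricks 4 and 5, the two used most often below, are easy to confirm numerically on random matrices (a NumPy sketch):

```python
import numpy as np

# Verify trace tricks 4 and 5 on random matrices.
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 5))
B = rng.standard_normal((5, 3))
ok4 = np.isclose(np.trace(A @ B), np.trace(B @ A))           # trick 4
ok4_sum = np.isclose(np.trace(A @ B), np.sum(A * B.T))       # both equal sum_ij A_ij B_ji

P, Q, R = rng.standard_normal((3, 4, 4))                     # three same-shape matrices
ok5 = np.isclose(np.trace(P.T @ (Q * R)), np.trace((P * Q).T @ R))   # trick 5
ok5_sum = np.isclose(np.trace(P.T @ (Q * R)), np.sum(P * Q * R))     # = sum_ij A_ij B_ij C_ij
print(ok4, ok4_sum, ok5, ok5_sum)
```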
Solving Some Classic Problems
- \(f=\boldsymbol{a}^TX\boldsymbol{b}\): find \(\frac{\partial f}{\partial X}\), where \(a,b\) are column vectors and \(X\) is a matrix.
\[df=a^{T}dXb\\df=\operatorname{tr}(a^{T}dXb)=\operatorname{tr}(ba^{T}dX)\\\frac{\partial f}{\partial X}=(ba^{T})^{T}=ab^{T}
\]
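A finite-difference check of this result (the shapes and random data below are chosen arbitrarily):

```python
import numpy as np

# Numerically verify d(a^T X b)/dX = a b^T.
rng = np.random.default_rng(3)
a, b = rng.standard_normal(3), rng.standard_normal(4)
X = rng.standard_normal((3, 4))

def f(X):
    return a @ X @ b

eps = 1e-6
G_num = np.zeros_like(X)
for i in range(X.shape[0]):
    for j in range(X.shape[1]):
        E = np.zeros_like(X)
        E[i, j] = eps
        G_num[i, j] = (f(X + E) - f(X - E)) / (2 * eps)

err = np.abs(G_num - np.outer(a, b)).max()
print(err)
```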
- \(f=\boldsymbol{a}^T\exp(X\boldsymbol{b})\): find \(\frac{\partial f}{\partial X}\), where \(\boldsymbol{a}\) is an \(m\times1\) column vector, \(X\) is an \(m\times n\) matrix, \(\boldsymbol{b}\) is an \(n\times1\) column vector, exp is applied elementwise, and \(f\) is a scalar.
\[\begin{aligned}
df &= a^{T}(\exp(Xb) \odot d(Xb)) \\
df &= \text{tr}(a^{T}[\exp(Xb) \odot d(Xb)]) \\
df &= \text{tr}((a \odot \exp(Xb))^{T}d(Xb)) \\
df &= \text{tr}((a \odot \exp(Xb))^{T}(dX)b) = \text{tr}(b(a \odot \exp(Xb))^TdX) = \text{tr}(((a \odot \exp(Xb))b^T)^TdX) \\
\Rightarrow \frac{\partial f}{\partial X} &= (\boldsymbol{a} \odot \exp(X\boldsymbol{b}))\boldsymbol{b}^T
\end{aligned} \]
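Again this can be verified entry by entry with finite differences (random data; \(X\) is scaled down a little here to keep \(\exp\) well-behaved):

```python
import numpy as np

# Numerically verify d(a^T exp(Xb))/dX = (a ⊙ exp(Xb)) b^T.
rng = np.random.default_rng(4)
m, n = 3, 4
a, b = rng.standard_normal(m), rng.standard_normal(n)
X = 0.3 * rng.standard_normal((m, n))

def f(X):
    return a @ np.exp(X @ b)

G_ana = np.outer(a * np.exp(X @ b), b)   # (a ⊙ exp(Xb)) b^T

eps = 1e-6
G_num = np.zeros_like(X)
for i in range(m):
    for j in range(n):
        E = np.zeros_like(X)
        E[i, j] = eps
        G_num[i, j] = (f(X + E) - f(X - E)) / (2 * eps)

err = np.abs(G_num - G_ana).max()
print(err)
```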
- \(f=\operatorname{tr}(Y^TMY),\ Y=\sigma(WX)\): find \(\frac{\partial f}{\partial X}\), where \(W\) is an \(l\times m\) matrix, \(X\) is an \(m\times n\) matrix, \(Y\) is an \(l\times n\) matrix, \(M\) is an \(l\times l\) symmetric matrix, \(\sigma\) is an elementwise function, and \(f\) is a scalar.
\[ \begin{aligned}
df &= \operatorname{tr}((dY)^TMY) + \operatorname{tr}(Y^TMdY) \\
&= \operatorname{tr}(Y^TM^TdY) + \operatorname{tr}(Y^TMdY) \\
&= \operatorname{tr}(Y^T(M+M^T)dY) \\
\frac{\partial f}{\partial Y} &= (M+M^T)Y = 2MY \\
df &= \operatorname{tr}\left(\frac{\partial f}{\partial Y}^TdY\right) \\
df &= \operatorname{tr}\left(\frac{\partial f}{\partial Y}^T(\sigma^{\prime}(WX)\odot(WdX))\right) \\
&= \operatorname{tr}\left(\left(\frac{\partial f}{\partial Y}\odot\sigma^{\prime}(WX)\right)^TWdX\right) \\
\frac{\partial f}{\partial X} &= W^T\left(\frac{\partial f}{\partial Y}\odot\sigma^{\prime}(WX)\right) = W^T\left((2M\sigma(WX))\odot\sigma^{\prime}(WX)\right)
\end{aligned} \]
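A numerical check with \(\sigma=\tanh\), so that \(\sigma^{\prime}(WX)=1-Y\odot Y\); the shapes and data are arbitrary:

```python
import numpy as np

# Numerically verify df/dX for f = tr(Y^T M Y), Y = tanh(W X), M symmetric.
# Claimed gradient: W^T ((2 M Y) ⊙ sigma'(WX)), with sigma'(WX) = 1 - Y**2 for tanh.
rng = np.random.default_rng(5)
l, m, n = 3, 4, 2
W = rng.standard_normal((l, m))
X = rng.standard_normal((m, n))
S = rng.standard_normal((l, l))
M = S + S.T                              # make M symmetric

def f(X):
    Y = np.tanh(W @ X)
    return np.trace(Y.T @ M @ Y)

Y = np.tanh(W @ X)
G_ana = W.T @ ((2 * M @ Y) * (1 - Y ** 2))

eps = 1e-6
G_num = np.zeros_like(X)
for i in range(m):
    for j in range(n):
        E = np.zeros_like(X)
        E[i, j] = eps
        G_num[i, j] = (f(X + E) - f(X - E)) / (2 * eps)

err = np.abs(G_num - G_ana).max()
print(err)
```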
- \(l=\|X\boldsymbol{w}-\boldsymbol{y}\|^2\): find the least-squares estimate of \(\boldsymbol{w}\), i.e. the zero of \(\frac{\partial l}{\partial\boldsymbol{w}}\), where \(\boldsymbol{y}\) is an \(m\times1\) column vector, \(X\) is an \(m\times n\) matrix, \(\boldsymbol{w}\) is an \(n\times1\) column vector, and \(l\) is a scalar.
\[\begin{aligned}
l &=(X\boldsymbol{w}-\boldsymbol{y})^T(X\boldsymbol{w}-\boldsymbol{y})
\\dl &=(Xd\boldsymbol{w})^T(X\boldsymbol{w}-\boldsymbol{y})+(X\boldsymbol{w}-\boldsymbol{y})^T(Xd\boldsymbol{w})
\\dl &=2(X\boldsymbol{w}-\boldsymbol{y})^TXd\boldsymbol{w}
\end{aligned}
\]
Matching against the link between derivative and differential, \(dl=\frac{\partial l}{\partial\boldsymbol{w}}^Td\boldsymbol{w}\), gives \(\frac{\partial l}{\partial\boldsymbol{w}}=2X^T(X\boldsymbol{w}-\boldsymbol{y})\). Setting \(\frac{\partial l}{\partial\boldsymbol{w}}=\mathbf{0}\), i.e. \(X^TX\boldsymbol{w}=X^T\boldsymbol{y}\), yields the least-squares estimate
\(\boldsymbol{w}=(X^TX)^{-1}X^T\boldsymbol{y}.\)
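The normal equations can be solved directly and compared against NumPy's own least-squares routine; at the solution the gradient \(2X^T(X\boldsymbol{w}-\boldsymbol{y})\) should vanish:

```python
import numpy as np

# Solve the normal equations X^T X w = X^T y and confirm the gradient vanishes.
rng = np.random.default_rng(6)
m, n = 50, 3
X = rng.standard_normal((m, n))
y = rng.standard_normal(m)

w = np.linalg.solve(X.T @ X, X.T @ y)       # least-squares estimate
grad = 2 * X.T @ (X @ w - y)                # dl/dw evaluated at the solution
err_grad = np.abs(grad).max()

w_ref, *_ = np.linalg.lstsq(X, y, rcond=None)   # NumPy's own solver agrees
err_w = np.abs(w - w_ref).max()
print(err_grad, err_w)
```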
Derivative of a Vector with Respect to a Vector
How to Compute
An example: in a DNN, the vectors \(z^{l+1}\) and \(z^{l}\) at layers \(l+1\) and \(l\) are related by
\[z^{l+1}=W^{l+1}a^l+b^{l+1}=W^{l+1}\sigma(z^l)+b^{l+1}
\]
Find: \(\frac{\partial z^{l+1}}{\partial z^l}\)
First use scalar-by-vector differentiation to obtain the derivative of each component of \(z^{l+1}\), then arrange the results into the Jacobian matrix.
Consider the derivative of \(z_{j}^{l+1}\) with respect to \(z^l\). Writing \(W_{j}^{l+1}\) for the \(j\)-th row of \(W^{l+1}\),
\[dz_{j}^{l+1}=W_{j}^{l+1}(\sigma^{'}(z^l) \odot dz^l)
\]
\[dz_{j}^{l+1}=((W_{j}^{l+1})^{T} \odot \sigma^{'}(z^l))^{T} dz^l
\]
Stacking the transposed gradients as rows, for \(j=1\) up to the number of rows of \(W^{l+1}\), the result is
\[\frac{\partial z^{l+1}}{\partial z^l}=W^{l+1}diag(\sigma^{'}(z^l))
\]
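A finite-difference check of this Jacobian with \(\sigma=\tanh\) (arbitrary shapes and data):

```python
import numpy as np

# Numerically check d z^{l+1} / d z^l = W diag(sigma'(z^l)) for sigma = tanh.
rng = np.random.default_rng(7)
m_out, m_in = 3, 4
W = rng.standard_normal((m_out, m_in))
b = rng.standard_normal(m_out)
z = rng.standard_normal(m_in)

def layer(z):
    return W @ np.tanh(z) + b

J_ana = W @ np.diag(1 - np.tanh(z) ** 2)   # W diag(sigma'(z))

eps = 1e-6
J_num = np.zeros((m_out, m_in))
for j in range(m_in):
    dz = np.zeros(m_in)
    dz[j] = eps
    J_num[:, j] = (layer(z + dz) - layer(z - dz)) / (2 * eps)

err = np.abs(J_num - J_ana).max()
print(err)
```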
The Chain Rule
Chain rule for vector-by-vector derivatives
Theorem:
If column vectors satisfy the dependency \(\boldsymbol{x}\to\boldsymbol{y}\to\boldsymbol{z}\), with dimensions \(n,p,m\) respectively, then
- \(\frac{\partial z}{\partial x}=\frac{\partial z}{\partial y}\frac{\partial y}{\partial x}\) (in numerator layout),
- \(\frac{\partial z}{\partial x}=\frac{\partial y}{\partial x}\frac{\partial z}{\partial y}\) (in denominator layout)
Proof for the numerator layout:
\[\frac{\partial z}{\partial x}=\begin{bmatrix}\frac{\partial z_1}{\partial x_1}&\frac{\partial z_1}{\partial x_2}&\cdots&\frac{\partial z_1}{\partial x_n}\\\vdots&\ddots&&\vdots\\\frac{\partial z_m}{\partial x_1}&\frac{\partial z_m}{\partial x_2}&\cdots&\frac{\partial z_m}{\partial x_n}\end{bmatrix}
\]
\[=\begin{bmatrix}\sum_{i=1}^p\frac{\partial z_1}{\partial y_i}\frac{\partial y_i}{\partial x_1}&\sum_{i=1}^p\frac{\partial z_1}{\partial y_i}\frac{\partial y_i}{\partial x_2}&\cdots&\sum_{i=1}^p\frac{\partial z_1}{\partial y_i}\frac{\partial y_i}{\partial x_n}\\\vdots&\ddots&&\vdots\\\sum_{i=1}^p\frac{\partial z_m}{\partial y_i}\frac{\partial y_i}{\partial x_1}&\sum_{i=1}^p\frac{\partial z_m}{\partial y_i}\frac{\partial y_i}{\partial x_2}&\cdots&\sum_{i=1}^p\frac{\partial z_m}{\partial y_i}\frac{\partial y_i}{\partial x_n}\end{bmatrix}
\]
\[=\begin{bmatrix}\frac{\partial z_1}{\partial y_1}&\frac{\partial z_1}{\partial y_2}&\cdots&\frac{\partial z_1}{\partial y_p}\\\vdots&\ddots&&\vdots\\\frac{\partial z_m}{\partial y_1}&\frac{\partial z_m}{\partial y_2}&\cdots&\frac{\partial z_m}{\partial y_p}\end{bmatrix}\begin{bmatrix}\frac{\partial y_1}{\partial x_1}&\frac{\partial y_1}{\partial x_2}&\cdots&\frac{\partial y_1}{\partial x_n}\\\vdots&\ddots&&\vdots\\\frac{\partial y_p}{\partial x_1}&\frac{\partial y_p}{\partial x_2}&\cdots&\frac{\partial y_p}{\partial x_n}\end{bmatrix}
\]
\[=\frac{\partial z}{\partial y}\frac{\partial y}{\partial x}
\]
This completes the proof. The extension to a chain of n vectors also holds, since a longer chain is just a repeated application of the two-step case above.
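The numerator-layout chain rule can be confirmed on a concrete composite map; the functions below are hypothetical examples:

```python
import numpy as np

# Check the numerator-layout chain rule J_zx = J_zy @ J_yx on the composite map
# x -> y = tanh(A x) -> z = B (y ** 2)   (an arbitrary illustrative example).
rng = np.random.default_rng(8)
n, p, m = 3, 4, 2
A = rng.standard_normal((p, n))
B = rng.standard_normal((m, p))
x = rng.standard_normal(n)

y = np.tanh(A @ x)
J_yx = (1 - y ** 2)[:, None] * A        # p x n Jacobian of y w.r.t. x
J_zy = B * (2 * y)[None, :]             # m x p Jacobian of z w.r.t. y
J_chain = J_zy @ J_yx

def composite(x):
    return B @ (np.tanh(A @ x) ** 2)

eps = 1e-6
J_num = np.zeros((m, n))
for j in range(n):
    dx = np.zeros(n)
    dx[j] = eps
    J_num[:, j] = (composite(x + dx) - composite(x - dx)) / (2 * eps)

err = np.abs(J_num - J_chain).max()
print(err)
```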
Chain rule for a scalar through multiple vectors
Theorem:
For a scalar at the end of a longer chain of vectors, say \(\mathbf{y_1}\to\mathbf{y_2}\to\ldots\to\mathbf{y_n}\to z\), we have (each \(\frac{\partial\mathbf{y_{i}}}{\partial\mathbf{y_{i-1}}}\) below is a numerator-layout Jacobian, while \(\frac{\partial z}{\partial\mathbf{y_n}}\) and \(\frac{\partial z}{\partial\mathbf{y_1}}\) are denominator-layout gradients)
\[\frac{\partial z}{\partial\mathbf{y_1}}=(\frac{\partial\mathbf{y_n}}{\partial\mathbf{y_{n-1}}}\frac{\partial\mathbf{y_{n-1}}}{\partial\mathbf{y_{n-2}}}\ldots\frac{\partial\mathbf{y_2}}{\partial\mathbf{y_1}})^T\frac{\partial z}{\partial\mathbf{y_n}}
\]
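A sketch of this formula for a two-step chain, with \(z=\frac{1}{2}\|\mathbf{y_2}\|^2\) and \(\mathbf{y_2}=\tanh(W\mathbf{y_1})\) as illustrative choices:

```python
import numpy as np

# Check the gradient chain rule for y1 -> y2 -> z:
# dz/dy1 = (dy2/dy1)^T dz/dy2, with the Jacobian in numerator layout.
rng = np.random.default_rng(9)
W = rng.standard_normal((3, 4))
y1 = rng.standard_normal(4)

y2 = np.tanh(W @ y1)
grad_y2 = y2                             # z = 0.5 * ||y2||^2  =>  dz/dy2 = y2
J = (1 - y2 ** 2)[:, None] * W           # numerator-layout Jacobian dy2/dy1
grad_ana = J.T @ grad_y2

def z_of(y1):
    return 0.5 * np.sum(np.tanh(W @ y1) ** 2)

eps = 1e-6
grad_num = np.zeros(4)
for j in range(4):
    d = np.zeros(4)
    d[j] = eps
    grad_num[j] = (z_of(y1 + d) - z_of(y1 - d)) / (2 * eps)

err = np.abs(grad_num - grad_ana).max()
print(err)
```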
Chain rule for a scalar through matrices
Theorem: under the dependency \(\mathbf{X}\to\mathbf{Y}\to z\), the chain rule reads
\[\frac{\partial z}{\partial x_{ij}}=\sum_{k,l}\frac{\partial z}{\partial Y_{kl}}\frac{\partial Y_{kl}}{\partial X_{ij}}=tr((\frac{\partial z}{\partial Y})^T\frac{\partial Y}{\partial X_{ij}})
\]
We have not given a chain rule that works on whole matrices, mainly because matrix-by-matrix differentiation has a rather involved definition that we have not covered; we can only state the chain rule for a single scalar entry of the matrix. This entrywise method is not very practical, since we do not want to differentiate by definition and then rearrange the results every time.
Although we lack a general scalar-by-matrix chain rule, for certain linear dependencies we can still derive some useful conclusions.
Consider this common problem: \(A,X,B,Y\) are matrices, \(z\) is a scalar, and \(z=f(Y)=f(AX+B)\); we want \(\frac{\partial z}{\partial X}\). This situation arises frequently in machine learning. We cannot simply apply a matrix chain rule wholesale, because the matrix-by-matrix derivative is hard to handle.
Going back to basics, we try the definition-based approach, starting from the scalar chain rule above:
\[\frac{\partial z}{\partial x_{ij}}=\sum_{k,l}\frac{\partial z}{\partial Y_{kl}}\frac{\partial Y_{kl}}{\partial X_{ij}}
\]
For the second factor,
\[\frac{\partial Y_{kl}}{\partial X_{ij}}=\frac{\partial\sum_s(A_{ks}X_{sl})}{\partial X_{ij}}=\frac{\partial A_{ki}X_{il}}{\partial X_{ij}}=A_{ki}\delta_{lj}
\]
where \(\delta_{lj}\) is the Kronecker delta, equal to 1 when \(l=j\) and 0 otherwise.
The scalar chain rule then becomes:
\[\frac{\partial z}{\partial x_{ij}}=\sum_{k,l}\frac{\partial z}{\partial Y_{kl}}A_{ki}\delta_{lj}=\sum_k\frac{\partial z}{\partial Y_{kj}}A_{ki}
\]
Arranged as a matrix, this is:
\[\frac{\partial z}{\partial X}=A^T\frac{\partial z}{\partial Y}
\]
This yields four conclusions commonly used in deep learning:
\[z=f(Y),Y=AX+B\to\frac{\partial z}{\partial X}=A^T\frac{\partial z}{\partial Y}
\]
\[z=f(\mathbf{y}),\mathbf{y}=A\mathbf{x}+\mathbf{b}\to\frac{\partial z}{\partial\mathbf{x}}=A^T\frac{\partial z}{\partial\mathbf{y}}
\]
\[z=f(Y),Y=XA+B\to\frac{\partial z}{\partial X}=\frac{\partial z}{\partial Y}A^T
\]
\[z=f(\mathbf{y}),\mathbf{y}=X\mathbf{a}+\mathbf{b}\to\frac{\partial z}{\partial X}=\frac{\partial z}{\partial\mathbf{y}}\mathbf{a}^T
\]
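The first and third conclusions can be spot-checked with \(z=\sum_{i,j}\sin(Y_{ij})\), so that \(\frac{\partial z}{\partial Y}=\cos(Y)\) elementwise (an arbitrary test function):

```python
import numpy as np

# Spot-check conclusions 1 and 3 with z = sum(sin(Y)), so dz/dY = cos(Y).
rng = np.random.default_rng(10)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 2))
B = rng.standard_normal((3, 2))

def num_grad(f, X, eps=1e-6):
    # Entrywise central-difference gradient of a scalar function of a matrix.
    G = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            E = np.zeros_like(X)
            E[i, j] = eps
            G[i, j] = (f(X + E) - f(X - E)) / (2 * eps)
    return G

# Y = A X + B  =>  dz/dX = A^T dz/dY
G1 = A.T @ np.cos(A @ X + B)
err1 = np.abs(num_grad(lambda X: np.sum(np.sin(A @ X + B)), X) - G1).max()

# Y = X A + B  =>  dz/dX = (dz/dY) A^T   (here X is 4x2, A2 is 2x3, B2 is 4x3)
A2 = rng.standard_normal((2, 3))
B2 = rng.standard_normal((4, 3))
G3 = np.cos(X @ A2 + B2) @ A2.T
err3 = np.abs(num_grad(lambda X: np.sum(np.sin(X @ A2 + B2)), X) - G3).max()
print(err1, err3)
```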