# Softmax Loss: Gradient Computation

## Softmax Gradient

\begin{align} a_{j} &= \frac{\exp(z_{j})}{\sum_{k=1}^K \exp(z_{k})} \quad \forall j \in 1,\dots,K \\ \frac{\partial a_{j}}{\partial z_{j}} &= \frac{\exp(z_{j})\cdot(\Sigma - \exp(z_{j}))}{\Sigma^2} = a_j(1 - a_j) \\ \frac{\partial a_{k}}{\partial z_{j}} &= \frac{-\exp(z_{k}) \cdot \exp(z_{j})}{\Sigma^2} = -a_j a_k \quad (k \ne j) \end{align}

where $\Sigma = \sum_{k=1}^K \exp(z_{k})$ denotes the normalizer.
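Both cases can be spot-checked numerically. Below is a minimal sketch (the logits `z` are made up for illustration) comparing finite differences against the closed forms:

```python
import numpy as np

def softmax(z):
    # Shift by the max for numerical stability; the output is unchanged.
    e = np.exp(z - np.max(z))
    return e / np.sum(e)

# Hypothetical 3-class logits to spot-check both derivative formulas.
z = np.array([1.0, 2.0, 0.5])
a = softmax(z)
eps = 1e-6

z_plus = z.copy()
z_plus[0] += eps
a_plus = softmax(z_plus)

# da_0/dz_0 should match a_0 * (1 - a_0).
print((a_plus[0] - a[0]) / eps, a[0] * (1 - a[0]))
# da_1/dz_0 (the k != j case) should match -a_0 * a_1.
print((a_plus[1] - a[1]) / eps, -a[0] * a[1])
```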

$a_j^{l+1}$ can be interpreted as the probability that the observed data $a^l$ belongs to class $j$, i.e., as a likelihood.

\begin{align} \frac{\partial a}{\partial z_k} = \begin{bmatrix} \frac{\partial a_1}{\partial z_k} \\ \vdots \\ \frac{\partial a_k}{\partial z_k} \\ \vdots \\ \frac{\partial a_K}{\partial z_k} \end{bmatrix} = \begin{bmatrix} -a_1 \\ \vdots \\ 1-a_k \\ \vdots \\ -a_K \end{bmatrix} a_k = \left( \begin{bmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{bmatrix} - a \right) a_k \end{align}
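Stacking these columns side by side gives the full Jacobian in a compact matrix form, which follows directly from the column formula above (column $k$ of $\operatorname{diag}(a) - a\,a^T$ is exactly $(e_k - a)\,a_k$):

$$\frac{\partial a}{\partial z} = \operatorname{diag}(a) - a\,a^{T}$$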

\begin{align} \frac{\partial E}{\partial z_k} &= \frac{\partial E}{\partial a}\cdot\frac{\partial a}{\partial z_k} = \left(\frac{\partial E}{\partial a_k} - \left[\frac{\partial E}{\partial a}\right]^T a\right) a_k \\ \frac{\partial E}{\partial z} &= \left(\frac{\partial E}{\partial a} - \left[\frac{\partial E}{\partial a}\right]^T a\right) \odot a \end{align}

Here $\left[\frac{\partial E}{\partial a}\right]^T a$ is a scalar, subtracted from every component, and $\odot$ denotes element-wise multiplication.

In code (NumPy-style, with Caffe's `top`/`bottom` naming):

```python
import numpy as np

# top_data is the softmax output a; top_diff is the upstream gradient dE/da.
# np.dot computes the scalar [dE/da]^T a; * is element-wise multiplication.
bottom_diff = (top_diff - np.dot(top_diff, top_data)) * top_data
```
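As a sanity check (a sketch with made-up values for `top_data` and `top_diff`), the element-wise shortcut agrees with multiplying by the explicit Jacobian $\operatorname{diag}(a) - a\,a^T$:

```python
import numpy as np

top_data = np.array([0.2, 0.5, 0.3])    # softmax output a (hypothetical)
top_diff = np.array([0.1, -0.4, 0.2])   # upstream gradient dE/da (hypothetical)

# Element-wise shortcut from the formula above.
bottom_diff = (top_diff - np.dot(top_diff, top_data)) * top_data

# Explicit Jacobian; it is symmetric, so J @ top_diff = J^T @ top_diff.
J = np.diag(top_data) - np.outer(top_data, top_data)
print(np.allclose(bottom_diff, J @ top_diff))  # True
```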


## Softmax Loss Gradient

\begin{align} E &= -\sum^K_{k} y_k \log(a_{k}) \\ \frac{\partial E}{\partial a_j} &= -\sum^K_{k} \frac{y_k}{a_k} \cdot \frac{\partial a_k}{\partial a_j} = -\frac{y_j}{a_j} \end{align}
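A quick numeric sketch of this formula (the values of `a` and `y` are made up): because $y$ is one-hot, the gradient is zero everywhere except at the true class.

```python
import numpy as np

a = np.array([0.2, 0.5, 0.3])   # softmax output (hypothetical)
y = np.array([0.0, 1.0, 0.0])   # one-hot label

E = -np.sum(y * np.log(a))      # cross-entropy loss: -log(0.5)
dE_da = -y / a                  # [0, -2, 0]: nonzero only at the true class
```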

\begin{align} \frac{\partial E}{\partial b_j^{l+1}} &= \frac{\partial E}{\partial z_j^{l+1}} = \sum_k \frac{\partial E}{\partial a_k^{l+1}} \cdot \frac{\partial a_k^{l+1}}{\partial z_j^{l+1}} \\ &= -\frac{y_j^{l+1}}{a_j^{l+1}} \cdot a_j^{l+1}(1 - a_j^{l+1}) + \sum_{k\ne j}\left[-\frac{y_k^{l+1}}{a_k^{l+1}} \cdot \left(-a_j^{l+1} a_k^{l+1}\right)\right] \\ &= -y_j^{l+1} + y_j^{l+1} a_j^{l+1} + \sum_{k\ne j} y_k^{l+1} a_j^{l+1} \\ &= a_j^{l+1} - y_j^{l+1} \qquad \text{(since } \textstyle\sum_k y_k^{l+1} = 1 \text{ for one-hot labels)} \\ \frac{\partial E}{\partial w_{ij}^{l+1}} &= \frac{\partial E}{\partial z_j^{l+1}} \cdot \frac{\partial z_j^{l+1}}{\partial w_{ij}^{l+1}} = (a_j^{l+1} - y_j^{l+1})\, a_i^l \end{align}
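In vector form, the weight gradient is the outer product of the previous layer's activations with $a^{l+1} - y^{l+1}$. A minimal sketch with hypothetical layer sizes:

```python
import numpy as np

a_prev = np.array([0.5, -1.0, 0.25, 2.0])  # a^l (hypothetical, 4 units)
a_out = np.array([0.2, 0.5, 0.3])          # softmax output a^{l+1} (3 classes)
y = np.array([0.0, 1.0, 0.0])              # one-hot label

dE_dz = a_out - y                  # gradient at the pre-activation z^{l+1}
dE_db = dE_dz                      # bias gradient, from the first line above
dE_dW = np.outer(a_prev, dE_dz)    # dE/dw_ij = (a_j - y_j) * a_i, shape (4, 3)
```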

In code:

```python
# prob_data is the softmax output from the forward pass;
# label_data is the one-hot encoding of the labels.
bottom_diff = prob_data - label_data
```
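For a mini-batch (a sketch assuming `prob_data` and `label_data` are arrays of shape `(N, K)`), the same subtraction applies row by row; frameworks that average the loss over the batch also divide the gradient by `N`:

```python
import numpy as np

N, K = 4, 3                                      # hypothetical batch/class sizes
prob_data = np.full((N, K), 1.0 / K)             # softmax outputs (forward pass)
label_data = np.eye(K)[np.array([0, 2, 1, 0])]   # one-hot labels, shape (4, 3)

# Gradient of the mean cross-entropy loss over the batch.
bottom_diff = (prob_data - label_data) / N
```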


