Logistic Regression: Gradient Descent

The cost function for logistic regression:

\[J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[\, y^{(i)}\log\left(h_\theta\left(x^{(i)}\right)\right) + \left(1 - y^{(i)}\right)\log\left(1 - h_\theta\left(x^{(i)}\right)\right) \right]\]
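As a sanity check, here is a minimal NumPy sketch of this cost function. The names `X`, `y`, `theta`, `sigmoid`, and `cost` are illustrative, not from the post: `X` is an (m, n) design matrix, `y` a vector of 0/1 labels, and `theta` the parameter vector.

```python
import numpy as np

def sigmoid(z):
    """Logistic function 1 / (1 + e^(-z)), applied element-wise."""
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    """Cross-entropy cost J(theta) for logistic regression."""
    m = len(y)
    h = sigmoid(X @ theta)  # h_theta(x^(i)) for every example at once
    return -(1.0 / m) * np.sum(y * np.log(h) + (1.0 - y) * np.log(1.0 - h))
```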

The gradient descent algorithm:

Repeat {

\[\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta)\]

(simultaneously update all θ_j)

}

where

\[\frac{\partial}{\partial \theta_j} J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\left( h_\theta\left(x^{(i)}\right) - y^{(i)} \right) x_j^{(i)}\]

Substituting this in, the gradient descent algorithm becomes:

Repeat {

\[\theta_j := \theta_j - \alpha \frac{1}{m}\sum_{i=1}^{m}\left( h_\theta\left(x^{(i)}\right) - y^{(i)} \right) x_j^{(i)}\]

}
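A minimal sketch of this loop in NumPy, reusing the assumed `X`, `y`, and `sigmoid` names from above; `alpha` (the learning rate) and `num_iters` are hypothetical hyperparameters you would tune:

```python
def gradient_descent(theta, X, y, alpha=0.1, num_iters=1000):
    """Batch gradient descent for logistic regression."""
    m = len(y)
    for _ in range(num_iters):
        h = sigmoid(X @ theta)              # predictions for all m examples
        grad = (1.0 / m) * (X.T @ (h - y))  # partial J / partial theta_j, for all j
        theta = theta - alpha * grad        # simultaneous update of every theta_j
    return theta
```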

At first glance this looks identical to the gradient descent algorithm for linear regression, but it is not, because the hypothesis is defined differently:

In linear regression, \[h_\theta(x) = \theta^T x\]

In logistic regression, \[h_\theta(x) = \frac{1}{1 + e^{-\theta^T x}}\]
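To make the difference concrete, a tiny sketch of the two hypotheses side by side (function names are illustrative, and `sigmoid` is the helper defined earlier):

```python
def h_linear(theta, x):
    return theta @ x           # linear regression: theta^T x, unbounded output

def h_logistic(theta, x):
    return sigmoid(theta @ x)  # logistic regression: output squashed into (0, 1)
```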


Like linear regression, logistic regression also requires standardizing the input data (feature scaling) so that gradient descent converges faster.
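A minimal z-score standardization sketch, assuming `X` is the raw feature matrix; each column is shifted to mean 0 and scaled to unit variance before running gradient descent:

```python
mu = X.mean(axis=0)        # per-feature mean
sigma = X.std(axis=0)      # per-feature standard deviation
X_norm = (X - mu) / sigma  # standardized features: mean 0, variance 1
```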

 
