Abstract:
An alternative view of logistic regression: \[h_\theta(x) = \frac{1}{1 + e^{-\theta^T x}}\] If y = 1, we want hθ(x) ≈ 1, which corresponds to θᵀx ≫ 0. If y = 0, we want hθ(x) ≈ 0, which corresponds to θᵀx ≪ 0. For a single example (x, … Read more
posted @ 2018-11-01 19:15 by qkloveslife
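A minimal sketch of the hypothesis above, using numpy only (the θ and x values here are made up for illustration, not from the post):

```python
import numpy as np

def h(theta, x):
    """Logistic-regression hypothesis: sigmoid of theta^T x."""
    return 1.0 / (1.0 + np.exp(-theta @ x))

theta = np.array([-3.0, 1.0, 1.0])   # hypothetical parameters
x_pos = np.array([1.0, 4.0, 4.0])    # theta^T x = 5  >> 0  ->  h close to 1
x_neg = np.array([1.0, 0.5, 0.5])    # theta^T x = -2 << 0  ->  h close to 0

print(h(theta, x_pos))  # ~0.993, so we would predict y = 1
print(h(theta, x_neg))  # ~0.119, so we would predict y = 0
```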
Abstract:
In 2001, Banko and Brill ran an experiment on distinguishing easily confused words, such as (to, two, too). For example: "For breakfast I ate two eggs." They tried several algorithms: Perceptron (logistic regression), Winnow, memory-based learning, and Naïve Bayes. … Read more
posted @ 2018-11-01 11:38 by qkloveslife
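A rough sketch of that kind of algorithm comparison using scikit-learn stand-ins. Everything here is an assumption for illustration: a synthetic dataset replaces the confusable-words corpus, Winnow is omitted (scikit-learn has no implementation), and k-nearest neighbors stands in for the memory-based learner:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression, Perceptron
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Toy stand-in for the confusable-words data.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "Perceptron": Perceptron(),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Memory-based (k-NN)": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
}
# The point of the original experiment: accuracy tends to rise with more data
# for *all* of the algorithms, often more than it differs between them.
for size in (100, 1000, len(X_tr)):
    for name, model in models.items():
        model.fit(X_tr[:size], y_tr[:size])
        print(size, name, round(model.score(X_te, y_te), 3))
```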
Abstract:
In the cancer-detection example, y = 1 means the patient has cancer (the label 1 is assigned to the rarer class). Precision/Recall: \[\text{Precision} = \frac{\text{true positives}}{\text{\# predicted positive}} = \frac{\text{true positives}}{\text{true positives} + \text{false positives}}\] … Read more
posted @ 2018-11-01 10:07 by qkloveslife
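A small sketch computing precision (and its companion metric, recall) directly from prediction counts; the label vectors are made up for illustration:

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 0, 1, 0, 0])  # 1 = has cancer (rare class)
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 1, 0, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives

precision = tp / (tp + fp)  # of everything we flagged, how much was really cancer
recall    = tp / (tp + fn)  # of all actual cancer cases, how many we caught

print(precision, recall)  # 0.75 0.75
```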
Abstract:
Recommended approach: Start with a simple algorithm that you can implement quickly. Implement it and test it on your cross-validation data. Plot learning curves … Read more
posted @ 2018-11-01 09:12 by qkloveslife
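A sketch of the "implement quickly, then test on cross-validation data" step. The dataset and model are placeholders, not from the post; any quick baseline would do:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)

# A simple 60/20/20 train / cross-validation / test split.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_cv, X_test, y_cv, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Start with a simple algorithm you can implement quickly...
model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# ...and check it on the cross-validation set before investing in anything fancier.
print("CV accuracy:", model.score(X_cv, y_cv))
```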
摘要:
“Small” neural network (fewer parameters; more prone to underfitting) Computationally cheaper "Large" neural network (more parameters; more prone to o 阅读全文
posted @ 2018-11-01 02:07 by qkloveslife
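To make "fewer parameters" vs. "more parameters" concrete, a quick count of the weights (including bias units) in two hypothetical fully connected architectures; the layer sizes are illustrative assumptions, not from the post:

```python
def n_params(layer_sizes):
    """Weights + biases in a fully connected network with the given layer sizes."""
    return sum((a + 1) * b for a, b in zip(layer_sizes, layer_sizes[1:]))

# Hypothetical architectures for, say, 400 input features and 10 output classes.
small = [400, 25, 10]          # one small hidden layer
large = [400, 100, 100, 10]    # more / wider hidden layers

print(n_params(small))  # 10285
print(n_params(large))  # 51210
```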
Abstract:
Learning curves. The training error and cross-validation error are defined as follows: \[J_{train}(\theta) = \frac{1}{2m_{train}} \sum_{i=1}^{m_{train}} \left( h_\theta\left(x^{(i)}\right) - y^{(i)} \right)^2\] \[J_{cv}(\theta) = \frac{1}{2m_{cv}} \sum_{i=1}^{m_{cv}} \left( h_\theta\left(x_{cv}^{(i)}\right) - y_{cv}^{(i)} \right)^2\] … Read more
posted @ 2018-11-01 01:52 by qkloveslife
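A sketch of computing these two errors as the training-set size m grows, using linear regression fit by least squares on synthetic data (the data is made up; the cost function matches J above):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.c_[np.ones(200), rng.uniform(0, 10, 200)]   # bias column + one feature
y = 2.0 + 1.5 * X[:, 1] + rng.normal(0, 1.0, 200)  # noisy linear target

X_train, y_train = X[:120], y[:120]
X_cv, y_cv = X[120:], y[120:]

def cost(theta, X, y):
    """Squared-error cost J(theta) = (1/2m) * sum((h(x) - y)^2)."""
    return np.sum((X @ theta - y) ** 2) / (2 * len(y))

for m in range(5, len(X_train) + 1, 20):
    Xm, ym = X_train[:m], y_train[:m]
    theta = np.linalg.lstsq(Xm, ym, rcond=None)[0]  # fit on first m examples
    print(m, cost(theta, Xm, ym), cost(theta, X_cv, y_cv))
# Typically J_train rises and J_cv falls as m grows, converging for a good fit.
```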
