Post Category - Deep Learning

Abstract:
## [VGG](http://www.robots.ox.ac.uk/~vgg/publications/)
- [Andrea Vedaldi](http://www.robots.ox.ac.uk/~vedaldi/)
## [Berkeley](http://bvlc.eecs.berkeley...
posted @ 2015-03-30 19:36 n0p
Abstract:
- vanishing gradient problem
- multi-dimensional LSTM
posted @ 2015-03-30 17:14 n0p
Abstract: Ref:
posted @ 2015-03-19 17:16 n0p
Abstract:
- Optimizing `cross entropy` loss is not quite the same as optimizing classification accuracy, although the two are correlated (see the sketch after this entry).
- It's not necessarily true tha...
posted @ 2015-03-18 00:20 n0p
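The cross-entropy-versus-accuracy point above can be seen in a minimal sketch. The labels and model outputs below are my own hypothetical illustration (NumPy assumed), not data from the post: model B classifies more samples correctly than model A, yet its cross-entropy is worse.

```python
import numpy as np

def cross_entropy(p, y):
    # mean negative log-probability assigned to the true class
    return -np.mean(np.log(p[np.arange(len(y)), y]))

def accuracy(p, y):
    # fraction of samples whose argmax prediction matches the label
    return np.mean(p.argmax(axis=1) == y)

y = np.array([0, 0, 0])  # hypothetical labels: every sample is class 0

# Model A: barely wrong on every sample -> 0% accuracy, but modest loss.
p_a = np.array([[0.49, 0.51]] * 3)

# Model B: confidently right twice, confidently wrong once
# -> 67% accuracy, yet a higher (worse) cross-entropy than A.
p_b = np.array([[0.99, 0.01], [0.99, 0.01], [0.01, 0.99]])

print(accuracy(p_a, y), cross_entropy(p_a, y))  # 0.00, ~0.71
print(accuracy(p_b, y), cross_entropy(p_b, y))  # 0.67, ~1.54
```

So a gradient step that lowers the loss can leave accuracy unchanged or worse, which is exactly why the two objectives are correlated but not identical.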
Abstract: The tutorial from Andrew Ng's group is remarkably clear and accessible; today I finally understood the autoencoder. In essence, what an autoencoder does is dimensionality reduction. What struck me most is the use of the KL divergence (\ref{kl}) as a constraint to achieve sparsity, which effectively treats $\rho$ and $\hat{\rho}$ as distributions. The statistical thinking comes through nicely; it plays much the same role as the L1 norm, and I suspect its analytical properties are even better! (A sketch of this penalty follows the entry.)
\begin{equation}\label{kl}
\mathbf{KL}(\rho \,\|\, \hat\rho_j) = \rho \log \frac{\rho}{\hat\rho_j} + (1-\rho) \log \frac{1-\rho}{1-\hat\rho_j}
\end{equation}
posted @ 2012-11-17 03:29 n0p
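As a follow-up to the sparsity discussion above, here is a minimal NumPy sketch of the KL-divergence penalty \eqref{kl} from the sparse-autoencoder tutorial; the function name and the toy activation matrices are my own illustration, not code from the tutorial.

```python
import numpy as np

def kl_sparsity_penalty(activations, rho=0.05):
    """Sum over hidden units of KL(rho || rho_hat_j).

    activations: (n_samples, n_hidden) sigmoid activations of the hidden layer.
    rho: target average activation (the sparsity parameter).
    """
    rho_hat = activations.mean(axis=0)  # \hat{\rho}_j: average activation of unit j
    kl = (rho * np.log(rho / rho_hat)
          + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return kl.sum()

# Hidden units whose average activation matches rho incur no penalty...
quiet = np.full((100, 4), 0.05)
print(kl_sparsity_penalty(quiet))  # 0.0

# ...while units that fire often are penalized heavily.
busy = np.full((100, 4), 0.9)
print(kl_sparsity_penalty(busy))   # ~7.98
```

Adding this term to the reconstruction loss pushes each unit's average activation $\hat\rho_j$ toward the target $\rho$, which is how the divergence between the two "distributions" enforces sparsity.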