Paper Notes - Sequence to Sequence Learning with Neural Networks

The overall idea is the same as the RNN encoder-decoder; the difference is that it is implemented with LSTMs.

The paper makes three important points (all three are illustrated together in the sketch after this list):

1) The encoder and decoder LSTMs are two separate models with their own parameters.

2) A deep LSTM performs better than a shallow one; the paper uses a 4-layer LSTM.

3) In practice, the authors found that reversing the input sentence before training gives better results. So for example, instead of mapping the sentence a,b,c to the sentence α,β,γ, the LSTM is asked to map c,b,a to α,β,γ, where α, β, γ is the translation of a, b, c. This way, a is in close proximity to α, b is fairly close to β, and so on, a fact that makes it easy for SGD to “establish communication” between the input and the output.
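Below is a minimal PyTorch sketch of the three points above. All module names, variable names, and dimensions here are illustrative assumptions, not taken from the paper's code: two separate 4-layer LSTMs for encoder and decoder, with the source sequence reversed along the time axis before encoding.

```python
import torch
import torch.nn as nn

# Assumed toy sizes, chosen only for illustration.
VOCAB_SRC, VOCAB_TGT = 10000, 10000
EMB, HID, LAYERS = 256, 512, 4        # 4 stacked layers, per point 2

src_embed = nn.Embedding(VOCAB_SRC, EMB)
tgt_embed = nn.Embedding(VOCAB_TGT, EMB)

# Point 1: two distinct LSTMs, one for the encoder and one for the decoder.
encoder = nn.LSTM(EMB, HID, num_layers=LAYERS, batch_first=True)
decoder = nn.LSTM(EMB, HID, num_layers=LAYERS, batch_first=True)
proj = nn.Linear(HID, VOCAB_TGT)      # decoder states -> target-vocab logits

src = torch.randint(0, VOCAB_SRC, (2, 7))   # (batch, src_len) token ids
tgt = torch.randint(0, VOCAB_TGT, (2, 5))   # (batch, tgt_len) token ids

# Point 3: reverse the source along the time axis, so "a b c" is fed as "c b a".
src_rev = torch.flip(src, dims=[1])

# Encode; the final (hidden, cell) states of all 4 layers summarize the
# source sentence and are used to initialize the decoder.
_, (h, c) = encoder(src_embed(src_rev))
dec_out, _ = decoder(tgt_embed(tgt), (h, c))
logits = proj(dec_out)                # (batch, tgt_len, VOCAB_TGT)
print(logits.shape)                   # torch.Size([2, 5, 10000])
```

Since the decoder sees the source only through the encoder's final hidden and cell states, reversing the input shortens the distance between the start of the source and the start of the target, which is exactly the “establish communication” effect quoted above.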

posted @ 2017-12-23 16:37  Akane